Strategic and Adaptive Behaviours in Trust Systems
Gunes, Taha Dogan (2021) Strategic and Adaptive Behaviours in Trust Systems. University of Southampton, Doctoral Thesis, 129pp.
Record type: Thesis (Doctoral)
Abstract
Intelligent systems are having a significant impact on our daily lives in many ways. These systems can help guide human decisions, act on our behalf, and cooperate within mixed-initiative teams. This interdependency between humans and AI systems inherently carries risk about the outcomes of actions. To mitigate this risk, the concepts of trust and reputation have received significant attention in multi-agent systems (MASs). Numerous techniques have been proposed to answer the interrelated questions of how to reliably assess the trustworthiness of autonomous systems, how to make robust decisions under uncertainty, and how to establish trust between agents and between agents and humans. Computational models of trust typically focus on evaluating the trustworthiness of others using direct observations and the opinions of others, in order to select partners for delegation or to form and maintain relationships. Significantly less attention, however, has been given to understanding how these systems can operate reliably under budgetary constraints, or to their vulnerabilities to external attacks. In this thesis, we propose and evaluate a suite of new decision-making strategies for progressively selecting trustworthy partners under budgetary constraints. First, we show how this decision-making problem maps to budget-limited multi-armed bandit problems. We then present new decision-making models that incorporate both direct observations and the opinions of others. Finally, we show how these approaches can minimise the costs of, and risks involved in, interacting with agents of varying and uncertain reliability.
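The thesis text itself is not reproduced in this record, so the following is only a rough Python sketch of the kind of problem the abstract describes: a budget-limited, epsilon-greedy selection loop over Beta-reputation trust estimates that fuse direct observations with discounted third-party opinions. All names, the discounting rule, and the reliability-per-cost heuristic are illustrative assumptions, not the strategies proposed in the thesis.

```python
import random


class PartnerModel:
    """Beta-reputation-style estimate of one partner's reliability,
    fusing direct observations with discounted third-party opinions."""

    def __init__(self):
        self.pos = 0.0  # accumulated positive evidence
        self.neg = 0.0  # accumulated negative evidence

    def observe(self, success: bool) -> None:
        """Record the outcome of a direct interaction."""
        if success:
            self.pos += 1.0
        else:
            self.neg += 1.0

    def add_opinion(self, pos: float, neg: float, reporter_trust: float) -> None:
        """Fold in a third party's evidence, discounted by how much
        we trust the reporter (a value in [0, 1])."""
        self.pos += reporter_trust * pos
        self.neg += reporter_trust * neg

    def expected_reliability(self) -> float:
        """Mean of the Beta(pos + 1, neg + 1) posterior."""
        return (self.pos + 1.0) / (self.pos + self.neg + 2.0)


def spend_budget(models, costs, interact, budget, epsilon=0.1):
    """Epsilon-greedy, budget-limited bandit loop: while budget remains,
    mostly delegate to the partner with the best reliability-per-cost
    ratio, and occasionally explore a random affordable partner."""
    remaining = budget
    while True:
        affordable = [p for p in models if costs[p] <= remaining]
        if not affordable:
            return remaining
        if random.random() < epsilon:
            choice = random.choice(affordable)  # explore
        else:
            choice = max(
                affordable,
                key=lambda p: models[p].expected_reliability() / costs[p],
            )  # exploit
        models[choice].observe(interact(choice))  # delegate and learn
        remaining -= costs[choice]
```

Here `models` maps partner identifiers to `PartnerModel` instances, `costs` gives each partner's per-interaction cost, and `interact` is a hypothetical callback that delegates one task and reports whether it succeeded.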
To better understand the performance and reliability of such algorithms, we propose a novel, generic method to automate the process of identifying vulnerabilities in trust and reputation systems. We do this by mapping the vulnerability analysis problem to an optimisation problem, and show how efficient sampling methods can be used to search the attack space. We devise an attack model and generate attacks that inject false evidence to identify vulnerabilities in existing trust models. In this way, we provide an objective means to assess how robust trust and reputation algorithms are to different kinds of attacks, and to conduct comparative analyses.
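As a sketch of the optimisation framing described above, the toy Python below searches a space of false-evidence injections for the candidate that most shifts a simple Beta-pooling trust model. The naive Monte Carlo sampler stands in for the efficient sampling methods the abstract refers to; the trust model, evidence encoding, and damage objective are all assumptions made for illustration.

```python
import random


def beta_trust(evidence):
    """Toy trust model: expected reliability under pooled (rating, weight)
    reports, where rating is 1 (positive) or 0 (negative)."""
    pos = sum(w for r, w in evidence if r == 1)
    neg = sum(w for r, w in evidence if r == 0)
    return (pos + 1.0) / (pos + neg + 2.0)


def damage(trust_model, honest, injected):
    """Objective: how far the injected false evidence shifts the model's
    estimate away from its value under honest evidence alone."""
    return abs(trust_model(honest + injected) - trust_model(honest))


def search_attacks(trust_model, honest, n_reports, samples=10_000):
    """Naive Monte Carlo search of the attack space: sample candidate
    false-evidence injections and keep the most damaging one."""
    best, best_damage = None, -1.0
    for _ in range(samples):
        candidate = [(random.randint(0, 1), random.random())
                     for _ in range(n_reports)]
        d = damage(trust_model, honest, candidate)
        if d > best_damage:
            best, best_damage = candidate, d
    return best, best_damage


if __name__ == "__main__":
    honest = [(1, 1.0)] * 8 + [(0, 1.0)] * 2  # mostly positive history
    attack, shift = search_attacks(beta_trust, honest, n_reports=5)
    print(f"worst-case estimate shift with 5 false reports: {shift:.3f}")
```

The reported worst-case shift gives a comparable robustness score: running the same search against different trust models indicates, under these toy assumptions, which model is more easily distorted by a fixed budget of false reports.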
Text
2_1_TahaDoganGunes_Thesis_with_Corrections - Version of Record
Restricted to Repository staff only
More information
Published date: 2021
Identifiers
Local EPrints ID: 452406
URI: http://eprints.soton.ac.uk/id/eprint/452406
PURE UUID: 86112a26-4954-4f67-bb23-d5d36d73f95f
Catalogue record
Date deposited: 09 Dec 2021 18:09
Last modified: 17 Mar 2024 03:41
Contributors
Author: Taha Dogan Gunes
Thesis supervisor: Timothy Norman