A Trust and Reputation Model for Agent-Based Virtual Organisations.
University of Southampton, School of Electronics and Computer Science
The aim of this research is to develop a model of trust that endeavours to assure good interactions amongst autonomous software agents in complex, networked environments. In this context, we identify the following key characteristics. Firstly, such environments are open: agents are free to enter and exit the system at will, so an agent cannot be aware of all of its potential interaction partners, and some of those partners may be malicious or colluding. Secondly, the openness and dynamism of these environments mean that agents will need to interact with agents with which they have no past experience; even so, an agent must be able to accurately assess the trustworthiness of another. Thirdly, the distributed and heterogeneous nature of these systems influences any model or application developed for them; in particular, it often requires models and applications to be decentralised. Lastly, many of the interactions between agents in such systems occur in the context of a virtual organisation (VO). Here, VOs are viewed as collections of agents belonging to different organisations, in which each agent has a specific problem-solving capability, and these capabilities, when combined, provide a particular service to meet the requirements of an end user. VOs are social structures, however, and the presence of certain inter-agent relationships may influence the behaviour of certain members. For this reason, it is important not only to consider personal experiences with an individual to determine its behaviour, but also to examine the social relationships it has with other agents. Against this background, we have developed TRAVOS (A Trust and Reputation Model for Agent-Based Virtual Organisations), which focuses, in particular, on providing a measure of the trust an agent should place in an interaction partner.
This measure of trust is calculated from the past experiences between the agent and its interaction partner. When there is no such personal experience, the model substitutes reputation information gathered from other agents in the society or from special reputation-broker agents, and this reputation is gathered in a way that filters out biased or false opinions. In addition, the model is designed to meet the scalability and decentralisation requirements of these environments. Furthermore, by extending TRAVOS we developed a set of mechanisms (TRAVOS-R) for learning and exploiting the social relationships present in VO-rich environments. More specifically, TRAVOS-R presents a novel approach to learning the type of relationship present between two agents, and uses this knowledge to adjust the opinions obtained from one agent about the other. The TRAVOS models have been tested empirically and have significantly outperformed other similar models. Moreover, to further evaluate the applicability of our approach, a realistic system evaluation was also carried out, in which our models were applied in an industrial application of agent-based VOs. In undertaking this research, we have shown that trust is a key component of networked systems and that a computational trust model can be used by agents in large, dynamic, uncertain and open environments to account for the uncertainty inherent in their social decision-making. More specifically, we have shown that, by using personal experience, opinions from others, and knowledge of social relationships, an agent can arrive at a more accurate trust value and, as a consequence, interact more effectively.
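The published TRAVOS model grounds this measure in the beta family of probability distributions over binary interaction outcomes (success or failure). The following minimal Python sketch illustrates that core idea under simplifying assumptions: the class and function names are illustrative, reputation reports are pooled by naively summing outcome counts, and TRAVOS's mechanism for discounting biased or inaccurate opinions is omitted.

```python
from dataclasses import dataclass


@dataclass
class Experience:
    """Observed outcomes of past interactions with one partner."""
    successes: int
    failures: int


def expected_trust(exp: Experience) -> float:
    # Expected value of Beta(successes + 1, failures + 1): the
    # probability that the next interaction will be successful.
    return (exp.successes + 1) / (exp.successes + exp.failures + 2)


def pool_reputation(direct: Experience, reports: list[Experience]) -> Experience:
    # Naive pooling: add the outcome counts reported by other agents
    # to the truster's own counts. (TRAVOS additionally filters and
    # discounts reports from unreliable sources before combining.)
    s = direct.successes + sum(r.successes for r in reports)
    f = direct.failures + sum(r.failures for r in reports)
    return Experience(s, f)


if __name__ == "__main__":
    own = Experience(successes=2, failures=0)
    reports = [Experience(successes=1, failures=1)]
    print(expected_trust(pool_reputation(own, reports)))
```

With no direct experience (`Experience(0, 0)`), the expected trust is 0.5, reflecting maximum uncertainty; as outcome counts accumulate, whether from direct experience or pooled reputation, the estimate converges towards the partner's observed success rate.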