A learning based approach to modelling bilateral adaptive agent negotiations
University of Southampton
Narayanan, Vidya
2008
Narayanan, Vidya (2008) A learning based approach to modelling bilateral adaptive agent negotiations. University of Southampton, Doctoral Thesis.
Record type: Thesis (Doctoral)
Abstract
In large multi-agent systems, individual agents often have conflicting goals, but are dependent on each other for the achievement of these objectives. In such situations, negotiation between the agents is a key means of resolving conflicts and reaching a compromise. Hence it is imperative to develop good automated negotiation techniques to enable effective interactions. However, this problem is made harder by the fact that such environments are invariably dynamic (e.g. the bandwidth available for communications can fluctuate, the availability of computational resources can change, and the time available for negotiations can change). Moreover, these changes can have a direct effect on the negotiation process. Thus an agent has to adapt its negotiation behaviour in response to changes in the environment and in its opponent's behaviour if it is to be effective. Given this, this research has developed negotiation mechanisms that enable an agent to perform effectively in a particular class of negotiation encounters; namely, bilateral negotiation in which a service provider and a service consumer interact to fix the price of the service. In more detail, we use both reinforcement and Bayesian learning methods to derive an optimal agent strategy for bilateral negotiations in dynamic environments with incomplete information. Specifically, an agent models the change in its opponent's behaviour using Markov chains and determines an optimal policy to use in response to changes in the environment. Also using the Markov chain framework, the agent updates its prior knowledge of the opponent by observing successive offers and applying Bayesian inference.
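To illustrate the kind of Bayesian opponent modelling the abstract describes, the sketch below shows a minimal belief update over hypothesised opponent reservation prices from successive offers. The hypothesis grid, Gaussian likelihood, and offer sequence are assumptions made purely for illustration; they are not the thesis's actual formulation, which combines this style of updating with a Markov chain model and reinforcement learning.

```python
# Illustrative sketch only: Bayesian updating of a belief over an opponent's
# reservation price from observed offers. All specifics (hypothesis grid,
# Gaussian likelihood, offer values) are hypothetical.
import numpy as np

# Hypotheses: candidate opponent reservation prices (assumed grid).
hypotheses = np.linspace(50.0, 150.0, 101)
prior = np.full(len(hypotheses), 1.0 / len(hypotheses))  # uniform prior


def likelihood(offer: float, reservation: float, sigma: float = 10.0) -> float:
    """Assumed Gaussian likelihood of observing `offer` given a reservation price."""
    return float(np.exp(-0.5 * ((offer - reservation) / sigma) ** 2))


def bayesian_update(belief: np.ndarray, offer: float) -> np.ndarray:
    """Posterior over hypotheses after observing one opponent offer."""
    posterior = belief * np.array([likelihood(offer, h) for h in hypotheses])
    return posterior / posterior.sum()


# Successive opponent offers (hypothetical data); the belief sharpens as offers arrive.
belief = prior
for offer in [120.0, 112.0, 105.0]:
    belief = bayesian_update(belief, offer)

print("Most likely reservation price:", hypotheses[np.argmax(belief)])
```

In a negotiation setting, an estimate like this could feed into the agent's concession strategy; the thesis itself derives the policy using reinforcement learning over a Markov chain model of the opponent's changing behaviour.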
Text: 1142196.pdf - Version of Record
More information
Published date: 2008
Identifiers
Local EPrints ID: 466461
URI: http://eprints.soton.ac.uk/id/eprint/466461
PURE UUID: e52f4c8d-409e-40b4-8089-8603439f9bb1
Catalogue record
Date deposited: 05 Jul 2022 05:17
Last modified: 16 Mar 2024 20:43
Contributors
Author: Vidya Narayanan