Sequentially optimal repeated coalition formation under uncertainty
Chalkiadakis, Georgios and Boutilier, Craig (2010) Sequentially optimal repeated coalition formation under uncertainty. Autonomous Agents and Multi-Agent Systems.
Abstract
Coalition formation is a central problem in multiagent systems research, but most models assume common knowledge of agent types. In practice, however, agents are often unsure of the types or capabilities of their potential partners, but gain information about these capabilities through repeated interaction. In this paper, we propose a novel Bayesian, model-based reinforcement learning framework for this problem, assuming that coalitions are formed (and tasks undertaken) repeatedly. Our model allows agents to refine their beliefs about the types of others as they interact within a coalition. The model also allows agents to make explicit tradeoffs between exploration (forming “new” coalitions to learn more about the types of new potential partners) and exploitation (relying on partners about which more is known), using value of information to define optimal exploration policies. Our framework effectively integrates decision making during repeated coalition formation under type uncertainty with Bayesian reinforcement learning techniques. Specifically, we present several learning algorithms to approximate the optimal Bayesian solution to the repeated coalition formation and type-learning problem, providing tractable means to ensure good sequential performance. We evaluate our algorithms in a variety of settings, showing that one method in particular exhibits consistently good performance in practice. We also demonstrate the ability of our model to facilitate knowledge transfer across different dynamic tasks.
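To make the abstract's central ideas concrete, here is a minimal, hypothetical sketch of Bayesian type learning with a one-step value-of-information lookahead. It is not the authors' algorithm: the two-type space, the success probabilities, the single-partner coalitions, and the myopic lookahead are all illustrative assumptions standing in for the paper's full Bayesian reinforcement-learning formulation.

```python
"""Illustrative sketch only: Bayesian beliefs over partner types, refined by
observed task outcomes, with a one-step lookahead that captures value of
information. All names and numbers are hypothetical."""

import random

TYPES = ("strong", "weak")                # hypothetical type space
P_SUCCESS = {"strong": 0.8, "weak": 0.3}  # assumed P(task success | type)
REWARD = 1.0                              # payoff when the task succeeds
DISCOUNT = 0.9

def update_belief(belief, success):
    """Bayes rule: posterior over a partner's type after one task outcome."""
    post = {t: p * (P_SUCCESS[t] if success else 1.0 - P_SUCCESS[t])
            for t, p in belief.items()}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

def expected_value(belief):
    """Expected one-shot coalition value under the current type belief."""
    return REWARD * sum(p * P_SUCCESS[t] for t, p in belief.items())

def lookahead_score(beliefs, i):
    """Immediate expected reward of partnering with i, plus the discounted
    value of the best next-round choice averaged over i's possible outcomes.
    The second term is where exploration pays off: observing i sharpens the
    belief about i, which can improve the next choice."""
    p_succ = sum(p * P_SUCCESS[t] for t, p in beliefs[i].items())
    future = 0.0
    for success, p_obs in ((True, p_succ), (False, 1.0 - p_succ)):
        post = dict(beliefs)
        post[i] = update_belief(beliefs[i], success)
        future += p_obs * max(expected_value(b) for b in post.values())
    return REWARD * p_succ + DISCOUNT * future

if __name__ == "__main__":
    random.seed(0)
    true_types = {"A": "strong", "B": "weak", "C": "strong"}  # hidden from agent
    beliefs = {i: {"strong": 0.5, "weak": 0.5} for i in true_types}
    for step in range(10):
        chosen = max(beliefs, key=lambda i: lookahead_score(beliefs, i))
        success = random.random() < P_SUCCESS[true_types[chosen]]
        beliefs[chosen] = update_belief(beliefs[chosen], success)
        print(step, chosen, success,
              {i: round(b["strong"], 2) for i, b in beliefs.items()})
```

In this sketch, a partner of uncertain type can score higher than a known mediocre one because the lookahead credits the information the interaction yields; as beliefs sharpen, the rule shifts from exploring new partners to exploiting the apparently strongest one, which is the tradeoff the abstract describes.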
Text: cbSeqOptCFuncertJAAMAS.pdf (Version of Record)
More information
Published date: 5 November 2010
Organisations: Agents, Interactions & Complexity
Identifiers
Local EPrints ID: 271679
URI: http://eprints.soton.ac.uk/id/eprint/271679
PURE UUID: 4e2bd668-ec43-4e46-bb31-6f40ca43a656
Catalogue record
Date deposited: 07 Nov 2010 16:57
Last modified: 14 Mar 2024 09:37
Contributors
Author: Georgios Chalkiadakis
Author: Craig Boutilier