Sequentially optimal repeated coalition formation under uncertainty

Chalkiadakis, Georgios and Boutilier, Craig (2010) Sequentially optimal repeated coalition formation under uncertainty. Journal of Autonomous Agents and Multi-Agent Systems.


Coalition formation is a central problem in multiagent systems research, but most models assume common knowledge of agent types. In practice, however, agents are often unsure of the types or capabilities of their potential partners, but gain information about these capabilities through repeated interaction. In this paper, we propose a novel Bayesian, model-based reinforcement learning framework for this problem, assuming that coalitions are formed (and tasks undertaken) repeatedly. Our model allows agents to refine their beliefs about the types of others as they interact within a coalition. The model also allows agents to make explicit tradeoffs between exploration (forming “new” coalitions to learn more about the types of new potential partners) and exploitation (relying on partners about which more is known), using value of information to define optimal exploration policies. Our framework effectively integrates decision making during repeated coalition formation under type uncertainty with Bayesian reinforcement learning techniques. Specifically, we present several learning algorithms to approximate the optimal Bayesian solution to the repeated coalition formation and type-learning problem, providing tractable means to ensure good sequential performance. We evaluate our algorithms in a variety of settings, showing that one method in particular exhibits consistently good performance in practice. We also demonstrate the ability of our model to facilitate knowledge transfer across different dynamic tasks.
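The abstract describes agents refining beliefs about partners' types through repeated interaction. As an illustration only (not the paper's actual algorithm), the core belief-refinement step can be sketched as a Bayesian posterior update over a partner's discrete type, given observed coalition task outcomes. All names here (`TYPES`, `SUCCESS_PROB`, `update_belief`) and the two-type success model are assumptions for the sketch:

```python
# Hypothetical two-type model: probability that a coalition task
# succeeds given the partner's (unknown) type. These numbers are
# illustrative, not taken from the paper.
TYPES = ["weak", "strong"]
SUCCESS_PROB = {"weak": 0.3, "strong": 0.8}

def update_belief(belief, outcome):
    """Bayesian update of the belief over partner types after
    observing one task outcome (True = success, False = failure)."""
    posterior = {}
    for t in TYPES:
        likelihood = SUCCESS_PROB[t] if outcome else 1.0 - SUCCESS_PROB[t]
        posterior[t] = belief[t] * likelihood
    total = sum(posterior.values())
    return {t: p / total for t, p in posterior.items()}

# Start from a uniform prior and observe a run of outcomes from
# repeated coalitions with the same partner.
belief = {"weak": 0.5, "strong": 0.5}
for outcome in [True, True, False, True]:
    belief = update_belief(belief, outcome)
```

After mostly successful interactions, the posterior mass shifts toward the "strong" type; the paper's framework builds on updates of this kind to trade off exploration against exploitation via value of information.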

Item Type: Article
Organisations: Agents, Interactions & Complexity
ePrint ID: 271679
Date: 5 November 2010 (Published)
Date Deposited: 07 Nov 2010 16:57
Last Modified: 17 Apr 2017 18:08
Further Information: Google Scholar
