Teacy, W.T.L., Chalkiadakis, G., Farinelli, A., Rogers, A., Jennings, N.R., McClean, S. and Parr, G. (2012) Decentralized Bayesian reinforcement learning for online agent collaboration. In: 11th International Conference on Autonomous Agents and Multiagent Systems, Spain, 4-8 June 2012.
Solving complex but structured problems in a decentralised manner via multiagent collaboration has received much attention in recent years. This is natural: on the one hand, multiagent systems usually possess a structure that determines the allowable interactions among the agents; on the other hand, the single most pressing need in a cooperative multiagent system is to coordinate the local policies of autonomous agents with restricted capabilities towards a system-wide goal. The presence of uncertainty makes this even more challenging, as the agents must additionally learn the unknown environment parameters while forming (and following) local policies in an online fashion. In this paper, we provide the first Bayesian reinforcement learning (BRL) approach for distributed coordination and learning in a cooperative multiagent system, devising two solutions to this type of problem. More specifically, we show how the Value of Perfect Information (VPI) can be used to perform efficient decentralised exploration in both model-based and model-free BRL, and in the latter case we provide a closed-form solution for VPI, correcting a decade-old result by Dearden, Friedman and Russell. To evaluate these solutions, we present experimental results comparing their relative merits, and demonstrate empirically that both solutions outperform an existing multiagent learning method representative of the state of the art.
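For intuition, the VPI-based exploration the abstract refers to can be sketched as follows. This is an illustrative Monte Carlo approximation under an assumed independent Gaussian posterior over each action's expected return; it is not the paper's closed-form expression (which the authors derive, correcting Dearden, Friedman and Russell), and the function name and Gaussian assumption are ours:

```python
import numpy as np

def vpi_monte_carlo(means, stds, n_samples=100_000, rng=None):
    """Monte Carlo estimate of the Value of Perfect Information (VPI)
    per action, assuming an independent Gaussian posterior N(mean, std)
    over each action's expected return. Illustrative sketch only."""
    rng = rng or np.random.default_rng(0)
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    best = int(np.argmax(means))                 # current best action a1
    second = np.partition(means, -2)[-2]         # second-best posterior mean
    vpi = np.zeros_like(means)
    for a in range(len(means)):
        x = rng.normal(means[a], stds[a], n_samples)  # sample true value of a
        if a == best:
            # Gain only if a1 turns out worse than the second-best action
            gain = np.maximum(second - x, 0.0)
        else:
            # Gain only if a turns out better than the current best action
            gain = np.maximum(x - means[best], 0.0)
        vpi[a] = gain.mean()
    return vpi
```

An agent would then explore by picking the action maximising expected return plus VPI, so that actions with informative (uncertain) posteriors are tried even when their mean looks inferior.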
Conference or Workshop Item
Keywords: multiagent learning, Bayesian techniques, uncertainty
Research group: Agents, Interactions & Complexity
Publication date: 4 June 2012 (Published)