Planning Against Fictitious Players in Repeated Normal Form Games
Munoz de Cote, Enrique and Jennings, Nick (2010) Planning Against Fictitious Players in Repeated Normal Form Games. 9th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS2010), Toronto, Canada, 14-18 May 2010, pp. 1073-1080.
Record type: Conference or Workshop Item (Other)
Abstract
Planning how to interact with bounded memory and unbounded memory learning opponents requires different treatments. Thus far, however, work in this area has shown how to design plans against bounded memory learning opponents, but no work has dealt with the unbounded memory case. This paper tackles this gap. In particular, we frame this as a planning problem using the framework of repeated matrix games, where the planner's objective is to compute the best exploiting sequence of actions against a learning opponent. The particular class of opponent we study uses a fictitious play process to update her beliefs, but the analysis generalizes to many forms of Bayesian learning agents. Our analysis is inspired by Banerjee and Peng's AIM framework, which works for planning and learning against bounded memory opponents (e.g., an adaptive player). Building on this, we show how an unbounded memory opponent (specifically, a fictitious player) can also be modelled as a finite MDP, and we present a new, efficient algorithm that exploits the opponent by computing, in polynomial time, a sequence of play that obtains a higher average reward than that obtained by playing a game-theoretic (Nash or correlated) equilibrium.
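To make the setting concrete, below is a minimal sketch (in Python) of the kind of opponent the abstract describes: a fictitious player who tracks the empirical frequency of the planner's past actions and best-responds to it. The payoff matrices, the Chicken-style game, and the brute-force search over short plans are illustrative assumptions, not the paper's construction; the paper's contribution is a finite MDP model of the fictitious player that finds exploiting sequences in polynomial time rather than by enumeration.

```python
import numpy as np
from itertools import product

# Illustrative payoffs for a Chicken-style game (an assumption, not from the paper).
# R[i, j]: planner's (row) payoff; C[i, j]: opponent's (column) payoff.
R = np.array([[3.0, 1.0],
              [4.0, 0.0]])
C = np.array([[3.0, 4.0],
              [1.0, 0.0]])

def fictitious_play_response(counts, C):
    """Opponent's best response to her beliefs, i.e. the empirical
    frequencies of the planner's past actions."""
    beliefs = counts / counts.sum()
    expected = beliefs @ C          # expected payoff of each opponent action
    return int(np.argmax(expected))

def average_reward(plan, R, C):
    """Planner's average reward when a fixed action sequence `plan`
    is played against a fictitious-play opponent."""
    counts = np.ones(R.shape[0])    # uniform prior over planner actions
    total = 0.0
    for a in plan:
        b = fictitious_play_response(counts, C)
        total += R[a, b]
        counts[a] += 1.0            # the opponent updates her beliefs
    return total / len(plan)

# Brute-force search over short plans, purely to illustrate exploitation;
# the paper instead computes an exploiting sequence in polynomial time.
best = max(product(range(2), repeat=8), key=lambda p: average_reward(p, R, C))
print(best, average_reward(best, R, C))
```

In this toy game, repeatedly playing the aggressive action drives the opponent's beliefs toward it, so she best-responds by yielding: the planner secures an average reward of 4, against 2 under the stage game's mixed Nash equilibrium.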
Text: AAMAS10-cameraReady.pdf - Version of Record
More information
Published date: 2010
Venue - Dates:
9th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS2010), Toronto, Canada, 2010-05-14 - 2010-05-18
Organisations:
Agents, Interactions & Complexity
Identifiers
Local EPrints ID: 268481
URI: http://eprints.soton.ac.uk/id/eprint/268481
PURE UUID: d4ff7535-bc00-4aff-897a-13183de76b9a
Catalogue record
Date deposited: 08 Feb 2010 18:16
Last modified: 14 Mar 2024 09:10
Contributors
Author: Enrique Munoz de Cote
Author: Nick Jennings