Planning Against Fictitious Players in Repeated Normal Form Games
Munoz de Cote, Enrique and Jennings, Nick (2010) Planning Against Fictitious Players in Repeated Normal Form Games. At 9th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2010), Toronto, Canada, pp. 1073-1080.
Planning how to interact against bounded memory and unbounded memory learning opponents requires different treatments. Thus far, however, work in this area has shown how to design plans against bounded memory learning opponents, but no work has dealt with the unbounded memory case. This paper tackles this gap. In particular, we frame this as a planning problem using the framework of repeated matrix games, where the planner's objective is to compute the best exploiting sequence of actions against a learning opponent. The particular class of opponent we study uses a fictitious play process to update her beliefs, but the analysis generalizes to many forms of Bayesian learning agents. Our analysis is inspired by Banerjee and Peng's AIM framework, which works for planning and learning against bounded memory opponents (e.g. an adaptive player). Building on this, we show how an unbounded memory opponent (specifically a fictitious player) can also be modelled as a finite MDP, and we present a new efficient algorithm that computes, in polynomial time, a sequence of play that exploits the opponent and obtains a higher average reward than playing a game theoretic (Nash or correlated) equilibrium.
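The fictitious play process the abstract refers to can be sketched as follows. This is an illustrative simulation only, not the paper's MDP construction or planning algorithm: the 2x2 payoff matrices, function names, and tie-breaking rule are all assumptions chosen to show how a planner's action sequence can "teach" a fictitious player's beliefs and then exploit them.

```python
# Illustrative sketch (assumed example game, not from the paper): a
# fictitious player keeps empirical counts of the planner's past actions
# and best-responds to the resulting belief each round.

# Payoff matrices indexed [row_action][col_action]; the planner is the
# row player, the fictitious learner is the column player.
ROW_PAYOFF = [[2, 0],
              [3, 1]]
COL_PAYOFF = [[1, 0],   # the column player wants to match the row action
              [0, 1]]

def fictitious_best_response(counts):
    """Column player's best response to the empirical mix of row actions."""
    total = sum(counts) or 1
    belief = [c / total for c in counts]          # empirical distribution
    # Expected payoff of each column action against the current belief;
    # ties break toward the lower-indexed action (an assumption).
    expected = [sum(belief[r] * COL_PAYOFF[r][c] for r in range(2))
                for c in range(2)]
    return max(range(2), key=expected.__getitem__)

def play(plan):
    """Average reward of a planner's action sequence vs. a fictitious player."""
    counts = [0, 0]                               # opponent's belief state
    reward = 0
    for a in plan:
        b = fictitious_best_response(counts)      # opponent best-responds
        reward += ROW_PAYOFF[a][b]
        counts[a] += 1                            # opponent updates belief
    return reward / len(plan)
```

In this toy game, repeatedly playing action 0 keeps the opponent best-responding with action 0 (2 per round), and a final deviation to action 1 earns 3 against the now-entrenched belief, so the "teach then deviate" sequence `[0, 0, 0, 0, 1]` outperforms playing `0` forever. The paper's contribution is to find such exploiting sequences in polynomial time by modelling the belief dynamics as a finite MDP.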
|Item Type:||Conference or Workshop Item (Speech)|
|Divisions:||Faculty of Physical and Applied Science > Electronics and Computer Science > Agents, Interactions & Complexity|
|Date Deposited:||08 Feb 2010 18:16|
|Last Modified:||01 Mar 2012 15:31|
|Contributors:||Munoz de Cote, Enrique (Author); Jennings, Nick (Author)|
|Contact Email Address:||firstname.lastname@example.org|