On Similarities between Inference in Game Theory and Machine Learning
Rezek, I., Leslie, D., Reece, S., Roberts, S., Rogers, A., Dash, R. and Jennings, N. (2008) On Similarities between Inference in Game Theory and Machine Learning. Journal of Artificial Intelligence Research, 33, 259-283.
Abstract
In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in so doing is to establish an equivalent vocabulary between the two domains so as to facilitate developments at the intersection of both fields; as proof of the usefulness of this approach, we use recent developments in each field to make improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation. That is, by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play), we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. Furthermore, we consider the converse case and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. We first show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule and show that this results in fictitious variational play, an improved mean field variational learning algorithm that exhibits better convergence in highly or strongly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
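The mechanics sketched in the abstract can be made concrete. The first sketch below is a minimal Python illustration, not the authors' implementation: it assumes a logit (smooth) best response, a Dirichlet belief over the opponent's mixed strategy, simple Monte Carlo averaging to perform the integration, and an illustrative stag-hunt payoff matrix; none of these specifics are taken from the paper. It contrasts responding to the empirical point estimate of past play (standard fictitious play) with integrating the smooth best response over the posterior (the moderated variant).

import numpy as np

rng = np.random.default_rng(0)

# Illustrative stag-hunt payoffs (invented for this demo): action 0 = Stag,
# action 1 = Hare. (Stag, Stag) is payoff-dominant; (Hare, Hare) risk-dominant.
A = np.array([[5.0, 0.0],
              [4.0, 2.0]])
TAU = 0.5  # temperature of the assumed logit smooth best response

def logit_br(p_opp):
    # Smooth best response to a belief p_opp over the opponent's actions.
    u = A @ p_opp
    e = np.exp((u - u.max()) / TAU)
    return e / e.sum()

def standard_fp(counts):
    # Standard fictitious play: respond to the empirical average of past play.
    return logit_br(counts / counts.sum())

def moderated_fp(counts, n_samples=200):
    # Moderated variant: integrate the smooth best response over a Dirichlet
    # posterior on the opponent's strategy (here by Monte Carlo) instead of
    # plugging in the point estimate.
    samples = rng.dirichlet(counts, size=n_samples)
    return np.mean([logit_br(p) for p in samples], axis=0)

def simulate(strategy, n_rounds=500):
    counts = [np.ones(2), np.ones(2)]  # Dirichlet(1, 1) prior for each player
    for _ in range(n_rounds):
        acts = [rng.choice(2, p=strategy(counts[i])) for i in (0, 1)]
        for i in (0, 1):
            counts[i][acts[1 - i]] += 1.0  # update belief about the opponent
    return [c / c.sum() for c in counts]

print("standard FP  beliefs:", simulate(standard_fp))
print("moderated FP beliefs:", simulate(moderated_fp))

In the same spirit, the Cournot-versus-fictitious-play analogy for mean field updates can be sketched on a toy pairwise binary model (again an assumption-laden illustration, reusing the numpy import above; the coupling matrix and biases are invented for the demo, not the paper's construction). The plain synchronous update responds only to the latest iterate and oscillates under strong coupling, whereas averaging the updates over time, in the manner of fictitious play, converges to a mean field fixed point.

# Mean field updates for a toy pairwise binary model, x_i in {-1, +1}, whose
# fixed points satisfy m_i = tanh(b_i + sum_j W_ij * m_j), m_i = E_q[x_i].
W = np.array([[0.0, 2.0],
              [2.0, 0.0]])  # strong coupling; plain synchronous updates oscillate
b = np.array([0.1, -0.1])

def mf_update(m):
    return np.tanh(b + W @ m)

m_cournot = np.array([0.9, -0.2])
m_fp = np.array([0.9, -0.2])
for t in range(1, 200):
    m_cournot = mf_update(m_cournot)              # Cournot: respond to latest iterate only
    m_fp += (mf_update(m_fp) - m_fp) / (t + 1.0)  # fictitious-play-style averaging

print("Cournot adjustment         :", m_cournot)  # still oscillating
print("fictitious variational play:", m_fp)       # converging to a mean field fixed point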
Text: jair08.pdf - Accepted Manuscript
More information
Published date: 2008
Organisations: Agents, Interactions & Complexity
Identifiers
Local EPrints ID: 266713
URI: http://eprints.soton.ac.uk/id/eprint/266713
PURE UUID: fa8ab344-9fd6-449e-ac0d-531a1c494bb5
Catalogue record
Date deposited: 25 Sep 2008 07:51
Last modified: 14 Mar 2024 08:33
Contributors
Author: I Rezek
Author: D. Leslie
Author: S Reece
Author: S Roberts
Author: Alex Rogers
Author: Rajdeep Dash
Author: Nick Jennings