University of Southampton Institutional Repository

Learning strict Nash equilibria through reinforcement

Ianni, Antonella
35024f65-34cd-4e20-9b2a-554600d739f3

Ianni, Antonella (2010) Learning strict Nash equilibria through reinforcement. Southampton, UK: University of Southampton, 26pp.

Record type: Monograph (Working Paper)

Abstract

This paper studies the analytical properties of the reinforcement learning model proposed in Erev and Roth (1998), also termed cumulative reinforcement learning in Laslier et al. (2001). This stochastic model of learning in games accounts for two main elements: the law of effect (positive reinforcement of actions that perform well) and the law of practice (the magnitude of the reinforcement effect decreases with players' experience).
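
As a concrete illustration, here is a minimal simulation sketch of the basic Erev and Roth (1998) updating rule described above. It assumes strictly positive payoffs so that propensities remain positive; the game, initial propensities and round count are illustrative choices, not taken from the paper.

import random

def erev_roth_play(payoff, q1, q2, rounds=10000):
    # payoff[a][b] = (u1, u2): payoffs when row plays a and column plays b.
    # q1, q2: strictly positive initial propensities, one per action.
    for _ in range(rounds):
        # Law of effect: each action is chosen with probability
        # proportional to its accumulated propensity.
        a = random.choices(range(len(q1)), weights=q1)[0]
        b = random.choices(range(len(q2)), weights=q2)[0]
        u1, u2 = payoff[a][b]
        # Cumulative reinforcement: add the realized payoff. The law of
        # practice emerges because the propensity sums grow over time,
        # so each new increment shifts the choice probabilities less.
        q1[a] += u1
        q2[b] += u2
    # Return the resulting mixed strategies (normalized propensities).
    return [q / sum(q1) for q in q1], [q / sum(q2) for q in q2]

# A 2x2 coordination game with strict Nash equilibria at (0,0) and (1,1).
game = [[(2.0, 2.0), (0.1, 0.1)], [(0.1, 0.1), (1.0, 1.0)]]
x, y = erev_roth_play(game, q1=[1.0, 1.0], q2=[1.0, 1.0])
print(x, y)  # choice probabilities typically concentrate near one strict equilibrium

Run repeatedly, the simulation illustrates the behaviour the abstract describes: play started near a strict equilibrium tends to lock in on it.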

The main results of the paper show that, if the solution trajectories of the underlying replicator equation converge exponentially fast, then, with probability arbitrarily close to one, all the realizations of the reinforcement learning process lie within an ε-band of that solution. As the property of exponential convergence is shown to hold in the proximity of any strict Nash equilibrium, the paper improves upon results currently available in the literature by showing that, whenever a strict Nash equilibrium exists, a reinforcement learning process started sufficiently close to it will reach it with probability one.
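
For reference, the replicator equation mentioned above can be written, in conventional notation (the paper's own symbols may differ), as

    \dot{x}_i^a = x_i^a \left[ u_i(a, x_{-i}) - u_i(x_i, x_{-i}) \right],

where x_i^a is the probability that player i assigns to action a and u_i is player i's expected payoff. At a strict Nash equilibrium each player's equilibrium action is the unique best reply, so near the equilibrium the bracketed term is strictly negative for every other action; this is what drives the exponential convergence the result exploits.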

Text
Learning_Strict_Nash_Equilibrium_Throught_Reinforcement_by_A_Ianni.pdf - Author's Original (648kB)

More information

Published date: April 2010

Identifiers

Local EPrints ID: 156897
URI: http://eprints.soton.ac.uk/id/eprint/156897
PURE UUID: d5ecbe8b-8307-4529-87ad-1d4584e8eedc
ORCID for Antonella Ianni: orcid.org/0000-0002-5003-4482

Catalogue record

Date deposited: 02 Jun 2010 15:40
Last modified: 14 Mar 2024 02:39


