University of Southampton Institutional Repository

Using reinforcement learning to coordinate better

Excelente-Toledo, C.B. and Jennings, N. R. (2005) Using reinforcement learning to coordinate better. Computational Intelligence, 21 (3), 217-245.

Record type: Article

Abstract

This paper examines the potential and impact of introducing learning capabilities into autonomous agents that decide at run-time which mechanism to exploit in order to coordinate their activities. Specifically, our motivating hypothesis is that, to deal with dynamic and unpredictable environments, agents need to learn the right situations in which to attempt coordination and the right coordination method to use in those situations. In particular, the efficacy of learning is evaluated when agents have varying types and amounts of information at the time those coordination decisions are taken. This hypothesis is evaluated empirically in a grid-world scenario in which (a) an agent's predictions about the other agents in the environment are approximately correct and (b) an agent cannot correctly predict the others' behaviour. The results show when, where, and why learning is effective for selecting a coordination mechanism.
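The abstract describes agents that use reinforcement learning to decide, per situation, whether to attempt coordination and which coordination mechanism to apply. The sketch below is a minimal illustration of that idea, assuming a standard tabular Q-learning formulation; the mechanism labels, reward signal, environment interface, and hyperparameters are hypothetical and are not taken from the paper's implementation.

```python
# Illustrative sketch only: a tabular Q-learner whose action set is the choice
# of coordination mechanism. Mechanism names and the environment API below are
# assumptions for illustration, not the authors' implementation.
import random
from collections import defaultdict

MECHANISMS = ["do_not_coordinate", "simple_commitment", "negotiate"]  # hypothetical labels

class MechanismSelector:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # Q[(situation, mechanism)] -> estimated value
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # discount factor
        self.epsilon = epsilon       # exploration rate

    def choose(self, situation):
        """Epsilon-greedy choice of a coordination mechanism for this situation."""
        if random.random() < self.epsilon:
            return random.choice(MECHANISMS)
        return max(MECHANISMS, key=lambda m: self.q[(situation, m)])

    def update(self, situation, mechanism, reward, next_situation):
        """Standard Q-learning update from the observed coordination outcome."""
        best_next = max(self.q[(next_situation, m)] for m in MECHANISMS)
        key = (situation, mechanism)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])

# Example use in a grid-world style loop (environment interface is assumed):
# selector = MechanismSelector()
# mech = selector.choose(situation)
# reward, next_situation = environment.step(mech)   # hypothetical API
# selector.update(situation, mech, reward, next_situation)
```

In this framing, the "situation" key would encode whatever information the agent has available when the coordination decision is taken, which is exactly the quantity the paper varies when evaluating how much learning helps.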

Text: CI05.pdf - Accepted Manuscript (323kB)
Text: j.1467-8640.2005.00272.x.pdf - Version of Record (274kB)

More information

Published date: 2005
Keywords: Coordination, agent interaction, collaborative agents, reinforcement learning
Organisations: Agents, Interactions & Complexity

Identifiers

Local EPrints ID: 260811
URI: http://eprints.soton.ac.uk/id/eprint/260811
PURE UUID: 77b53b10-0c27-42a4-a112-f6d4b67493f5

Catalogue record

Date deposited: 29 Apr 2005
Last modified: 14 Mar 2024 06:43

Contributors

Author: C.B. Excelente-Toledo
Author: N. R. Jennings



