University of Southampton Institutional Repository

Cooperative Information Sharing to Improve Distributed Learning


Dutta, Partha S., Dasmahapatra, Srinandan, Gunn, Steve R., Jennings, N. R. and Moreau, Luc (2004) Cooperative Information Sharing to Improve Distributed Learning. The AAMAS 2004 Workshop on Learning and Evolution in Agent-Based Systems, New York, 19-24 Jul 2004, pp. 18-23.

Record type: Conference or Workshop Item (Paper)

Abstract

Effective coordination in partially observable multi-agent systems (MAS) requires agent actions to be based on reliable estimates of non-local states. One way of generating such estimates is to allow agents to share state information that is not directly observable. To this end, we propose a novel strategy of delayed distribution of state estimates. Our empirical studies of this mechanism demonstrate that individual reinforcement-learning agents in a simulated network-routing problem achieve a significant improvement in the overall success, robustness, and efficiency of routing compared with the standard Q-routing algorithm.
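
For readers unfamiliar with the baseline named above, the following is a minimal sketch of the standard Q-routing update (the comparison algorithm only, not the paper's delayed information-sharing mechanism); the node names, learning rate, and helper functions are illustrative assumptions rather than anything taken from the paper.

from collections import defaultdict

ALPHA = 0.5  # learning rate (assumed value, not from the paper)

# Q[x][(y, d)]: node x's estimate of the time to deliver a packet to destination d via neighbour y
Q = defaultdict(lambda: defaultdict(float))

def q_routing_update(x, y, d, queue_delay, transmission_delay, neighbours_of_y):
    # After forwarding a packet bound for d to neighbour y, node x learns y's best remaining
    # estimate and moves its own estimate towards (queueing + transmission + remaining) time.
    remaining = min(Q[y][(z, d)] for z in neighbours_of_y) if neighbours_of_y else 0.0
    target = queue_delay + transmission_delay + remaining
    Q[x][(y, d)] += ALPHA * (target - Q[x][(y, d)])

def choose_next_hop(x, d, neighbours_of_x):
    # Greedy routing decision: forward to the neighbour with the lowest estimated delivery time.
    return min(neighbours_of_x, key=lambda y: Q[x][(y, d)])

The paper's contribution, per the abstract, is a strategy for when and how agents distribute such state estimates to one another; the sketch above does not attempt to reproduce that mechanism.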

Text: wshop_dutta.pdf - Accepted Manuscript (739kB)

More information

Published date: 2004
Additional Information: Event Dates: 19-24 July 2004
Venue - Dates: The AAMAS 2004 workshop on Learning and Evolution in Agent-Based Systems, New York, 2004-07-19 - 2004-07-24
Keywords: Q-learning, cooperative communication, cooperative multi-agent resource allocation
Organisations: Web & Internet Science, Agents, Interactions & Complexity, Electronic & Software Systems, Southampton Wireless Group

Identifiers

Local EPrints ID: 259497
URI: http://eprints.soton.ac.uk/id/eprint/259497
PURE UUID: 159183be-f1ea-4e57-90f2-32529fa1128d
ORCID for Luc Moreau: orcid.org/0000-0002-3494-120X

Catalogue record

Date deposited: 28 Jun 2004
Last modified: 14 Mar 2024 06:24


Contributors

Author: Partha S. Dutta
Author: Srinandan Dasmahapatra
Author: Steve R. Gunn
Author: N. R. Jennings
Author: Luc Moreau (ORCID: orcid.org/0000-0002-3494-120X)


