Map-based experience replay: a memory-efficient solution to catastrophic forgetting in reinforcement learning
27 June 2023
Hafez, Muhammad Burhan, Immisch, Tilman, Weber, Tom and Wermter, Stefan (2023) Map-based experience replay: a memory-efficient solution to catastrophic forgetting in reinforcement learning. Frontiers in Neurorobotics, 17. (doi:10.3389/fnbot.2023.1127642).
Abstract
Deep reinforcement learning (RL) agents often suffer from catastrophic forgetting: they forget previously found solutions in parts of the input space when training on new data. Replay memories are a common remedy, decorrelating old and new training samples by shuffling them during replay. However, they naively store state transitions as they arrive, without regard for redundancy. We introduce a novel cognitive-inspired replay memory approach based on the Grow-When-Required (GWR) self-organizing network, which resembles a map-based mental model of the world. Our approach organizes the stored transitions into a concise, environment-model-like network of state nodes and transition edges, merging similar samples to reduce the memory size and increase the pairwise distance among samples, which increases the relevancy of each sample. Overall, our study shows that map-based experience replay allows for significant memory reduction with only small decreases in performance.
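To make the mechanism described in the abstract concrete, below is a minimal, illustrative Python sketch of a GWR-style map-based replay memory. This is not the authors' released code: the class name GWRReplayMemory, all hyperparameter names and defaults, and the edge bookkeeping are assumptions, and the GWR update is simplified (no neighbor updates, edge aging, or node pruning).

import numpy as np

class GWRReplayMemory:
    """Illustrative map-based replay memory built on a Grow-When-Required
    (GWR) network: nodes hold merged state prototypes, edges hold the
    (action, reward, done) of observed transitions between them."""

    def __init__(self, activity_threshold=0.8, habituation_threshold=0.3,
                 learning_rate=0.1, habituation_decay=0.95):
        self.nodes = []          # state prototypes (np.ndarray)
        self.habituation = []    # per-node counters in (0, 1], start at 1
        self.edges = {}          # (i, j) -> latest transition on that edge
        self.a_T = activity_threshold
        self.h_T = habituation_threshold
        self.eps = learning_rate
        self.tau = habituation_decay

    def _winner(self, x):
        # Index of the node whose prototype is closest to x.
        dists = [np.linalg.norm(x - w) for w in self.nodes]
        return int(np.argmin(dists))

    def _absorb(self, x):
        """Return the node index representing x, growing or merging."""
        if not self.nodes:
            self.nodes.append(x.copy())
            self.habituation.append(1.0)
            return 0
        b = self._winner(x)
        activity = np.exp(-np.linalg.norm(x - self.nodes[b]))
        if activity < self.a_T and self.habituation[b] < self.h_T:
            # Novel input and a well-trained winner: grow a new node.
            self.nodes.append(0.5 * (self.nodes[b] + x))
            self.habituation.append(1.0)
            return len(self.nodes) - 1
        # Otherwise merge: move the winner toward the sample.
        self.nodes[b] += self.eps * self.habituation[b] * (x - self.nodes[b])
        self.habituation[b] *= self.tau  # habituate toward 0
        return b

    def insert(self, state, action, reward, next_state, done):
        # Map both endpoints of the transition to nodes, then link them.
        i = self._absorb(np.asarray(state, dtype=float))
        j = self._absorb(np.asarray(next_state, dtype=float))
        self.edges[(i, j)] = (action, reward, done)

    def sample(self, batch_size, rng=np.random):
        # Draw transitions by sampling stored edges uniformly.
        keys = list(self.edges)
        idx = rng.randint(len(keys), size=batch_size)
        batch = []
        for k in idx:
            i, j = keys[k]
            action, reward, done = self.edges[(i, j)]
            batch.append((self.nodes[i], action, reward, self.nodes[j], done))
        return batch

In an RL loop, insert would be called once per environment step and sample once per gradient update; because similar states collapse into shared prototypes, the number of stored nodes tracks the diversity of visited states rather than the number of steps taken.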
Text: fnbot-17-1127642 (Version of Record)
More information
Accepted/In Press date: 28 April 2023
Published date: 27 June 2023
Identifiers
Local EPrints ID: 496188
URI: http://eprints.soton.ac.uk/id/eprint/496188
PURE UUID: d3727130-cbe2-46f8-9371-fbf1782299c6
Catalogue record
Date deposited: 06 Dec 2024 17:34
Last modified: 07 Dec 2024 03:13
Contributors
Author: Muhammad Burhan Hafez
Author: Tilman Immisch
Author: Tom Weber
Author: Stefan Wermter