Improving model-based reinforcement learning with internal state representations through self-supervision
Scholz, Julien, Weber, Cornelius, Hafez, Muhammad Burhan and Wermter, Stefan (2021) Improving model-based reinforcement learning with internal state representations through self-supervision. International Joint Conference on Neural Networks, Shenzhen, China, 18-22 Jul 2021, pp. 1-8. (doi:10.1109/IJCNN52387.2021.9534023).
Record type: Conference or Workshop Item (Paper)
Abstract
Using a model of the environment, reinforcement learning agents can plan their future moves and achieve superhuman performance in board games like Chess, Shogi, and Go, while remaining relatively sample-efficient. As demonstrated by the MuZero algorithm, the environment model can even be learned dynamically, generalizing the agent to many more tasks while at the same time achieving state-of-the-art performance. Notably, MuZero uses internal state representations derived from real environment states for its predictions. In this paper, we bind the model's predicted internal state representation to the environment state via two additional terms: a reconstruction model loss and a simpler consistency loss, both of which work independently and unsupervised, acting as constraints to stabilize the learning process. Our experiments show that this integration of a reconstruction model loss and a simpler consistency loss provides a significant performance increase in OpenAI Gym environments. Our modifications also enable self-supervised pretraining for MuZero, so the algorithm can learn about environment dynamics before a goal is made available.
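The following is a minimal, hypothetical PyTorch sketch of how such auxiliary terms could be attached to a MuZero-style latent rollout. The network names (h for the representation function, g for the dynamics function, decoder for the reconstruction model) and the use of mean-squared error are assumptions for illustration, not the paper's exact formulation.

import torch.nn.functional as F

def auxiliary_losses(h, g, decoder, obs_seq, actions):
    """Hypothetical sketch of the two auxiliary MuZero losses.

    h        -- representation network: observation -> latent state
    g        -- dynamics network: (latent state, action) -> next latent state
    decoder  -- reconstruction network: latent state -> observation
    obs_seq  -- real observations o_0 .. o_K along an unrolled trajectory
    actions  -- actions a_0 .. a_{K-1} taken along that trajectory
    """
    s = h(obs_seq[0])  # initial latent state from the real observation
    recon_loss, consistency_loss = 0.0, 0.0
    for k in range(len(actions)):
        s = g(s, actions[k])          # latent state predicted after action a_k
        target = h(obs_seq[k + 1])    # encoding of the real next observation
        # Consistency: pull the predicted latent toward the real state's encoding.
        consistency_loss += F.mse_loss(s, target.detach())
        # Reconstruction: decode the predicted latent back to observation space.
        recon_loss += F.mse_loss(decoder(s), obs_seq[k + 1])
    return recon_loss, consistency_loss

In training, these two terms would be scaled and added to MuZero's usual policy, value, and reward losses; since neither term requires a reward signal, they can also drive the self-supervised pretraining described in the abstract.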
Text: IJCNN-1552-ScholzHafez-IEEE-copyright-notice_ (Accepted Manuscript). Restricted to Repository staff only.
More information
Published date: 20 September 2021
Venue - Dates: International Joint Conference on Neural Networks, Shenzhen, China, 2021-07-18 - 2021-07-22
Identifiers
Local EPrints ID: 495937
URI: http://eprints.soton.ac.uk/id/eprint/495937
PURE UUID: bd66dd83-978e-47ff-958f-429b07164ed4
Catalogue record
Date deposited: 27 Nov 2024 17:59
Last modified: 28 Nov 2024 03:07
Contributors
Authors: Julien Scholz, Cornelius Weber, Muhammad Burhan Hafez, Stefan Wermter