Curiosity-driven exploration enhances motor skills of continuous actor-critic learner
pp. 39-46
Hafez, Muhammad Burhan; Weber, Cornelius; Wermter, Stefan
5 April 2018
Hafez, Muhammad Burhan, Weber, Cornelius and Wermter, Stefan (2018) Curiosity-driven exploration enhances motor skills of continuous actor-critic learner. Joint IEEE International Conference on Development and Learning and Epigenetic Robotics, Lisbon, Portugal, 18-21 Sep 2017. (doi:10.1109/DEVLRN.2017.8329785)
Record type: Conference or Workshop Item (Paper)
Abstract
Guiding the action selection mechanism of an autonomous agent for learning control behaviors is a crucial issue in reinforcement learning. While classical approaches to reinforcement learning seem to be deeply dependent on external feedback, intrinsically motivated approaches are more natural and follow the principles of infant sensorimotor development. In this work, we investigate the role of incremental learning of predictive models in generating curiosity, an intrinsic motivation, for directing the agent's choice of action and propose a curiosity-driven reinforcement learning algorithm for continuous motor control. Our algorithm builds an internal representation of the state space that handles the computation of curiosity signals using the learned predictive models and extends the Continuous-Actor-Critic-Learning-Automaton to use extrinsic and intrinsic feedback. Evaluation of our algorithm on simple and complex robotic control tasks shows a significant performance gain for the intrinsically motivated goal-reaching agent compared to agents that are only motivated by extrinsic rewards.
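To make the abstract's core idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: a CACLA-style (Continuous Actor-Critic Learning Automaton) agent on an assumed toy 1-D control task, where an incrementally learned forward model supplies a curiosity signal (its prediction error) that is added to the extrinsic reward. All names (`w_actor`, `phi`, `eta`, the task dynamics, and the linear function approximators) are our own simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy task: state s in [-1, 1], action a in [-1, 1];
# extrinsic reward favors moving the state toward the goal at 0.
def step(s, a):
    s_next = np.clip(s + 0.1 * a, -1.0, 1.0)
    return s_next, -abs(s_next)

# Simple polynomial features for the linear approximators.
def phi(s):
    return np.array([1.0, s, s * s])

w_critic = np.zeros(3)   # state-value weights V(s)
w_actor = np.zeros(3)    # deterministic policy weights pi(s)
w_model = np.zeros(4)    # forward model f(s, a) -> predicted next state

alpha, gamma, sigma, eta = 0.1, 0.95, 0.3, 0.5

s = rng.uniform(-1, 1)
for t in range(5000):
    # Gaussian exploration around the actor's output, as in CACLA.
    a = float(np.clip(w_actor @ phi(s) + sigma * rng.normal(), -1, 1))
    s_next, r_ext = step(s, a)

    # Curiosity signal: prediction error of the learned forward model.
    x = np.array([1.0, s, a, s * a])
    pred = w_model @ x
    r_int = (pred - s_next) ** 2           # intrinsic reward = model surprise
    w_model += 0.1 * (s_next - pred) * x   # incrementally improve the model

    r = r_ext + eta * r_int                # combined extrinsic + intrinsic feedback
    delta = r + gamma * (w_critic @ phi(s_next)) - w_critic @ phi(s)
    w_critic += alpha * delta * phi(s)
    # CACLA rule: move the actor toward the taken action only on positive TD error.
    if delta > 0:
        w_actor += alpha * (a - w_actor @ phi(s)) * phi(s)
    s = s_next
```

As the forward model improves, its prediction error (and hence the intrinsic reward) shrinks in well-explored regions, so exploration is naturally pushed toward less-modeled parts of the state space while the extrinsic reward dominates late learning.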
Text: ICDL-EpiRob-2017_HWW_0057 - Accepted Manuscript
More information
Published date: 5 April 2018
Venue - Dates: Joint IEEE International Conference on Development and Learning and Epigenetic Robotics, Lisbon, Portugal, 2017-09-18 - 2017-09-21
Identifiers
Local EPrints ID: 495807
URI: http://eprints.soton.ac.uk/id/eprint/495807
PURE UUID: 6453b72d-6884-436d-856b-4ce5cdcee12e
Catalogue record
Date deposited: 22 Nov 2024 18:06
Last modified: 23 Nov 2024 03:11
Contributors
Author: Muhammad Burhan Hafez
Author: Cornelius Weber
Author: Stefan Wermter