Explicit Explore, Exploit, or Escape (E4): near-optimal safety-constrained reinforcement learning in polynomial time
Bossens, David and Bishop, Nicholas (2022) Explicit Explore, Exploit, or Escape (E4): near-optimal safety-constrained reinforcement learning in polynomial time. Machine Learning. (doi:10.1007/s10994-022-06201-z).
Abstract
In reinforcement learning (RL), an agent must explore an initially unknown environment in order to learn a desired behaviour. When RL agents are deployed in real-world environments, safety is of primary concern. Constrained Markov decision processes (CMDPs) can provide long-term safety constraints; however, the agent may violate the constraints in an effort to explore its environment. This paper proposes a model-based RL algorithm called Explicit Explore, Exploit, or Escape (E4), which extends the Explicit Explore or Exploit (E3) algorithm to a robust CMDP setting. E4 explicitly separates exploitation, exploration, and escape CMDPs, allowing targeted policies for policy improvement across known states, discovery of unknown states, as well as safe return to known states. E4 robustly optimises these policies on the worst-case CMDP from a set of CMDP models consistent with the empirical observations of the deployment environment. Theoretical results show that E4 finds a near-optimal constraint-satisfying policy in polynomial time whilst satisfying safety constraints throughout the learning process. We then discuss E4 as a practical algorithmic framework, including robust-constrained offline optimisation algorithms, the design of uncertainty sets for the transition dynamics of unknown states, and how to further leverage empirical observations and prior knowledge to relax some of the worst-case assumptions underlying the theory.
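The abstract above outlines the three-policy structure of E4. As a rough, non-authoritative sketch of that control loop (the toy environment, the visit-count threshold M_KNOWN, and the planner stub are all illustrative assumptions, not the paper's actual method), the separation of exploitation, exploration, and escape might be organised as follows in Python:

    import random

    M_KNOWN = 3  # hypothetical: visits after which a state counts as "known"

    def solve_worst_case_cmdp(objective, actions):
        # Stub for the robust-constrained planner: in E4 this would optimise
        # the named objective ("exploit", "explore", or "escape") on the
        # worst-case CMDP in an uncertainty set consistent with observations.
        # Here it just returns a uniformly random policy so the sketch runs.
        return lambda state: random.choice(actions)

    class ToyChain:
        """Hypothetical 6-state chain; state 5 pays reward but incurs cost."""
        actions = [-1, +1]

        def reset(self):
            self.s = 0
            return self.s

        def step(self, a):
            self.s = min(5, max(0, self.s + a))
            reward = 1.0 if self.s == 5 else 0.0
            cost = 1.0 if self.s == 5 else 0.0  # safety-cost signal of the CMDP
            return self.s, reward, cost

    def e4(env, episodes=20, horizon=30):
        known, visits = set(), {}
        for _ in range(episodes):
            state = env.reset()
            for _ in range(horizon):
                if state in known:
                    # Known region: follow the exploitation policy, or the
                    # exploration policy while unknown states remain to find.
                    policy = solve_worst_case_cmdp("exploit", env.actions)
                else:
                    # Unknown state: the escape policy steers the agent
                    # safely back to the known region.
                    policy = solve_worst_case_cmdp("escape", env.actions)
                state, reward, cost = env.step(policy(state))
                visits[state] = visits.get(state, 0) + 1
                if visits[state] >= M_KNOWN:
                    known.add(state)
        return known

    print(e4(ToyChain()))  # states promoted to "known" during learning

The point the sketch mirrors is that every policy, including the escape policy used in unknown states, is computed by the same robust-constrained planner on the worst-case model, which is how the abstract's claim of constraint satisfaction throughout learning is obtained.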
Text: s10994-022-06201-z - Version of Record
More information
Accepted/In Press date: 26 May 2022
e-pub ahead of print date: 22 June 2022
Published date: 2022
Additional Information:
Funding Information:
David M. Bossens was supported by the UKRI Trustworthy Autonomous Systems Hub, EP/V00784X/1. Nicholas Bishop was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Partnership grant.
Publisher Copyright:
© 2022, The Author(s).
Keywords:
Constrained Markov decision processes, Model-based reinforcement learning, Robust Markov decision processes, Safe artificial intelligence, Safe exploration
Identifiers
Local EPrints ID: 467920
URI: http://eprints.soton.ac.uk/id/eprint/467920
PURE UUID: 56654c4d-246f-42ea-a544-d7727c64c46a
Catalogue record
Date deposited: 25 Jul 2022 16:53
Last modified: 16 Mar 2024 18:08
Contributors
Author: David Bossens
Author: Nicholas Bishop