University of Southampton Institutional Repository

Explicit Explore, Exploit, or Escape (E4): near-optimal safety-constrained reinforcement learning in polynomial time


Bossens, David and Bishop, Nicholas (2022) Explicit Explore, Exploit, or Escape (E4): near-optimal safety-constrained reinforcement learning in polynomial time. Machine Learning. (doi:10.1007/s10994-022-06201-z).

Record type: Article

Abstract

In reinforcement learning (RL), an agent must explore an initially unknown environment in order to learn a desired behaviour. When RL agents are deployed in real world environments, safety is of primary concern. Constrained Markov decision processes (CMDPs) can provide long-term safety constraints; however, the agent may violate the constraints in an effort to explore its environment. This paper proposes a model-based RL algorithm called Explicit Explore, Exploit, or Escape (E4), which extends the Explicit Explore or Exploit (E3) algorithm to a robust CMDP setting. E4 explicitly separates exploitation, exploration, and escape CMDPs, allowing targeted policies for policy improvement across known states, discovery of unknown states, as well as safe return to known states. E4 robustly optimises these policies on the worst-case CMDP from a set of CMDP models consistent with the empirical observations of the deployment environment. Theoretical results show that E4 finds a near-optimal constraint-satisfying policy in polynomial time whilst satisfying safety constraints throughout the learning process. We then discuss E4 as a practical algorithmic framework, including robust-constrained offline optimisation algorithms, the design of uncertainty sets for the transition dynamics of unknown states, and how to further leverage empirical observations and prior knowledge to relax some of the worst-case assumptions underlying the theory.
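
The abstract describes E4's high-level structure: states are classified as known or unknown, the agent exploits or explores within the known set, and it escapes back to known states otherwise. The sketch below is a minimal, hypothetical Python illustration of that control loop only; ToyEnv, the visit threshold, and the placeholder policy functions are assumptions introduced here, not the paper's algorithm, which instead derives the three policies by robustly optimising separate exploitation, exploration, and escape CMDPs over an uncertainty set of models consistent with the observed data.

```python
# Minimal structural sketch of an explore/exploit/escape loop (illustrative only;
# see the paper for the actual E4 algorithm and its guarantees).
import random

class ToyEnv:
    """Tiny stand-in environment: five states on a line; the action nudges the state."""
    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Move according to the action plus noise, clipped to the state range [0, 4].
        self.state = max(0, min(4, self.state + action + random.choice((-1, 0, 1))))
        reward = 1.0 if self.state == 4 else 0.0   # task reward
        cost = 1.0 if self.state == 3 else 0.0     # constraint (safety) cost
        return self.state, reward, cost

def exploit_or_explore(state, known):
    # Placeholder: in E4 this step plans on the known-state CMDP, choosing either a
    # near-optimal exploitation policy or an exploration policy that steers towards
    # unknown states, both optimised against the worst-case model in the uncertainty set.
    return random.choice((-1, 1))

def escape(state, known):
    # Placeholder: E4's escape policy returns to the known set while bounding the
    # worst-case constraint cost; here we simply step back towards state 0.
    return -1

env = ToyEnv()
counts, known = {}, set()
VISIT_THRESHOLD = 5   # hypothetical number of visits before a state is treated as 'known'

for episode in range(20):
    s = env.reset()
    for t in range(10):
        a = exploit_or_explore(s, known) if s in known else escape(s, known)
        s, r, c = env.step(a)
        counts[s] = counts.get(s, 0) + 1
        if counts[s] >= VISIT_THRESHOLD:
            known.add(s)

print("Known states after the sketch run:", sorted(known))
```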

Text
s10994-022-06201-z - Version of Record
Available under License Creative Commons Attribution.
Download (3MB)

More information

Accepted/In Press date: 26 May 2022
e-pub ahead of print date: 22 June 2022
Published date: 2022
Additional Information: Funding Information: David M. Bossens was supported by the UKRI Trustworthy Autonomous Systems Hub, EP/V00784X/1. Nicholas Bishop was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Partnership grant. Publisher Copyright: © 2022, The Author(s).
Keywords: Constrained Markov decision processes, Model-based reinforcement learning, Robust Markov decision processes, Safe artificial intelligence, Safe exploration

Identifiers

Local EPrints ID: 467920
URI: http://eprints.soton.ac.uk/id/eprint/467920
PURE UUID: 56654c4d-246f-42ea-a544-d7727c64c46a
ORCID for David Bossens: orcid.org/0000-0003-1924-5756
ORCID for Nicholas Bishop: orcid.org/0000-0001-7062-9072

Catalogue record

Date deposited: 25 Jul 2022 16:53
Last modified: 16 Mar 2024 18:08


Contributors

Author: David Bossens
Author: Nicholas Bishop



