Reachability verification based reliability assessment for deep reinforcement learning controlled robotics and autonomous systems
Yi Dong, Xingyu Zhao, Sen Wang and Xiaowei Huang
Abstract
Deep Reinforcement Learning (DRL) has achieved impressive performance in robotics and autonomous systems (RASs). A key impediment to its deployment in real-life operations is spuriously unsafe DRL policies: unexplored states may lead the agent to make wrong decisions that cause hazards, especially in applications where the RAS's end-to-end controller is trained by DRL. In this paper, we propose a novel quantitative reliability assessment framework for DRL-controlled RASs, leveraging verification evidence generated from formal reliability analysis of neural networks. A two-level verification framework is introduced to check the safety property with respect to inaccurate observations caused by, e.g., environmental noise and state changes. At the local level, reachability verification tools are leveraged to generate safety evidence for trajectories, while at the global level we quantify the overall reliability as an aggregated metric of the local safety evidence, weighted according to an operational profile. The effectiveness of the proposed verification framework is demonstrated and validated via experiments on real RASs.
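The global-level aggregation lends itself to a short illustration. Below is a minimal sketch, not the authors' implementation, assuming a discretized state space whose cells carry operational-profile weights and per-cell reachability verdicts; the names `Cell`, `op_weight`, and `verified_safe` are hypothetical, introduced only for this example.

```python
# Minimal sketch (illustrative, not the paper's code) of aggregating local
# reachability-verification verdicts into a global reliability estimate,
# weighted by an operational profile.
from dataclasses import dataclass

@dataclass
class Cell:
    """A verified region of the state space around one visited state."""
    op_weight: float      # operational-profile probability mass of this cell
    verified_safe: bool   # verdict from a local reachability-verification tool

def probability_of_failure(cells: list) -> float:
    """Operationally weighted mass of cells not proven safe locally."""
    total = sum(c.op_weight for c in cells)
    unsafe = sum(c.op_weight for c in cells if not c.verified_safe)
    return unsafe / total if total else 0.0

# Example: three cells; the rarely visited third cell failed verification.
cells = [Cell(0.5, True), Cell(0.3, True), Cell(0.2, False)]
pof = probability_of_failure(cells)
print(f"estimated probability of failure: {pof:.2f}")  # 0.20
print(f"estimated reliability: {1 - pof:.2f}")         # 0.80
```

Treating every cell that lacks a safety proof as a failure makes this estimate conservative: it upper-bounds the operational probability of entering an unverified region.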
Text: 2210.14991v1 (Author's Original)
More information
Submitted date: 26 October 2022
Keywords: cs.RO, cs.AI
Identifiers
Local EPrints ID: 483958
URI: http://eprints.soton.ac.uk/id/eprint/483958
PURE UUID: efc5a05f-159b-4964-b514-bad379ab3f6f
Catalogue record
Date deposited: 07 Nov 2023 18:53
Last modified: 18 Mar 2024 04:17
Contributors
Author: Yi Dong
Author: Xingyu Zhao
Author: Sen Wang
Author: Xiaowei Huang