Reliability assessment and safety arguments for machine learning components in system assurance
Dong, Yi, Huang, Wei, Bharti, Vibhav, Cox, Victoria, Banks, Alec, Wang, Sen, Zhao, Xingyu, Schewe, Sven and Huang, Xiaowei (2023) Reliability assessment and safety arguments for machine learning components in system assurance. ACM Transactions on Embedded Computing Systems, 22 (3), [48]. (doi:10.1145/3570918).
Abstract
The increasing use of Machine Learning (ML) components embedded in autonomous systems, so-called Learning-Enabled Systems (LESs), has created a pressing need to assure their functional safety. As with traditional functional safety, the emerging consensus within both industry and academia is to use assurance cases for this purpose. Typically, assurance cases support claims of reliability in support of safety, and can be viewed as a structured way of organising the arguments and evidence generated from safety analysis and reliability modelling activities. While such assurance activities are traditionally guided by consensus-based standards developed from extensive engineering experience, LESs pose new challenges in safety-critical applications due to the characteristics and design of ML models. In this article, we first present an overall assurance framework for LESs, with an emphasis on quantitative aspects, e.g., breaking down system-level safety targets into component-level requirements and supporting claims stated in reliability metrics. We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers that utilises the operational profile and robustness verification evidence. We discuss the model's assumptions and the inherent challenges of assessing ML reliability that our RAM uncovers, and propose solutions for practical use. Probabilistic safety argument templates at the lower ML component level are also developed based on the RAM. Finally, to evaluate and demonstrate our methods, we not only conduct experiments on synthetic/benchmark datasets but also apply them in case studies on simulated Autonomous Underwater Vehicles and physical Unmanned Ground Vehicles.
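A minimal sketch of the quantitative idea behind such a RAM, assuming the input space is partitioned into m cells (the partition and the notation Op_i, lambda_i below are illustrative assumptions, not taken verbatim from the article): a reliability metric such as the probability of misclassification per input (pmi) can be estimated as

    pmi = Op_1 * lambda_1 + ... + Op_m * lambda_m,    with Op_1 + ... + Op_m = 1,

where Op_i is the operational-profile probability that a random input falls in cell i, and lambda_i is that cell's estimated unreliability, e.g., the probability of misclassification inferred from robustness verification evidence over the cell.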
Text: Reliability Assessment and Safety Arguments for Machine Learning Components in System Assurance - Accepted Manuscript
More information
Accepted/In Press date: 12 October 2022
e-pub ahead of print date: 17 November 2022
Published date: 20 April 2023
Additional Information:
Funding Information:
This work is supported by the UK DSTL (through the project Safety Argument for Learning-enabled Autonomous Underwater Vehicles) and the UK EPSRC (through the projects Offshore Robotics for Certification of Assets [EP/W001136/1] and End-to-End Conceptual Guarding of Neural Architectures [EP/T026995/1]). Xingyu Zhao and Alec Banks’ contribution to the work is partially supported through Fellowships at the Assuring Autonomy International Programme. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 956123.
Keywords:
assurance cases, Learning-Enabled Systems, operational profile, probabilistic claims, Robotics and Autonomous Systems, robustness verification, safe AI, safety arguments, safety regulation, safety-critical systems, Software reliability, statistical testing
Identifiers
Local EPrints ID: 484269
URI: http://eprints.soton.ac.uk/id/eprint/484269
ISSN: 1539-9087
PURE UUID: cce96318-2952-4d03-a6c0-bba40363bf0a
Catalogue record
Date deposited: 13 Nov 2023 18:54
Last modified: 18 Mar 2024 04:17
Contributors
Authors: Yi Dong, Wei Huang, Vibhav Bharti, Victoria Cox, Alec Banks, Sen Wang, Xingyu Zhao, Sven Schewe and Xiaowei Huang