Safety cases for the formal verification of automatically generated code
Basir, Nurlida (2010) Safety cases for the formal verification of automatically generated code. University of Southampton, School of Electronics and Computer Science, Doctoral Thesis, 191pp.
Record type: Thesis (Doctoral)
Abstract
Model-based development and automated code generation are increasingly used for actual production code, in particular in mathematical and engineering domains. However, since code generators are typically not qualified, there is no guarantee that their output is correct or even safe. Formal methods, which rest on mathematical techniques, have been proposed as a means to improve software quality by providing formal safety proofs as explicit evidence for assurance claims. However, the proofs are often complex and may rest on assumptions and reasoning principles that are not justified, which raises concerns about the trustworthiness of the proofs and hence of the assurance claims about the program's safety. This thesis presents an approach to systematically and automatically construct comprehensive safety cases, using the Goal Structuring Notation, from a formal analysis of automatically generated code that is based on automated theorem proving and driven by a set of safety requirements and properties. We also present an approach to systematically derive safety cases that argue along the hierarchical structure of systems in model-based development. This core safety case is extended by separately specified auxiliary information from other verification and validation activities, such as testing. The thesis also presents an approach to develop safety cases that correspond to the formal proofs found by automated theorem provers and that reveal the underlying proof argumentation structure and top-level assumptions. The resulting safety cases make explicit the formal and informal reasoning principles, and reveal the top-level assumptions and external dependencies that must be taken into account in demonstrating software safety. The safety cases can be thought of as a "structured reading guide" for the software and the safety proofs, providing traceable arguments about the assurance they offer.
The approach has been illustrated on code generated using Real-Time Workshop for Guidance, Navigation, and Control (GN&C) systems of NASA's Project Constellation, and on code for deep space attitude estimation generated by the AutoFilter system developed at NASA Ames.
Text: PhDThesis_Nurlida.pdf - Other
More information
Published date: July 2010
Organisations: University of Southampton
Identifiers
Local EPrints ID: 160073
URI: http://eprints.soton.ac.uk/id/eprint/160073
PURE UUID: 35135ffa-232c-4f80-997c-ed4b2a88d67f
Catalogue record
Date deposited: 15 Jul 2010 15:39
Last modified: 14 Mar 2024 01:57
Contributors
Author: Nurlida Basir
Thesis advisor: Bernd Fischer