University of Southampton Institutional Repository

Safety cases for the formal verification of automatically generated code

Basir, Nurlida
bdf108c1-4187-44d5-a792-24ec81feb912
Fischer, Bernd
0c9575e6-d099-47f1-b3a2-2dbc93c53d18

Basir, Nurlida (2010) Safety cases for the formal verification of automatically generated code. University of Southampton, School of Electronics and Computer Science, Doctoral Thesis, 191pp.

Record type: Thesis (Doctoral)

Abstract

Model-based development and automated code generation are increasingly used for actual production code, in particular in mathematical and engineering domains. However, since code generators are typically not qualified, there is no guarantee that their output is correct or even safe. Formal methods, which are based on mathematical techniques, have been proposed as a means to improve software quality by providing formal safety proofs as explicit evidence for assurance claims. However, the proofs are often complex and may also be based on assumptions and reasoning principles that are not justified. This causes concerns about the trustworthiness of the proofs, and hence of the assurance claims on the safety of the program. This thesis presents an approach to systematically and automatically construct comprehensive safety cases using the Goal Structuring Notation from a formal analysis of automatically generated code, based on automated theorem proving, and driven by a set of safety requirements and properties. We also present an approach to systematically derive safety cases that argue along the hierarchical structure of systems in model-based development. This core safety case is extended by separately specified auxiliary information from other verification and validation activities such as testing. The thesis also presents an approach to develop safety cases that correspond to the formal proofs found by automated theorem provers and that reveal the underlying proof argumentation structure and top-level assumptions. The resulting safety cases make explicit the formal and informal reasoning principles, and reveal the top-level assumptions and external dependencies that must be taken into account in demonstrating software safety. The safety cases can be thought of as a "structured reading guide" for the software and the safety proofs, providing traceable arguments for the assurance provided.
The approach has been illustrated on code generated using Real-Time Workshop for Guidance, Navigation, and Control (GN&C) systems of NASA's Project Constellation, and on code for deep space attitude estimation generated by the AutoFilter system developed at NASA Ames.

Text
PhDThesis_Nurlida.pdf - Other
Download (2MB)

More information

Published date: July 2010
Organisations: University of Southampton

Identifiers

Local EPrints ID: 160073
URI: http://eprints.soton.ac.uk/id/eprint/160073
PURE UUID: 35135ffa-232c-4f80-997c-ed4b2a88d67f

Catalogue record

Date deposited: 15 Jul 2010 15:39
Last modified: 29 Jan 2020 14:08


