EnnCore: end-to-end conceptual guarding of neural architectures
Manino, Edoardo, Carvalho, Danilo, Dong, Yi, Rozanova, Julia, Song, Xidan, Mustafa, Mustafa A., Freitas, Andre, Brown, Gavin, Luján, Mikel, Huang, Xiaowei and Cordeiro, Lucas (2022) EnnCore: end-to-end conceptual guarding of neural architectures. In: Pedroza, Gabriel, Hernández-Orallo, José, Chen, Xin Cynthia, Huang, Xiaowei, Espinoza, Huáscar, Castillo-Effen, Mauricio, McDermid, John, Mallah, Richard and Ó hÉigeartaigh, Seán (eds.) Proceedings of the Workshop on Artificial Intelligence Safety 2022 (SafeAI 2022), co-located with the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI 2022). CEUR Workshop Proceedings, vol. 3087. 8 pp.
Record type: Conference or Workshop Item (Paper)
Abstract
The EnnCore project addresses the fundamental security problem of guaranteeing safety, transparency, and robustness in neural-based architectures. Specifically, EnnCore aims to enable system designers to specify essential conceptual/behavioral properties of neural-based systems, verify them, and thus safeguard the system against unpredictable behavior and attacks. In this respect, EnnCore will pioneer the dialogue between contemporary explainable neural models and full-stack neural software verification. This paper describes the limitations of existing studies, our research objectives, current achievements, and future trends towards this goal. In particular, we describe the development and evaluation of new methods, algorithms, and tools for building fully verifiable intelligent systems that are explainable, provably correct, and robust against attacks. We also describe how EnnCore will be validated on two diverse, high-impact application scenarios: securing an AI system for (i) cancer diagnosis and (ii) energy demand response.
Text: paper_9 (Version of Record)
More information
Published date: February 2022
Additional Information:
Funding Information:
The work is funded by EPSRC grant EP/T026995/1, "EnnCore: End-to-End Conceptual Guarding of Neural Architectures", under the "Security for all in an AI enabled society" programme. Prof. Luján is funded by an Arm/RAEng Research Chair award and a Royal Society Wolfson Fellowship.
Venue - Dates:
2022 Workshop on Artificial Intelligence Safety (SafeAI 2022), virtual (online, Canada), 2022-02-28
Identifiers
Local EPrints ID: 484423
URI: http://eprints.soton.ac.uk/id/eprint/484423
ISSN: 1613-0073
PURE UUID: 493dd370-4c4d-4be0-85ab-5c5927656f45
Catalogue record
Date deposited: 16 Nov 2023 11:59
Last modified: 06 Jun 2024 02:20
Contributors
Authors: Edoardo Manino, Danilo Carvalho, Yi Dong, Julia Rozanova, Xidan Song, Mustafa A. Mustafa, Andre Freitas, Gavin Brown, Mikel Luján, Xiaowei Huang, Lucas Cordeiro
Editors: Gabriel Pedroza, José Hernández-Orallo, Xin Cynthia Chen, Xiaowei Huang, Huáscar Espinoza, Mauricio Castillo-Effen, John McDermid, Richard Mallah, Seán Ó hÉigeartaigh