Relative robustness of quantized neural networks against adversarial attacks
Duncan, Kirsty, Komendantskaya, Ekaterina, Stewart, Robert and Lones, Michael (2020) Relative robustness of quantized neural networks against adversarial attacks. In 2020 International Joint Conference on Neural Networks, IJCNN 2020 - Proceedings. IEEE. (doi:10.1109/IJCNN48605.2020.9207596).
Record type: Conference or Workshop Item (Paper)
Abstract
Neural networks are increasingly being moved to edge computing devices and smart sensors to reduce latency and save bandwidth. Neural network compression techniques such as quantization are necessary to fit trained neural networks into these resource-constrained devices. At the same time, their use in safety-critical applications raises the need to verify properties of neural networks. Adversarial perturbations have the potential to be used as an attack mechanism on neural networks, leading to "obviously wrong" misclassifications. SMT solvers have been proposed to formally prove robustness guarantees against such adversarial perturbations. We investigate how well these robustness guarantees are preserved when the precision of a neural network is quantized. We also evaluate how effectively adversarial attacks transfer to quantized neural networks. Our results show that quantized neural networks remain largely robust relative to their full-precision counterparts (98.6%-99.7%), and that the transfer of adversarial attacks decreases to as low as 52.05% as the perturbations become subtler. These results show that quantization introduces resilience against the transfer of adversarial attacks whilst causing negligible loss of robustness.
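The transfer experiment the abstract outlines can be illustrated with a minimal sketch (this is not the authors' code): craft an FGSM-style perturbation against a full-precision model, quantize the model's weights post-training, and check whether the perturbation still fools the quantized copy. The toy linear classifier, the quantize helper, and the eps budget below are all hypothetical stand-ins for the paper's actual networks and its SMT-based verification.

# Hypothetical sketch of attack transfer to quantized models (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def quantize(w, bits=8):
    # Uniform symmetric post-training quantization: round weights onto a
    # grid of 2**bits - 1 levels scaled to the largest weight magnitude.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

d, k = 64, 10
W = rng.normal(size=(d, k))   # toy full-precision classifier
x = rng.normal(size=d)        # one input
clean_label = softmax(x @ W).argmax()

# FGSM on the full-precision model. For a linear softmax classifier the
# input gradient of the cross-entropy loss is W @ (p - onehot(label)).
p = softmax(x @ W)
grad_x = W @ (p - np.eye(k)[clean_label])
eps = 0.1                     # perturbation budget; smaller eps = subtler attack
x_adv = x + eps * np.sign(grad_x)

for bits in (8, 4, 2):
    Wq = quantize(W, bits)
    robust = softmax(x @ Wq).argmax() == clean_label      # relative robustness
    fooled = softmax(x_adv @ Wq).argmax() != clean_label  # attack transfer
    print(f"{bits}-bit: agrees on clean input: {robust}; attack transfers: {fooled}")

Shrinking eps makes the perturbation subtler, which is the regime in which the abstract reports attack transfer dropping to 52.05%.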
This record has no associated files available for download.
More information
Published date: July 2020
Additional Information:
Funding Information:
Supported by the grant "SecCon-NN: Neural Networks with Security Contracts, towards lightweight, modular security for neural networks", funded by the National Cyber Security Centre UK, and by EPSRC grant "Border Patrol: Improving Smart Device Security through Type-Aware Systems Design" (EP/N028201/1).
Publisher Copyright:
© 2020 IEEE.
Venue - Dates:
2020 International Joint Conference on Neural Networks, IJCNN 2020, Virtual, Glasgow, United Kingdom, 2020-07-19 - 2020-07-24
Keywords:
adversarial attack, neural network, verification
Identifiers
Local EPrints ID: 482782
URI: http://eprints.soton.ac.uk/id/eprint/482782
PURE UUID: 131dcfd2-ab20-4fb3-bff1-210e54537131
Catalogue record
Date deposited: 12 Oct 2023 16:43
Last modified: 17 Mar 2024 13:32
Contributors
Author: Kirsty Duncan
Author: Ekaterina Komendantskaya
Author: Robert Stewart
Author: Michael Lones