University of Southampton Institutional Repository

An annealing mechanism for adversarial training acceleration


Ye, Nanyang, Li, Qianxiao, Zhou, Xiao Yun and Zhu, Zhanxing (2023) An annealing mechanism for adversarial training acceleration. IEEE Transactions on Neural Networks and Learning Systems, 34 (2), 882-893. (doi:10.1109/TNNLS.2021.3103528).

Record type: Article

Abstract

Despite their empirical success in various domains, deep neural networks have been shown to be vulnerable to maliciously perturbed input data that can dramatically degrade their performance; such perturbations are known as adversarial attacks. To counter adversarial attacks, adversarial training, formulated as a form of robust optimization, has been demonstrated to be effective. However, adversarial training incurs substantial computational overhead compared with standard training. To reduce this cost, we propose Amata, an annealing mechanism for adversarial training acceleration. Amata is provably convergent, well motivated through the lens of optimal control theory, and can be combined with existing acceleration methods to further enhance performance. On standard datasets, Amata achieves similar or better robustness in roughly one-third to one-half of the computational time required by traditional methods. In addition, Amata can be incorporated into other adversarial training acceleration algorithms (e.g., YOPO, Free, Fast, and ATTA), which leads to a further reduction in computational time on large-scale problems.
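
The abstract describes Amata as an annealing mechanism that keeps the inner adversarial attack cheap early in training and strengthens it later. As a rough illustration only, the minimal PyTorch sketch below anneals the number of PGD steps linearly over epochs; the linear schedule, the `min_steps`/`max_steps` parameters, and the step-size heuristic are illustrative assumptions, not the authors' actual algorithm, whose schedule and convergence analysis are given in the paper.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps, step_size, num_steps):
    """L-infinity PGD: iteratively ascend the loss, projecting back into the eps-ball."""
    x_adv = x.detach().clone()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball around x
            x_adv = x_adv.clamp(0.0, 1.0)             # keep pixels in [0, 1]
    return x_adv.detach()


def annealed_adversarial_training(model, loader, optimizer, epochs,
                                  eps=8 / 255, min_steps=1, max_steps=10):
    """Adversarial training whose inner PGD strength is annealed over epochs."""
    for epoch in range(epochs):
        # Annealing schedule (an assumption): weak, cheap attacks early in
        # training, strong attacks late, reducing total inner-loop computation.
        frac = epoch / max(epochs - 1, 1)
        num_steps = round(min_steps + frac * (max_steps - min_steps))
        step_size = 2.5 * eps / num_steps  # common PGD heuristic, also an assumption
        for x, y in loader:
            x_adv = pgd_attack(model, x, y, eps, step_size, num_steps)
            optimizer.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            optimizer.step()
```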

This record has no associated files available for download.

More information

Published date: 1 February 2023
Additional Information:
Funding Information: The work of Nanyang Ye was supported in part by the National Key Research and Development Program of China under Grant 2017YFB1003000; in part by the National Natural Science Foundation of China under Grant 61672342, Grant 61671478, Grant 61532012, Grant 61822206, Grant 61832013, Grant 61960206002, and Grant 62041205; in part by the National Natural Science Foundation of China (Youth) Grant; in part by the Tencent AI Lab Rhino Bird Focused Research Program under Grant JR202034; in part by the Science and Technology Innovation Program of Shanghai under Grant 18XD1401800 and Grant 18510761200; in part by the Shanghai Key Laboratory of Scalable Computing and Systems; in part by the Explore X Funding; and in part by the SJTU-BIREN Funding. The work of Zhanxing Zhu was supported in part by the Beijing Nova Program under Grant 202072 from the Beijing Municipal Science & Technology Commission, in part by the National Natural Science Foundation of China under Grant 61806009 and Grant 61932001, and in part by the PKU-Baidu Funding under Grant 2019BD005.
Publisher Copyright: © 2022 IEEE.
Keywords: Acceleration, adversarial training, deep neural networks (DNNs)

Identifiers

Local EPrints ID: 485994
URI: http://eprints.soton.ac.uk/id/eprint/485994
ISSN: 2162-237X
PURE UUID: baa2d16e-3af6-46d8-9fcb-fdd13b4749dc

Catalogue record

Date deposited: 05 Jan 2024 17:30
Last modified: 17 Mar 2024 06:41

Contributors

Author: Nanyang Ye
Author: Qianxiao Li
Author: Xiao Yun Zhou
Author: Zhanxing Zhu
