Black-box certification with randomized smoothing: a functional optimization based framework
Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning. However, most existing methods only leverage Gaussian smoothing noise and only work for ℓ2 perturbations. We propose a general framework for adversarial certification with non-Gaussian noise and for more general types of attacks, from a unified functional optimization perspective. Our new framework allows us to identify a key trade-off between accuracy and robustness in the design of smoothing distributions, and to leverage it to design new families of non-Gaussian smoothing distributions that work more efficiently in different ℓp settings, including ℓ1, ℓ2 and ℓ∞ attacks. Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
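The paper generalizes the standard Gaussian randomized-smoothing baseline (Cohen et al., 2019), in which a base classifier is evaluated under Gaussian noise, the majority class is returned, and an ℓ2 radius of σ·Φ⁻¹(p_top) is certified. The sketch below illustrates only that baseline, not the paper's non-Gaussian method; the toy base classifier `f` and all parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def smoothed_predict(f, x, sigma=0.25, n=1000, seed=0):
    """Predict with the Gaussian-smoothed classifier
    g(x) = argmax_c P(f(x + eps) = c), eps ~ N(0, sigma^2 I),
    estimated by Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        c = f(x + rng.normal(0.0, sigma, size=x.shape))
        counts[c] = counts.get(c, 0) + 1
    top = max(counts, key=counts.get)
    p_top = counts[top] / n
    # Certified l2 radius of the Gaussian baseline: sigma * Phi^{-1}(p_top),
    # valid only when the top class has estimated probability > 1/2.
    radius = sigma * norm.ppf(p_top) if p_top > 0.5 else 0.0
    return top, radius

# Hypothetical toy base classifier: thresholds the first coordinate.
f = lambda z: int(z[0] > 0.0)
label, radius = smoothed_predict(f, np.array([0.1, 0.0]))
```

In practice the empirical probability p_top is replaced by a high-confidence lower bound (e.g. a Clopper-Pearson interval) so the certificate holds with high probability; the sketch omits this for brevity.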
Neural Information Processing Systems Foundation
Zhang, Dinghuai
Ye, Mao
Gong, Chengyue
Zhu, Zhanxing
Liu, Qiang
2020
Zhang, Dinghuai, Ye, Mao, Gong, Chengyue, Zhu, Zhanxing and Liu, Qiang
(2020)
Black-box certification with randomized smoothing: a functional optimization based framework.
Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F. and Lin, H.
(eds.)
In Advances in Neural Information Processing Systems 33.
Neural Information Processing Systems Foundation.
Record type: Conference or Workshop Item (Paper)
This record has no associated files available for download.
More information
Published date: 2020
Venue - Dates:
Thirty-fourth Conference on Neural Information Processing Systems, virtual, 2020-12-06 - 2020-12-12
Identifiers
Local EPrints ID: 486057
URI: http://eprints.soton.ac.uk/id/eprint/486057
PURE UUID: 9577bfd0-90f5-4f7f-8d0d-370f5a011d45
Catalogue record
Date deposited: 08 Jan 2024 17:35
Last modified: 17 Mar 2024 06:43
Contributors
Author: Dinghuai Zhang
Author: Mao Ye
Author: Chengyue Gong
Author: Zhanxing Zhu
Author: Qiang Liu
Editor: H. Larochelle
Editor: M. Ranzato
Editor: R. Hadsell
Editor: M.F. Balcan
Editor: H. Lin