CaltechAUTHORS
  A Caltech Library Service

Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds

Huang, Yujia and Zhang, Huan and Shi, Yuanyuan and Kolter, J. Zico and Anandkumar, Anima (2021) Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds. In: Advances in Neural Information Processing Systems 34 (NeurIPS 2021). Advances in Neural Information Processing Systems, pp. 22745-22757. ISBN 9781713845393. https://resolver.caltech.edu/CaltechAUTHORS:20220714-224653496

PDF (Published Version), 581kB. See Usage Policy.
PDF (Supplemental Material), 908kB. See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20220714-224653496

Abstract

Certified robustness is a desirable property for deep neural networks in safety-critical applications, and popular training algorithms can certify robustness of a neural network by computing a global bound on its Lipschitz constant. However, such a bound is often loose: it tends to over-regularize the neural network and degrade its natural accuracy. A tighter Lipschitz bound may provide a better tradeoff between natural and certified accuracy, but is generally hard to compute exactly due to non-convexity of the network. In this work, we propose an efficient and trainable local Lipschitz upper bound by considering the interactions between activation functions (e.g., ReLU) and weight matrices. Specifically, when computing the induced norm of a weight matrix, we eliminate the corresponding rows and columns where the activation function is guaranteed to be constant in the neighborhood of each given data point, which provides a provably tighter bound than the global Lipschitz constant of the neural network. Our method can be used as a plug-in module to tighten the Lipschitz bound in many certifiable training algorithms. Furthermore, we propose to clip activation functions (e.g., ReLU and MaxMin) with a learnable upper threshold and a sparsity loss to help the network achieve an even tighter local Lipschitz bound. Experimentally, we show that our method consistently outperforms state-of-the-art methods in both clean and certified accuracy on MNIST, CIFAR-10, and TinyImageNet with various network architectures.
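The following is a minimal NumPy sketch (not the authors' implementation; see the linked code repository for that) of the core idea described in the abstract: propagate bounds on each ReLU's pre-activation over a small ball around an input, then drop the rows and columns of each weight matrix where the ReLU output is provably constant before multiplying induced norms. The function names (interval_bounds, local_lipschitz_bound), the use of naive interval bound propagation, and the L-infinity input ball are illustrative assumptions, not the paper's exact construction.

import numpy as np

def interval_bounds(weights, biases, x, eps):
    """Naive interval bound propagation of the box [x - eps, x + eps];
    returns per-layer (lower, upper) pre-activation bounds."""
    lo, hi = x - eps, x + eps
    bounds = []
    for W, b in zip(weights, biases):
        center, radius = (lo + hi) / 2, (hi - lo) / 2
        pre_lo = W @ center - np.abs(W) @ radius + b
        pre_hi = W @ center + np.abs(W) @ radius + b
        bounds.append((pre_lo, pre_hi))
        lo, hi = np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)  # ReLU output bounds
    return bounds

def local_lipschitz_bound(weights, biases, x, eps):
    """Product of spectral norms, with rows/columns removed wherever the ReLU
    output is provably constant (pre-activation upper bound <= 0) on the ball."""
    bounds = interval_bounds(weights, biases, x, eps)
    active_in = np.ones(weights[0].shape[1], dtype=bool)  # all input coordinates vary
    bound = 1.0
    for idx, W in enumerate(weights):
        if idx < len(weights) - 1:
            pre_hi = bounds[idx][1]
            active_out = pre_hi > 0          # only these ReLUs can be non-constant locally
        else:
            active_out = np.ones(W.shape[0], dtype=bool)  # last layer has no ReLU
        W_reduced = W[np.ix_(active_out, active_in)]
        if W_reduced.size == 0:              # network is constant on the whole ball
            return 0.0
        bound *= np.linalg.norm(W_reduced, ord=2)  # largest singular value
        active_in = active_out
    return bound

For comparison, the corresponding global bound is the product of the full spectral norms; because each reduced matrix is a submatrix of the original, the local bound above can never exceed it, which is the sense in which the abstract calls it "provably tighter".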


Item Type: Book Section
Related URLs:
https://proceedings.neurips.cc/paper/2021/hash/c055dcc749c2632fd4dd806301f05ba6-Abstract.html (Publisher: Article)
https://doi.org/10.48550/arXiv.2111.01395 (arXiv: Discussion Paper)
https://proceedings.neurips.cc/paper/2021/file/c055dcc749c2632fd4dd806301f05ba6-Supplemental.pdf (Publisher: Supporting Information)
https://github.com/yjhuangcd/local-lipschitz (Related Item: Code)
ORCID:
Huang, Yujia: 0000-0001-7667-8342
Zhang, Huan: 0000-0002-1096-4255
Shi, Yuanyuan: 0000-0002-6182-7664
Anandkumar, Anima: 0000-0002-6974-6797
Additional Information: Y. Huang is supported by DARPA LwLL grants. A. Anandkumar is supported in part by a Bren endowed chair, DARPA LwLL grants, Microsoft, Google, and Adobe faculty fellowships, and a De Logi grant. Huan Zhang is supported by funding from the Bosch Center for Artificial Intelligence.
Funders (Funding Agency: Grant Number):
Defense Advanced Research Projects Agency (DARPA): UNSPECIFIED
Learning with Less Labels (LwLL): UNSPECIFIED
Bren Professor of Computing and Mathematical Sciences: UNSPECIFIED
Microsoft Faculty Fellowship: UNSPECIFIED
Google Faculty Research Award: UNSPECIFIED
Adobe: UNSPECIFIED
Caltech De Logi Fund: UNSPECIFIED
Bosch Center for Artificial Intelligence: UNSPECIFIED
DOI: 10.48550/arXiv.2111.01395
Record Number: CaltechAUTHORS:20220714-224653496
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20220714-224653496
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 115606
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 15 Jul 2022 23:17
Last Modified: 02 Jun 2023 01:32
