CaltechAUTHORS: A Caltech Library Service

Measuring the Robustness of Neural Networks via Minimal Adversarial Examples

Dathathri, Sumanth and Zheng, Stephan and Gao, Sicun and Murray, Richard M. (2017) Measuring the Robustness of Neural Networks via Minimal Adversarial Examples. In: Deep Learning: Bridging Theory and Practice, NIPS 2017 workshop, 9 December 2017, Long Beach, CA. (Unpublished)

This is the latest version of this item.




Neural networks are highly sensitive to adversarial examples, which cause large output deviations with only small input perturbations. However, little is known quantitatively about the distribution and prevalence of such adversarial examples. To address this issue, we propose a rigorous search method that provably finds the smallest possible adversarial example. The key benefit of our method is that it gives precise quantitative insight into the distribution of adversarial examples, and it guarantees the absence of adversarial examples when none are found. The primary idea is to consider the nonlinearity exhibited by the network in a small region of the input space, and to search exhaustively for adversarial examples in that region. We show that the frequency of adversarial examples and the robustness of neural networks can be up to twice as large as reported in previous works that rely on empirical adversarial attacks. In addition, we provide an approach for approximating the nonlinear behavior of neural networks, which makes our search method computationally feasible.
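To illustrate the core idea described in the abstract (this is a toy sketch, not the authors' actual algorithm), one can exhaustively check a small, discretized region of the input space for label changes and then shrink that region to locate the minimal adversarial radius. The model, function names, and all numeric values below are invented for illustration; a two-class linear model stands in for a real network so the exhaustive search stays cheap:

```python
import numpy as np

# Hypothetical toy "network": a fixed linear 2-class model, logits = W @ x.
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def predict(x):
    """Predicted class label (argmax over logits)."""
    return int(np.argmax(W @ x))

def has_adversarial(x, eps, steps=21):
    """Exhaustively search a discretized l-infinity ball of radius eps
    around x for any point whose predicted label differs from x's."""
    label = predict(x)
    grid = np.linspace(-eps, eps, steps)
    for dx in grid:
        for dy in grid:
            if predict(x + np.array([dx, dy])) != label:
                return True
    return False

def minimal_adversarial_radius(x, lo=0.0, hi=1.0, tol=1e-3):
    """Binary-search for the smallest radius at which the exhaustive
    search finds an adversarial example. Returns None if no adversarial
    example exists within radius hi (a robustness certificate for the
    discretized search)."""
    if not has_adversarial(x, hi):
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if has_adversarial(x, mid):
            hi = mid
        else:
            lo = mid
    return hi

x = np.array([0.3, 0.1])       # predicted class 0, since W @ x = (0.2, -0.2)
r = minimal_adversarial_radius(x)
```

For this linear toy model the label flips exactly when the perturbation satisfies dy - dx > 0.2, so the minimal l-infinity radius is 0.1, which the search recovers to within `tol`. A real network is nonlinear, which is why the paper's method must account for local nonlinearity before such an exhaustive search becomes sound.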

Item Type: Conference or Workshop Item (Paper)
Related URLs: schedule
ORCID: Murray, Richard M. 0000-0002-5785-7481
Subject Keywords: Machine Learning, Formal Methods, Adversarial Examples, Robustness
Record Number: CaltechAUTHORS:20171128-230807299
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 83561
Deposited By: Sumanth Dathathri
Deposited On: 30 Nov 2017 17:47
Last Modified: 03 Oct 2019 19:07

Available Versions of this Item

  • Measuring the Robustness of Neural Networks via Minimal Adversarial Examples. (deposited 30 Nov 2017 17:47) [Currently Displayed]
