Improve Robustness of Deep Neural Networks by Coding

Huang, Kunping and Raviv, Netanel and Jain, Siddharth and Upadhyaya, Pulakesh and Bruck, Jehoshua and Siegel, Paul H. and Jiang, Anxiao (Andrew) (2020) Improve Robustness of Deep Neural Networks by Coding. In: 2020 Information Theory and Applications Workshop (ITA). IEEE , Piscataway, NJ, pp. 1-7. ISBN 9781728141909.

Full text is not posted in this repository. Consult Related URLs below.

Deep neural networks (DNNs) typically have many weights. When errors appear in their weights, which are usually stored in non-volatile memories, their performance can degrade significantly. We review two recently presented approaches that improve the robustness of DNNs in complementary ways. In the first approach, we use error-correcting codes as external redundancy to protect the weights from errors. A deep reinforcement learning algorithm is used to optimize the redundancy-performance tradeoff. In the second approach, internal redundancy is added to neurons via coding. It enables neurons to perform robust inference in noisy environments.
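The first approach described in the abstract can be illustrated with a toy sketch: store each weight bit under a simple repetition code and decode by majority vote, which drives the effective bit-error rate from p down to roughly 3p² for a 3-fold code. This is an illustrative stand-in only — the paper optimizes the redundancy-performance tradeoff with deep reinforcement learning, not a fixed repetition code, and the function names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def protect(bits, r=3):
    """Encode each stored bit with an r-fold repetition code (illustrative ECC)."""
    return np.repeat(bits, r)

def recover(coded, r=3):
    """Decode by majority vote over each group of r noisy copies."""
    groups = coded.reshape(-1, r)
    return (groups.sum(axis=1) > r // 2).astype(np.uint8)

def flip(bits, p, rng):
    """Model the noisy non-volatile memory: flip each bit independently w.p. p."""
    noise = rng.random(bits.shape) < p
    return bits ^ noise

# Toy stand-in for the bit representation of a DNN's stored weights.
weights = rng.integers(0, 2, size=10_000, dtype=np.uint8)

p = 0.05  # raw bit-error rate of the memory
unprotected = flip(weights, p, rng)
protected = recover(flip(protect(weights), p, rng))

print("unprotected error rate:", (unprotected != weights).mean())
print("protected error rate:  ", (protected != weights).mean())
```

With the fixed seed above, the protected error rate comes out well below the raw rate, at the cost of 3x storage — exactly the kind of redundancy-versus-performance tradeoff the paper's learning algorithm is designed to navigate.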

Item Type: Book Section
Related URLs:
ORCID:
Raviv, Netanel: 0000-0002-1686-1994
Jain, Siddharth: 0000-0002-9164-6119
Upadhyaya, Pulakesh: 0000-0003-1054-1380
Bruck, Jehoshua: 0000-0001-8474-0812
Siegel, Paul H.: 0000-0002-2539-4646
Jiang, Anxiao (Andrew): 0000-0002-0120-7930
Additional Information: © 2020 IEEE.
Record Number: CaltechAUTHORS:20201209-153308085
Persistent URL:
Official Citation: K. Huang et al., "Improve Robustness of Deep Neural Networks by Coding," 2020 Information Theory and Applications Workshop (ITA), San Diego, CA, USA, 2020, pp. 1-7, doi: 10.1109/ITA50056.2020.9244998.
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 106993
Deposited By: George Porter
Deposited On: 10 Dec 2020 15:43
Last Modified: 16 Nov 2021 18:58
