CaltechAUTHORS
  A Caltech Library Service

AugMax: Adversarial Composition of Random Augmentations for Robust Training

Wang, Haotao and Xiao, Chaowei and Kossaifi, Jean and Yu, Zhiding and Anandkumar, Anima and Wang, Zhangyang (2021) AugMax: Adversarial Composition of Random Augmentations for Robust Training. In: Advances in Neural Information Processing Systems 34, pp. 237-250. https://resolver.caltech.edu/CaltechAUTHORS:20220714-224700672

PDF (Published Version), 1 MB. See Usage Policy.
PDF (Supplemental Material), 162 kB. See Usage Policy.
PDF (ArXiv discussion paper, Submitted Version), 1 MB. See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20220714-224700672

Abstract

Data augmentation is a simple yet effective way to improve the robustness of deep neural networks (DNNs). Diversity and hardness are two complementary dimensions of data augmentation for achieving robustness. For example, AugMix explores random compositions of a diverse set of augmentations to broaden coverage, while adversarial training generates adversarially hard samples to expose model weaknesses. Motivated by this, we propose a data augmentation framework, termed AugMax, to unify the two aspects of diversity and hardness. AugMax first randomly samples multiple augmentation operators and then learns an adversarial mixture of the selected operators. As a stronger form of data augmentation, AugMax leads to a significantly more heterogeneous input distribution, which makes model training more challenging. To address this, we further design a disentangled normalization module, termed DuBIN (Dual-Batch-and-Instance Normalization), that disentangles the instance-wise feature heterogeneity arising from AugMax. Experiments show that AugMax-DuBIN leads to significantly improved out-of-distribution robustness, outperforming prior art by 3.03%, 3.49%, 1.82% and 0.71% on CIFAR10-C, CIFAR100-C, Tiny ImageNet-C and ImageNet-C, respectively. Code and pretrained models are available at https://github.com/VITA-Group/AugMax.
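The core step described in the abstract (randomly sample several augmentation operators, then learn an adversarial mixture of them) can be sketched as follows. This is a toy NumPy illustration, not the authors' implementation: the augmentation operators, the loss function, and the finite-difference gradient ascent are all stand-in assumptions. The paper instead backpropagates the training loss through the target network, and additionally learns an adversarial interpolation weight between the mixture and the clean image.

```python
import numpy as np

# Toy stand-ins for AugMix-style augmentation operators (illustrative only).
def brighten(x): return np.clip(x + 0.2, 0.0, 1.0)
def darken(x):   return np.clip(x - 0.2, 0.0, 1.0)
def blur(x):     return 0.5 * x + 0.5 * x.mean()
def invert(x):   return 1.0 - x

OPS = [brighten, darken, blur, invert]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def augmax_sample(x, loss_fn, k=3, steps=10, lr=0.5, eps=1e-4, rng=None):
    """Randomly sample k operators, then adversarially learn their mixture.

    loss_fn maps an image to a scalar training loss. The mixing logits are
    updated by gradient ascent (finite-difference gradients here; the paper
    backpropagates through the model) so the mixture maximizes the loss,
    i.e. becomes a hard training sample.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    idx = rng.choice(len(OPS), size=k, replace=False)   # diversity: random ops
    views = np.stack([OPS[i](x) for i in idx])          # shape (k, *x.shape)
    logits = np.zeros(k)

    def mix(z):
        # Convex combination of the augmented views, weighted by softmax(z).
        return np.tensordot(softmax(z), views, axes=1)

    for _ in range(steps):                              # hardness: ascend loss
        base = loss_fn(mix(logits))
        grad = np.empty(k)
        for i in range(k):
            z = logits.copy()
            z[i] += eps
            grad[i] = (loss_fn(mix(z)) - base) / eps
        cand = logits + lr * grad
        if loss_fn(mix(cand)) >= base:                  # keep only improving steps
            logits = cand
    return mix(logits)

# Tiny demo: a loss that penalizes distance from a reference image, so the
# adversarial mixture drifts toward the views farthest from that reference.
x = np.linspace(0.0, 1.0, 16).reshape(4, 4)
reference = 1.0 - x
hard = augmax_sample(x, lambda img: float(((img - reference) ** 2).mean()))
```

In the full method, the model is then trained on such worst-case mixtures alongside clean samples; DuBIN routes the two kinds of inputs through separate batch-normalization branches, with instance normalization absorbing the per-sample feature heterogeneity.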


Item Type: Book Section
Related URLs:
Article (Publisher): https://proceedings.neurips.cc/paper/2021/hash/01e9565cecc4e989123f9620c1d09c09-Abstract.html
Discussion Paper (arXiv): https://doi.org/10.48550/arXiv.2110.13771
Supporting Information (Publisher): https://proceedings.neurips.cc/paper/2021/file/01e9565cecc4e989123f9620c1d09c09-Supplemental.pdf
Codes and pretrained models (Related Item): https://github.com/VITA-Group/AugMax
ORCID:
Kossaifi, Jean: 0000-0002-4445-3429
Anandkumar, Anima: 0000-0002-6974-6797
Wang, Zhangyang: 0000-0002-2050-5693
Additional Information: Work partially done during an internship at NVIDIA. Z.W. is supported by the U.S. Army Research Laboratory Cooperative Research Agreement W911NF17-2-0196 (IOBT REIGN), and an NVIDIA Applied Research Accelerator Program.
Funders:
Army Research Office (ARO): W911NF17-2-0196
NVIDIA Corporation: UNSPECIFIED
Record Number: CaltechAUTHORS:20220714-224700672
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20220714-224700672
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 115608
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 15 Jul 2022 23:06
Last Modified: 15 Jul 2022 23:06
