Chen, Wuyang and Yu, Zhiding and De Mello, Shalini and Liu, Sifei and Alvarez, Jose M. and Wang, Zhangyang and Anandkumar, Animashree (2021) Contrastive Syn-to-Real Generalization. . (Unpublished) https://resolver.caltech.edu/CaltechAUTHORS:20210510-133253602
PDF - Submitted Version (3MB). See Usage Policy.
Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20210510-133253602
Abstract
Training on synthetic data can be beneficial for label or data-scarce scenarios. However, synthetically trained models often suffer from poor generalization in real domains due to domain gaps. In this work, we make a key observation that the diversity of the learned feature embeddings plays an important role in the generalization performance. To this end, we propose contrastive synthetic-to-real generalization (CSG), a novel framework that leverages the pre-trained ImageNet knowledge to prevent overfitting to the synthetic domain, while promoting the diversity of feature embeddings as an inductive bias to improve generalization. In addition, we enhance the proposed CSG framework with attentional pooling (A-pool) to let the model focus on semantically important regions and further improve its generalization. We demonstrate the effectiveness of CSG on various synthetic training tasks, exhibiting state-of-the-art performance on zero-shot domain generalization.
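The abstract's core idea can be illustrated with a minimal, hedged sketch: a contrastive (InfoNCE-style) loss that pulls the synthetically trained model's embedding of an image toward a frozen ImageNet-pretrained model's embedding of the same image, while pushing it away from other images in the batch, with an attention-weighted pooling standing in for A-pool. This is an illustrative reconstruction, not the paper's exact implementation; the function names, the use of the global average feature as the attention query, and the temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def attentional_pool(feat):
    # feat: (B, C, H, W) convolutional feature map.
    # Sketch of A-pool: weight each spatial location by its similarity
    # to the global-average feature, so semantically important regions
    # dominate the pooled embedding. (The paper's exact formulation
    # may differ; this is an illustrative assumption.)
    B, C, H, W = feat.shape
    flat = feat.flatten(2)                          # (B, C, H*W)
    query = flat.mean(dim=2, keepdim=True)          # (B, C, 1) global avg
    attn = F.softmax((flat * query).sum(1) / C ** 0.5, dim=1)   # (B, H*W)
    return (flat * attn.unsqueeze(1)).sum(dim=2)                # (B, C)

def csg_contrastive_loss(student_feat, teacher_feat, tau=0.07):
    # InfoNCE loss: the synthetically trained (student) embedding of each
    # image is the positive pair of the frozen ImageNet (teacher) embedding
    # of the SAME image; teacher embeddings of other images in the batch
    # serve as negatives. tau is a temperature hyperparameter (assumed).
    z_s = F.normalize(attentional_pool(student_feat), dim=1)    # (B, C)
    z_t = F.normalize(attentional_pool(teacher_feat), dim=1)    # (B, C)
    logits = z_s @ z_t.t() / tau                                # (B, B)
    labels = torch.arange(z_s.size(0), device=z_s.device)       # diagonal = positives
    return F.cross_entropy(logits, labels)
```

In training, `teacher_feat` would come from a frozen ImageNet-pretrained backbone and `student_feat` from the model being trained on synthetic data, so the loss acts as a regularizer that keeps the learned embeddings diverse and close to the transferable ImageNet feature space.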
| Item Type: | Report or Paper (Discussion Paper) |
|---|---|
| Record Number: | CaltechAUTHORS:20210510-133253602 |
| Persistent URL: | https://resolver.caltech.edu/CaltechAUTHORS:20210510-133253602 |
| Usage Policy: | No commercial reproduction, distribution, display or performance rights in this work are provided. |
| ID Code: | 109037 |
| Collection: | CaltechAUTHORS |
| Deposited By: | Tony Diaz |
| Deposited On: | 10 May 2021 20:54 |
| Last Modified: | 10 May 2021 20:54 |