CaltechAUTHORS
  A Caltech Library Service

When Does Contrastive Visual Representation Learning Work?

Cole, Elijah and Yang, Xuan and Wilber, Kimberly and Mac Aodha, Oisin and Belongie, Serge (2022) When Does Contrastive Visual Representation Learning Work? (Unpublished) https://resolver.caltech.edu/CaltechAUTHORS:20220406-160758984

This is the latest version of this item.

PDF - Accepted Version (8MB). See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20220406-160758984

Abstract

Recent self-supervised representation learning techniques have largely closed the gap between supervised and unsupervised learning on ImageNet classification. While the particulars of pretraining on ImageNet are now relatively well understood, the field still lacks widely accepted best practices for replicating this success on other datasets. As a first step in this direction, we study contrastive self-supervised learning on four diverse large-scale datasets. By looking through the lenses of data quantity, data domain, data quality, and task granularity, we provide new insights into the necessary conditions for successful self-supervised learning. Our key findings include observations such as: (i) the benefit of additional pretraining data beyond 500k images is modest, (ii) adding pretraining images from another domain does not lead to more general representations, (iii) corrupted pretraining images have a disparate impact on supervised and self-supervised pretraining, and (iv) contrastive learning lags far behind supervised learning on fine-grained visual classification tasks.
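
For readers unfamiliar with the pretraining objective studied here: contrastive methods such as SimCLR learn representations by pulling together the embeddings of two augmented views of the same image while pushing apart views of different images. The sketch below shows the standard NT-Xent (InfoNCE) loss at the core of such methods, assuming PyTorch; the function name and the encoder/augment calls in the usage comment are illustrative placeholders, not the authors' code.

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.5):
        # z1, z2: [N, D] embeddings of two augmented views of the same N images.
        n = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # [2N, D], unit-normalized
        sim = z @ z.t() / temperature                       # [2N, 2N] scaled cosine similarities
        sim.fill_diagonal_(float('-inf'))                   # a sample must not match itself
        # Row i's positive is the other augmented view of the same image.
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    # Hypothetical usage in a pretraining step:
    #   x1, x2 = augment(batch), augment(batch)   # two random views of the same images
    #   loss = nt_xent_loss(encoder(x1), encoder(x2))

The temperature divides the similarity logits; smaller values sharpen the softmax over negatives, which is one of the key hyperparameters in SimCLR-style training.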


Item Type: Report or Paper (Discussion Paper)
Related URLs:
  URL                              URL Type  Description
  http://arxiv.org/abs/2105.05837  arXiv     Discussion Paper
ORCID:
  Author            ORCID
  Cole, Elijah      0000-0001-6623-0966
  Wilber, Kimberly  0000-0001-7040-0251
  Mac Aodha, Oisin  0000-0002-5787-5073
  Belongie, Serge   0000-0002-0388-5217
Contact Email Address: ecole@caltech.edu
Additional Information: We thank Mason McGill for detailed feedback, and Grant Van Horn, Christine Kaeser-Chen, Yin Cui, Sergey Ioffe, Pietro Perona, and the rest of the Perona Lab for insightful discussions. This work was supported by the Caltech Resnick Sustainability Institute, an NSF Graduate Research Fellowship (grant number DGE-1745301), and the Pioneer Centre for AI (DNRF grant number P1).
Group: Resnick Sustainability Institute
Funders:
  Funding Agency                       Grant Number
  Resnick Sustainability Institute     UNSPECIFIED
  NSF Graduate Research Fellowship     DGE-1745301
  Danish National Research Foundation  DNRF-P1
DOI: 10.48550/arXiv.2105.05837
Record Number: CaltechAUTHORS:20220406-160758984
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20220406-160758984
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 114160
Collection: CaltechAUTHORS
Deposited By: Elijah Cole
Deposited On: 06 Apr 2022 17:28
Last Modified: 06 Apr 2022 17:28

Available Versions of this Item
