CaltechAUTHORS
  A Caltech Library Service

Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models

Shu, Manli and Nie, Weili and Huang, De-An and Yu, Zhiding and Goldstein, Tom and Anandkumar, Anima and Xiao, Chaowei (2022) Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models. (Unpublished) https://resolver.caltech.edu/CaltechAUTHORS:20221221-004651204

Full text is not posted in this repository. Consult Related URLs below.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20221221-004651204

Abstract

Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot generalization in many downstream tasks with properly designed text prompts. Instead of relying on hand-engineered prompts, recent works learn prompts using the training data from downstream tasks. While effective, training on domain-specific data reduces a model's generalization capability to unseen domains. In this work, we propose test-time prompt tuning (TPT), a method that can learn adaptive prompts on the fly with a single test sample. For image classification, TPT optimizes the prompt by minimizing the entropy with confidence selection so that the model has consistent predictions across different augmented views of each test sample. In evaluating generalization to natural distribution shifts, TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average, surpassing previous prompt tuning approaches that require additional task-specific training data. In evaluating cross-dataset generalization with unseen categories, TPT performs on par with state-of-the-art approaches that use additional training data. Project page: https://azshue.github.io/TPT
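The objective described in the abstract (entropy minimization with confidence selection across augmented views) can be sketched in a few lines of PyTorch. The snippet below is a minimal sketch under stated assumptions, not the authors' released implementation: the names tpt_loss, tpt_step, and top_frac, the model/prompt calling convention, and the 10% selection ratio are all illustrative assumptions.

    import math
    import torch

    def tpt_loss(logits, top_frac=0.1):
        """Entropy of the averaged prediction over the most confident views.

        logits: (n_views, n_classes) class logits for augmented views of ONE
        test image. top_frac: assumed confidence-selection ratio.
        """
        log_probs = logits.log_softmax(dim=-1)                      # (V, C)
        view_entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # (V,) per-view entropy
        k = max(1, int(top_frac * logits.size(0)))
        keep = view_entropy.topk(k, largest=False).indices          # keep lowest-entropy views
        # log of the mean probability distribution over the kept views
        avg_log_probs = log_probs[keep].logsumexp(dim=0) - math.log(k)
        return -(avg_log_probs.exp() * avg_log_probs).sum()

    def tpt_step(model, prompt, views, optimizer):
        """One test-time tuning step for a single test sample.
        model(views, prompt) is assumed to return (n_views, n_classes) logits,
        with prompt the only trainable tensor, e.g.
        optimizer = torch.optim.AdamW([prompt])."""
        optimizer.zero_grad()
        loss = tpt_loss(model(views, prompt))
        loss.backward()    # gradients flow only into the prompt embeddings
        optimizer.step()
        return loss.item()

Taking the entropy of the averaged distribution, rather than averaging the per-view entropies, rewards views that agree on the same class, which is one way to realize the abstract's goal of consistent predictions across augmented views of a test sample.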


Item Type: Report or Paper (Discussion Paper)
Related URLs (URL / URL Type / Description):
  http://arxiv.org/abs/2209.07511 / arXiv / Discussion Paper
ORCID (Author / ORCID):
  Huang, De-An / 0000-0002-6945-7768
  Anandkumar, Anima / 0000-0002-6974-6797
  Xiao, Chaowei / 0000-0002-7043-4926
Additional Information: This work was supported by Nvidia Research. Shu and Goldstein were supported by the ONR MURI program and DARPA GARD.
Funders (Funding Agency / Grant Number):
  NVIDIA Corporation / UNSPECIFIED
  Office of Naval Research (ONR) / UNSPECIFIED
  Defense Advanced Research Projects Agency (DARPA) / UNSPECIFIED
DOI: 10.48550/arXiv.2209.07511
Record Number: CaltechAUTHORS:20221221-004651204
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20221221-004651204
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 118544
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 22 Dec 2022 18:54
Last Modified: 02 Jun 2023 01:29
