
Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models

Shu, Manli and Nie, Weili and Huang, De-An and Yu, Zhiding and Goldstein, Tom and Anandkumar, Anima and Xiao, Chaowei (2022) Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models. . (Unpublished)

Full text is not posted in this repository. Consult Related URLs below.

Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot generalization on many downstream tasks with properly designed text prompts. Instead of relying on hand-engineered prompts, recent works learn prompts from the training data of downstream tasks. While effective, training on domain-specific data reduces a model's ability to generalize to unseen domains. In this work, we propose test-time prompt tuning (TPT), a method that learns adaptive prompts on the fly from a single test sample. For image classification, TPT optimizes the prompt by minimizing the entropy with confidence selection, so that the model makes consistent predictions across different augmented views of each test sample. In evaluating generalization to natural distribution shifts, TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average, surpassing previous prompt tuning approaches that require additional task-specific training data. In evaluating cross-dataset generalization with unseen categories, TPT performs on par with state-of-the-art approaches that use additional training data. Project page:
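The objective described in the abstract (entropy minimization with confidence selection across augmented views) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the names `entropy` and `tpt_loss` and the cutoff fraction `rho` are illustrative, and in the actual method the resulting loss would be backpropagated to the learnable prompt embeddings.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of probability distributions along `axis`."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def tpt_loss(view_probs, rho=0.1):
    """Marginal entropy over the most confident augmented views.

    view_probs: (n_views, n_classes) class probabilities, one row per
        augmented view of a single test image.
    rho: fraction of views to keep (lowest entropy = most confident);
        the value 0.1 here is an illustrative default.
    """
    n_views = view_probs.shape[0]
    ent = entropy(view_probs)            # per-view prediction entropy
    k = max(1, int(rho * n_views))       # number of confident views kept
    keep = np.argsort(ent)[:k]           # indices of lowest-entropy views
    avg = view_probs[keep].mean(axis=0)  # averaged prediction over kept views
    return entropy(avg)                  # loss to minimize w.r.t. the prompt
```

Discarding high-entropy (low-confidence) views before averaging filters out augmentations that destroy class-relevant content, so the prompt is tuned only on views the model finds informative.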

Item Type: Report or Paper (Discussion Paper)
Related URLs:
URL Type: Paper
ORCID:
Huang, De-An: 0000-0002-6945-7768
Anandkumar, Anima: 0000-0002-6974-6797
Xiao, Chaowei: 0000-0002-7043-4926
Additional Information: This work was supported by Nvidia Research. Shu and Goldstein were supported by the ONR MURI program and DARPA GARD.
Funding Agency (Grant Number):
Office of Naval Research (ONR): UNSPECIFIED
Defense Advanced Research Projects Agency (DARPA): UNSPECIFIED
Record Number: CaltechAUTHORS:20221221-004651204
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 118544
Deposited By: George Porter
Deposited On: 22 Dec 2022 18:54
Last Modified: 02 Jun 2023 01:29
