A Lazy Man’s Approach to Benchmarking: Semisupervised Classifier Evaluation and Recalibration

Welinder, Peter and Welling, Max and Perona, Pietro (2013) A Lazy Man’s Approach to Benchmarking: Semisupervised Classifier Evaluation and Recalibration. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Piscataway, NJ, pp. 3262-3269.

PDF - Accepted Version (see Usage Policy)
PDF - Submitted Version (see Usage Policy)


How many labeled examples are needed to estimate a classifier’s performance on a new dataset? We study the case where data is plentiful, but labels are expensive. We show that by making a few reasonable assumptions on the structure of the data, it is possible to estimate performance curves, with confidence bounds, using a small number of ground truth labels. Our approach, which we call Semisupervised Performance Evaluation (SPE), is based on a generative model for the classifier’s confidence scores. In addition to estimating the performance of classifiers on new datasets, SPE can be used to recalibrate a classifier by reestimating the class-conditional confidence distributions.
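The paper's full SPE model is a Bayesian generative model over confidence scores that also yields confidence bounds. As a rough, simplified illustration of the core idea (model the scores as a class-conditional mixture, clamp the few labeled examples, and let the unlabeled scores inform the fit), here is a semisupervised EM sketch in NumPy. The two-Gaussian assumption, the function names, and the toy data are all illustrative choices, not the paper's actual model.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    # Gaussian density, vectorized over x
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def semisupervised_mixture(scores, labeled_idx, labels, n_iter=100):
    """Fit a two-Gaussian mixture to classifier confidence scores.

    Labeled examples have their responsibilities clamped to the true
    label; unlabeled examples get EM soft assignments. Returns the
    positive-class prior, the class-conditional (mu, sigma) pairs,
    and the per-score posterior P(y=1 | score).
    """
    resp = np.full(len(scores), 0.5)       # P(y = 1 | score)
    resp[labeled_idx] = labels             # clamp the few labeled points
    unlabeled = np.ones(len(scores), dtype=bool)
    unlabeled[labeled_idx] = False
    for _ in range(n_iter):
        # M-step: responsibility-weighted parameter estimates
        w1, w0 = resp, 1.0 - resp
        pi = w1.mean()
        mu1 = (w1 @ scores) / w1.sum()
        mu0 = (w0 @ scores) / w0.sum()
        s1 = np.sqrt((w1 @ (scores - mu1) ** 2) / w1.sum()) + 1e-9
        s0 = np.sqrt((w0 @ (scores - mu0) ** 2) / w0.sum()) + 1e-9
        # E-step: update only the unlabeled responsibilities
        p1 = pi * normal_pdf(scores, mu1, s1)
        p0 = (1.0 - pi) * normal_pdf(scores, mu0, s0)
        resp[unlabeled] = (p1 / (p1 + p0 + 1e-12))[unlabeled]
    return pi, (mu0, s0), (mu1, s1), resp

def estimated_precision(scores, resp, threshold):
    # Expected precision at a threshold under the fitted model:
    # fraction of probability mass above the threshold that is positive.
    above = scores > threshold
    return resp[above].sum() / above.sum()

# Toy usage: plentiful scores from two overlapping Gaussians, 10 labels total.
rng = np.random.default_rng(0)
neg = rng.normal(-1.0, 1.0, 500)           # hypothetical negative-class scores
pos = rng.normal(1.5, 1.0, 500)            # hypothetical positive-class scores
scores = np.concatenate([neg, pos])
labeled_idx = np.array([0, 1, 2, 3, 4, 500, 501, 502, 503, 504])
labels = np.array([0.0] * 5 + [1.0] * 5)

pi, (mu0, s0), (mu1, s1), resp = semisupervised_mixture(scores, labeled_idx, labels)
prec = estimated_precision(scores, resp, threshold=0.5)
```

The same fitted class-conditional distributions also give the recalibration mentioned in the abstract: the posterior `resp` is a recalibrated confidence for every unlabeled score, obtained from only ten ground-truth labels.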

Item Type:Book Section
Related URLs:Paper
ORCID:Perona, Pietro: 0000-0002-7583-5809
Alternate Title:Semisupervised Classifier Evaluation and Recalibration
Additional Information:© 2013 IEEE. This work was supported by National Science Foundation grants 0914783 and 1216045, NASA Stennis grant NAS7.03001, ONR MURI grant N00014-10-1-0933, and gifts from Qualcomm and Google.
Funding Agency: Grant Number
National Science Foundation (NSF): 0914783
National Science Foundation (NSF): 1216045
NASA Stennis: NAS7.03001
Office of Naval Research (ONR): N00014-10-1-0933
Record Number:CaltechAUTHORS:20150903-112410599
Persistent URL:
Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:60046
Deposited By: Caroline Murphy
Deposited On:09 Sep 2015 20:14
Last Modified:03 Oct 2019 08:53
