A Caltech Library Service

Statistical Learning of Arbitrary Computable Classifiers

Soloveichik, David (2008) Statistical Learning of Arbitrary Computable Classifiers. California Institute of Technology, Pasadena, CA.



Statistical learning theory chiefly studies restricted hypothesis classes, particularly those with finite Vapnik-Chervonenkis (VC) dimension. The fundamental quantity of interest is the sample complexity: the number of samples required to learn to a specified level of accuracy. Here we consider learning over the set of all computable labeling functions. Since the VC dimension is infinite and a priori (uniform) bounds on the number of samples are impossible, we let the learning algorithm decide when it has seen sufficient samples to have learned. We first show that learning in this setting is indeed possible, and develop a learning algorithm. We then show, however, that bounding sample complexity independently of the distribution is impossible. Notably, this impossibility is entirely due to the requirement that the learning algorithm be computable, and not due to the statistical nature of the problem.
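The flavor of learner described in the abstract — one that draws samples until it decides for itself that it has seen enough — can be illustrated by enumeration learning over a countable hypothesis class. The sketch below is not the paper's algorithm; it is a minimal toy in which the hypothesis class, the `patience` stopping rule, and the modular-arithmetic classifiers are all illustrative assumptions. The learner tracks the least-index hypothesis consistent with all samples so far and halts once that hypothesis has survived a self-chosen streak of fresh samples, rather than relying on a uniform sample-complexity bound.

```python
import random

def hypothesis(k):
    """Toy countable hypothesis class: h_k labels x positive iff x % (k + 2) == 0."""
    return lambda x: x % (k + 2) == 0

def learn(sample_stream, patience=50, max_hyp=100):
    """Enumeration learner: keep the least-index hypothesis consistent with
    every sample seen so far, and stop once it has survived `patience`
    consecutive fresh samples. The stopping point is decided by the learner,
    not fixed in advance, mirroring the setting in the abstract."""
    alive = list(range(max_hyp))  # indices of not-yet-refuted hypotheses
    leader, streak = alive[0], 0
    for x, y in sample_stream:
        alive = [k for k in alive if hypothesis(k)(x) == y]
        if not alive:
            raise ValueError("target lies outside the enumerated class")
        if alive[0] == leader:
            streak += 1
        else:
            leader, streak = alive[0], 1
        if streak >= patience:
            return leader
    return leader

def samples(target_k, seed=0):
    """Infinite stream of examples labeled by the target hypothesis."""
    rng = random.Random(seed)
    h = hypothesis(target_k)
    while True:
        x = rng.randrange(1000)
        yield x, h(x)

learned = learn(samples(target_k=5))
```

Note that the number of samples consumed before stopping depends on how quickly the data refutes lower-index hypotheses, which in turn depends on the sampling distribution — a small-scale echo of the paper's point that no distribution-independent sample bound is possible.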

Item Type: Report or Paper (Discussion Paper)
Additional Information: I thank Erik Winfree and Matthew Cook for discussions and invaluable support.
Record Number: CaltechAUTHORS:20111021-140752530
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 27357
Deposited By: Tony Diaz
Deposited On: 09 Nov 2011 00:17
Last Modified: 03 Oct 2019 03:22
