
The evidence framework applied to classification networks

MacKay, David J. C. (1992) The evidence framework applied to classification networks. Neural Computation, 4 (5). pp. 720-736. ISSN 0899-7667. doi:10.1162/neco.1992.4.5.720.



Three Bayesian ideas are presented for supervised adaptive classifiers. First, it is argued that the output of a classifier should be obtained by marginalizing over the posterior distribution of the parameters; a simple approximation to this integral is proposed and demonstrated. This involves a "moderation" of the most probable classifier's outputs, and yields improved performance. Second, it is demonstrated that the Bayesian framework for model comparison described for regression models in MacKay (1992a,b) can also be applied to classification problems. This framework successfully chooses the magnitude of weight decay terms, and ranks solutions found using different numbers of hidden units. Third, an information-based data selection criterion is derived and demonstrated within this framework.
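The "moderation" described above replaces the most probable classifier's output sigmoid(a_MP) with an approximation to the marginalized output, damping the activation by a factor that grows with the posterior variance of the activation. A minimal sketch of that approximation (the function names and toy inputs are illustrative; `a_mp` is the most probable output activation and `s2` its Gaussian-approximated posterior variance):

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def moderated_output(a_mp, s2):
    """Approximate the marginalized probability P(t=1 | x, D).

    Under a Gaussian posterior on the activation with mean a_mp and
    variance s2, the integral of sigmoid(a) over that posterior is
    approximated by sigmoid(kappa * a_mp) with
    kappa = 1 / sqrt(1 + pi * s2 / 8).
    """
    kappa = 1.0 / math.sqrt(1.0 + math.pi * s2 / 8.0)
    return sigmoid(kappa * a_mp)

# With zero parameter uncertainty the moderated and most probable
# outputs coincide; with uncertainty, the output is pulled toward 0.5.
print(moderated_output(2.0, 0.0))  # equals sigmoid(2.0)
print(moderated_output(2.0, 4.0))  # closer to 0.5 than sigmoid(2.0)
```

Note the limiting behaviour: as `s2 -> 0`, `kappa -> 1` and no moderation occurs; as `s2` grows, the output tends to 0.5, reflecting maximal uncertainty.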

Item Type: Article
Additional Information: © 1992 Massachusetts Institute of Technology. Received 20 November 1991; accepted 18 February 1992. Posted online March 13, 2008. This work was supported by a Caltech Fellowship and a Studentship from SERC, UK.
Funding Agency (Grant Number):
Caltech Fellowship (UNSPECIFIED)
Studentship from SERC, UK (UNSPECIFIED)
Issue or Number: 5
Record Number: CaltechAUTHORS:MACnc92d
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 13796
Deposited By: Tony Diaz
Deposited On: 25 Jun 2009 16:34
Last Modified: 08 Nov 2021 22:40
