MacKay, David J. C. (1992) A practical Bayesian framework for backpropagation networks. Neural Computation, 4 (3). pp. 448-472. ISSN 0899-7667. http://resolver.caltech.edu/CaltechAUTHORS:MACnc92b
A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian "evidence" automatically embodies "Occam's razor," penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.
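To make the quantities in the abstract concrete, here is a minimal sketch (not code from the paper) of the Laplace-approximation evidence for a linear-in-parameters model with a Gaussian weight-decay prior of strength `alpha` and noise precision `beta`. It illustrates two of the objects listed above: the log evidence that embodies Occam's razor, and the effective number of well-determined parameters. The function name, variable names, and the linear-model setting are illustrative assumptions; the paper applies the same machinery to nonlinear backpropagation networks.

```python
import numpy as np

def log_evidence(Phi, y, alpha, beta):
    """Log evidence and effective parameter count for a linear model
    y = Phi @ w + noise, under a Gaussian prior with precision alpha
    and Gaussian noise with precision beta (illustrative sketch)."""
    N, k = Phi.shape
    A = alpha * np.eye(k) + beta * Phi.T @ Phi   # Hessian of the log posterior
    w_mp = beta * np.linalg.solve(A, Phi.T @ y)  # most probable weights
    E_D = 0.5 * np.sum((y - Phi @ w_mp) ** 2)    # data misfit
    E_W = 0.5 * w_mp @ w_mp                      # weight-decay penalty
    _, logdetA = np.linalg.slogdet(A)
    logZ = (-beta * E_D - alpha * E_W
            - 0.5 * logdetA
            + 0.5 * k * np.log(alpha)
            + 0.5 * N * np.log(beta)
            - 0.5 * N * np.log(2 * np.pi))
    # Effective number of well-determined parameters:
    # gamma = sum_i lam_i / (lam_i + alpha), with lam_i the
    # eigenvalues of the data term beta * Phi.T @ Phi.
    lam = np.linalg.eigvalsh(beta * Phi.T @ Phi)
    gamma = np.sum(lam / (lam + alpha))
    return logZ, gamma
```

Comparing `logZ` across alternative models (different basis sets, different `alpha`) is what the abstract means by objective model comparison: an overflexible model pays through the `logdetA` Occam factor even when its data misfit is small.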
Additional Information: © 1992 Massachusetts Institute of Technology. Received 21 May 1991; accepted 29 October 1991. Posted online March 13, 2008. I thank Mike Lewicki, Nick Weir, and Haim Sompolinsky for helpful conversations, and Andreas Herz for comments on the manuscript. This work was supported by a Caltech Fellowship and a Studentship from SERC, UK.
Usage Policy: No commercial reproduction, distribution, display, or performance rights in this work are provided.