MacKay, David J. C. (1992) Bayesian interpolation. Neural Computation, 4 (3). pp. 415-447. ISSN 0899-7667 http://resolver.caltech.edu/CaltechAUTHORS:MACnc92a
Although Bayesian analysis has been in use since Laplace, the Bayesian method of model comparison has only recently been developed in depth. In this paper, the Bayesian approach to regularization and model comparison is demonstrated by studying the inference problem of interpolating noisy data. The concepts and methods described are quite general and can be applied to many other data modeling problems. Regularizing constants are set by examining their posterior probability distribution. Alternative regularizers (priors) and alternative basis sets are objectively compared by evaluating the evidence for them. “Occam's razor” is automatically embodied by this process. The way in which Bayes infers the values of regularizing constants and noise levels has an elegant interpretation in terms of the effective number of parameters determined by the data set. This framework is due to Gull and Skilling.
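The re-estimation of regularizing constants and noise levels that the abstract describes can be sketched in code. The fixed-point updates below follow MacKay's evidence framework for a linear-in-parameters interpolation model: the effective number of parameters gamma drives the re-estimation of the regularizer alpha and the noise precision beta. The function name, basis choice, and iteration count are illustrative assumptions, not the paper's own code.

```python
import numpy as np

def evidence_updates(Phi, t, alpha=1.0, beta=1.0, n_iter=50):
    """Re-estimate the regularizer alpha and noise precision beta by
    maximizing the evidence (MacKay-style fixed-point updates).

    Phi: (N, M) basis/design matrix; t: (N,) noisy targets.
    Illustrative sketch, not the paper's implementation.
    """
    N, M = Phi.shape
    # Eigenvalues of Phi^T Phi; scaled by beta they enter the
    # effective-parameter count gamma = sum_i lam_i / (lam_i + alpha).
    eig0 = np.linalg.eigvalsh(Phi.T @ Phi)
    for _ in range(n_iter):
        # Posterior over weights is Gaussian with precision
        # A = alpha I + beta Phi^T Phi and mean m = beta A^{-1} Phi^T t.
        A = alpha * np.eye(M) + beta * (Phi.T @ Phi)
        m = beta * np.linalg.solve(A, Phi.T @ t)
        lam = beta * eig0
        gamma = np.sum(lam / (lam + alpha))   # effective number of parameters
        alpha = gamma / (m @ m)               # re-estimate regularizing constant
        resid = t - Phi @ m
        beta = (N - gamma) / (resid @ resid)  # re-estimate noise precision
    return alpha, beta, m, gamma
```

Gamma counts how many parameter directions the data actually determine: it interpolates between 0 (prior dominates) and M (data dominate), which is the "elegant interpretation" the abstract refers to.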
Additional Information: © 1992 Massachusetts Institute of Technology. Received 21 May 1991; accepted 29 October 1991. Posted Online March 13, 2008. I thank Mike Lewicki, Nick Weir and David R. T. Robinson for helpful conversations, and Andreas Herz for comments on the manuscript. I am grateful to Dr. R. Goodman and Dr. P. Smyth for funding my trip to Maxent 90. This work was supported by a Caltech Fellowship and a Studentship from SERC, UK.