Nicholson, Alexander (2000) A Generalization Model and Learning in Hardware. California Institute of Technology. (Unpublished) http://resolver.caltech.edu/CaltechCSTR:2000.007
We study two problems in the field of machine learning.

First, we propose a novel theoretical framework for understanding learning and generalization, which we call the bin model. Using the bin model, we derive a closed form for the generalization error that estimates out-of-sample performance in terms of in-sample performance. We address the problem of overfitting and show that it does not arise under a simple exhaustive learning algorithm. This result is independent of the target function, the input distribution, and the learning model, and it remains true even for noisy data sets. We apply our analysis to both classification and regression problems and give an example of how it may be used efficiently in practice.

Second, we investigate the use of learning and evolution in hardware for digital circuit design. Using the reactive tabu search for discrete optimization, we show that we can learn a multiplier circuit from a set of examples. The learned circuit makes less than 2% error and uses fewer chip resources than the standard digital design. We compare the reactive tabu search with a genetic algorithm for fitness optimization and show that, for a similar execution time, the reactive tabu search performs significantly better on a 2-bit adder design problem.
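To illustrate the kind of search loop the abstract refers to, the following is a minimal sketch of reactive tabu search applied to a toy version of the 2-bit adder problem. It is an assumption of this sketch that candidate solutions are flattened into a raw 48-bit truth table (16 input rows × 3 output bits) explored by single-bit flips; the report itself searches over hardware circuit configurations, and the function and variable names here are invented for illustration.

```python
import random

# Target: truth table of a 2-bit adder (a + b -> 3-bit sum),
# flattened to 4 * 4 * 3 = 48 output bits.
TARGET = [((a + b) >> k) & 1 for a in range(4) for b in range(4) for k in range(3)]

def fitness(bits):
    """Number of truth-table bits matched (48 = perfect)."""
    return sum(x == t for x, t in zip(bits, TARGET))

def reactive_tabu_search(seed=0, max_iters=2000):
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(len(TARGET))]
    best = list(bits)
    tenure = 3                       # tabu tenure, adapted reactively
    tabu_until = [0] * len(bits)     # iteration until which each flip is tabu
    seen = {}                        # visited configurations -> visit count
    for it in range(1, max_iters + 1):
        # Evaluate every single-bit flip; keep non-tabu moves, plus tabu
        # moves that beat the best solution so far (aspiration criterion).
        candidates = []
        for i in range(len(bits)):
            bits[i] ^= 1
            f = fitness(bits)
            bits[i] ^= 1
            if tabu_until[i] <= it or f > fitness(best):
                candidates.append((f, i))
        f, i = max(candidates)       # take the best admissible move
        bits[i] ^= 1
        tabu_until[i] = it + tenure
        if f > fitness(best):
            best = list(bits)
        # Reactive step: revisiting a configuration signals cycling,
        # so lengthen the tabu tenure.
        key = tuple(bits)
        seen[key] = seen.get(key, 0) + 1
        if seen[key] > 1:
            tenure += 1
        if fitness(best) == len(TARGET):
            break
    return best

solution = reactive_tabu_search()
```

The "reactive" element is the adaptive tenure: a plain tabu search uses a fixed tenure, whereas here the tenure grows whenever the walk revisits a configuration, discouraging cycles without hand-tuning. On this deliberately easy landscape the search reaches a perfect truth table quickly; the circuit-design problems in the report are far less smooth.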
|Item Type:||Report or Paper (Technical Report)|
|Group:||Computer Science Technical Reports|
|Usage Policy:||You are granted permission for individual, educational, research and non-commercial reproduction, distribution, display and performance of this work in any format.|
|Deposited By:||Imported from CaltechCSTR|
|Deposited On:||25 Apr 2001|
|Last Modified:||26 Dec 2012 14:06|