Fyfe, William John Andrew (1992) Invariance Hints and the VC Dimension. California Institute of Technology . (Unpublished) http://resolver.caltech.edu/CaltechCSTR:1992.cs-tr-92-20
We are interested in having a neural network learn an unknown function f. If the function satisfies an invariant of some sort, such as f being an odd function, then we want to take advantage of this information rather than have the network deduce the invariant from examples of f. The invariant might be defined in terms of an explicit transformation of the input space under which f is constant; in this case it is possible to build a network that necessarily satisfies the invariant. In general, we define the invariant in terms of a partition of the input space such that if x and x' lie in the same partition element, then f(x) = f(x'). An example of the invariant is then a pair (x, x') taken from a single partition element. We can combine examples of the invariant with examples of the function in the learning process. The goal is to substitute examples of the invariant for examples of the function; the extent to which this is possible depends on the appropriate VC dimensions. Simulations verify, at least in simple cases, that examples of the invariant do aid the learning process.
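The idea of mixing the two kinds of examples can be sketched in code. This is a minimal illustration, not the report's actual experimental setup: we assume a hypothetical even target function (so the pair (x, -x) lies in one partition element and f(x) = f(-x)), a tiny one-hidden-layer network, and a combined loss that adds a squared-error term over function examples to a penalty over invariance pairs. The unlabeled inputs Z play the role of "examples of the invariant": they require no values of f, only knowledge of the partition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical even target, so (x, -x) is an "example of the invariant".
f = lambda x: np.cos(np.pi * x)

# Function examples (x_i, f(x_i)) and invariance examples (z_j, -z_j).
X = rng.uniform(-1, 1, size=8)
Y = f(X)
Z = rng.uniform(-1, 1, size=16)   # inputs only; no labels needed

def net(p, x):
    # One hidden layer of 10 tanh units; p holds all 31 parameters.
    W1, b1, W2, b2 = p[:10], p[10:20], p[20:30], p[30]
    h = np.tanh(np.outer(x, W1) + b1)
    return h @ W2 + b2

def loss(p, lam=1.0):
    fit = np.mean((net(p, X) - Y) ** 2)           # examples of the function
    inv = np.mean((net(p, Z) - net(p, -Z)) ** 2)  # examples of the invariant
    return fit + lam * inv

# Plain gradient descent with central-difference gradients
# (the network is tiny, so numerical gradients are adequate here).
p = rng.normal(scale=0.5, size=31)
loss0 = loss(p)
eps, lr = 1e-5, 0.05
for _ in range(400):
    g = np.empty_like(p)
    for i in range(p.size):
        d = np.zeros_like(p)
        d[i] = eps
        g[i] = (loss(p + d) - loss(p - d)) / (2 * eps)
    p -= lr * g
```

The trade-off the abstract describes is visible in this framing: since Z carries no function values, enlarging it is cheap, and the question of how far invariance examples can substitute for function examples is what the VC-dimension analysis addresses.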
|Item Type:||Report or Paper (Technical Report)|
|Group:||Computer Science Technical Reports|
|Usage Policy:||You are granted permission for individual, educational, research and non-commercial reproduction, distribution, display and performance of this work in any format.|
|Deposited By:||Imported from CaltechCSTR|
|Deposited On:||25 Apr 2001|
|Last Modified:||26 Dec 2012 14:03|