Platt, John C. and Barr, Alan H. (1988) Constrained Differential Optimization for Neural Networks. California Institute of Technology. (Unpublished) http://resolver.caltech.edu/CaltechCSTR:1988.cs-tr-88-17
Many optimization models of neural networks need constraints to restrict the space of outputs to a subspace that satisfies external criteria. Optimizations using energy methods yield "forces" which act upon the state of the neural network. The penalty method, in which quadratic energy constraints are added to an existing optimization energy, has recently become popular, but it is not guaranteed to satisfy the constraint conditions when there are other forces on the neural model or when there are multiple constraints. In this paper, we present the basic differential multiplier method (BDMM), which satisfies constraints exactly; we create forces which gradually apply the constraints over time, using "neurons" that estimate Lagrange multipliers. The basic differential multiplier method is a system of differential equations first proposed by [ARROW] as an economic model. These differential equations converge locally to a constrained minimum. Applications of the differential method of multipliers include enforcing permutation codewords in the analog decoding problem and enforcing valid tours in the traveling salesman problem.
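The dynamics described in the abstract can be sketched on a toy problem. This is an illustrative example, not from the report itself: the objective, step size, and iteration count are all assumed choices. BDMM performs gradient descent on the state variables and gradient ascent on a multiplier "neuron", so for minimizing f subject to g = 0 the update directions are x' = -df/dx - lam * dg/dx and lam' = g(x).

```python
# Sketch of the basic differential multiplier method (BDMM) on a toy
# problem (illustrative, not from the paper):
#   minimize f(x, y) = x^2 + y^2  subject to  g(x, y) = x + y - 1 = 0.
# Exact solution: x = y = 0.5 with multiplier lam = -1.

def bdmm(dt=0.01, steps=20000):
    """Forward-Euler integration of the BDMM differential equations."""
    x = y = lam = 0.0
    for _ in range(steps):
        g = x + y - 1.0                 # constraint value g(x, y)
        dx = -(2.0 * x + lam)           # -df/dx - lam * dg/dx
        dy = -(2.0 * y + lam)           # -df/dy - lam * dg/dy
        dlam = g                        # multiplier ascends on g
        x += dt * dx
        y += dt * dy
        lam += dt * dlam
    return x, y, lam

x, y, lam = bdmm()
print(x, y, lam)   # approaches x = y = 0.5, lam = -1
```

Unlike the penalty method, no penalty weight has to be tuned: the multiplier neuron grows exactly as large as needed to hold the constraint, and the linearized system here is a damped oscillation that spirals into the constrained minimum.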
|Item Type:||Report or Paper (Technical Report)|
|Group:||Computer Science Technical Reports|
|Usage Policy:||You are granted permission for individual, educational, research and non-commercial reproduction, distribution, display and performance of this work in any format.|
|Deposited By:||Imported from CaltechCSTR|
|Deposited On:||24 Apr 2001|
|Last Modified:||26 Dec 2012 14:02|