A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization
Abstract
A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology. No explicit information about internal network structure is needed. The method is based on the model-free distributed learning mechanism of Dembo and Kailath. A modified parameter update rule is proposed by which each individual parameter vector perturbation contributes a decrease in error, allowing substantially faster learning. Furthermore, the modified algorithm supports learning time-varying features in dynamical networks. We analyze the convergence and scaling properties of the algorithm and present simulation results for dynamic trajectory learning in recurrent networks.
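The abstract describes the method only at a high level. As a rough illustration, the sketch below shows a generic random-perturbation error-descent update in the spirit of the Dembo-Kailath / weight-perturbation family; the `error_fn` interface, the ±sigma Bernoulli perturbation, and the `mu / sigma**2` scaling are assumptions made for the example, not the paper's exact update rule.

```python
import numpy as np

def stochastic_error_descent(error_fn, p0, mu=0.01, sigma=1e-3, steps=1000, seed=0):
    """Minimal sketch of perturbative (model-free) error descent.

    error_fn : callable mapping a parameter vector to the scalar network error E(p)
               (hypothetical interface; in hardware this would be a measured error).
    p0       : initial parameter vector (numpy array).
    mu       : learning-rate constant.
    sigma    : perturbation amplitude.
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(p0, dtype=float).copy()
    for _ in range(steps):
        # Parallel random perturbation: every component is +sigma or -sigma.
        pi = sigma * rng.choice([-1.0, 1.0], size=p.shape)
        # Differential error observed under the perturbation alone.
        delta_e = error_fn(p + pi) - error_fn(p)
        # Move all parameters along -delta_e * pi; in expectation this follows
        # the negative error gradient, so each perturbation drives the error down.
        p -= (mu / sigma**2) * delta_e * pi
    return p
```

For example, `stochastic_error_descent(lambda p: np.sum((p - 1.0)**2), np.zeros(4))` drives a toy quadratic error toward its minimum using only two error evaluations per step and no gradient information, which is the appeal of this class of algorithms for networks whose internal structure is inaccessible.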
Additional Information
© 1993 Morgan Kaufmann. We thank J. Alspector, P. Baldi, B. Flower, D. Kirk, M. van Putten, A. Yariv, and many other individuals for valuable suggestions and comments on the work presented here.
Attached Files
Published - 690-a-fast-stochastic-error-descent-algorithm-for-supervised-learning-and-optimization.pdf
Files
| Name | Size |
|---|---|
| 690-a-fast-stochastic-error-descent-algorithm-for-supervised-learning-and-optimization.pdf (md5:1dd5fa68c39afff515cab59201534bb7) | 2.1 MB |
Additional details
Identifiers
- Eprint ID: 64438
- Resolver ID: CaltechAUTHORS:20160211-161323747
Dates
- Created: 2016-02-19 (from EPrint's datestamp field)
- Updated: 2019-10-03 (from EPrint's last_modified field)
Caltech Custom Metadata
- Series Name: Advances in Neural Information Processing Systems
- Series Volume or Issue Number: 5