Published 1993 | Version: Published
Book Section - Chapter (Open Access)

A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization

Abstract

A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology. No explicit information about internal network structure is needed. The method is based on the model-free distributed learning mechanism of Dembo and Kailath. A modified parameter update rule is proposed by which each individual parameter vector perturbation contributes a decrease in error, allowing substantially faster learning. Furthermore, the modified algorithm supports learning time-varying features in dynamical networks. We analyze the convergence and scaling properties of the algorithm and present simulation results for dynamic trajectory learning in recurrent networks.
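For readers wanting a concrete picture of the class of method the abstract describes, the following is a minimal Python sketch of error descent by parallel random parameter perturbation: all parameters are perturbed at once, the resulting change in measured error is observed, and the parameters are moved along the perturbation direction scaled by that change, so no gradient or knowledge of internal network structure is required. The function name stochastic_error_descent, the error function, and the step-size and perturbation-amplitude values are illustrative assumptions, not the paper's exact update rule or notation.

    import numpy as np

    def stochastic_error_descent(error_fn, p, mu=0.05, sigma=0.01, steps=2000):
        """Sketch of error descent via parallel random parameter perturbation.

        error_fn : callable returning a scalar error for a parameter vector
        p        : initial parameter vector (np.ndarray)
        mu       : learning-rate constant (illustrative value)
        sigma    : perturbation amplitude (illustrative value)
        """
        p = p.copy()
        for _ in range(steps):
            # Perturb every parameter simultaneously with a random +/- sigma step.
            pi = sigma * np.random.choice([-1.0, 1.0], size=p.shape)
            # Observed change in error under the perturbation; only two
            # evaluations of the error are needed, no gradient information.
            delta_e = error_fn(p + pi) - error_fn(p)
            # Move along the perturbation, scaled by the observed error change,
            # so that on average the update follows the negative error gradient.
            p -= mu * delta_e * pi / sigma**2
        return p

    # Toy usage (hypothetical): minimize a simple quadratic error.
    target = np.array([1.0, -2.0, 0.5])
    err = lambda w: float(np.sum((w - target) ** 2))
    w_opt = stochastic_error_descent(err, np.zeros(3))
    print(w_opt)

In this sketch the expected update direction coincides with the negative gradient of the error for small perturbation amplitudes, which is the basic property that such perturbative, model-free schemes rely on; the paper's contribution concerns a modified update rule and its faster convergence, which this illustration does not reproduce.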

Additional Information

© 1993 Morgan Kaufmann. We thank J. Alspector, P. Baldi, B. Flower, D. Kirk, M. van Putten, A. Yariv, and many other individuals for valuable suggestions and comments on the work presented here.

Attached Files

Published - 690-a-fast-stochastic-error-descent-algorithm-for-supervised-learning-and-optimization.pdf

Additional details

Identifiers

Eprint ID
64438
Resolver ID
CaltechAUTHORS:20160211-161323747

Dates

Created
2016-02-19
Updated
2019-10-03

Caltech Custom Metadata

Series Name
Advances in Neural Information Processing Systems
Series Volume or Issue Number
5