
A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization

Cauwenberghs, Gert (1993) A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization. In: Advances in Neural Information Processing Systems 5 (NIPS 1992). Advances in Neural Information Processing Systems. No. 5. Morgan Kaufmann, San Mateo, CA, pp. 244-251. ISBN 1-55860-274-7.



A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology. No explicit information about internal network structure is needed. The method is based on the model-free distributed learning mechanism of Dembo and Kailath. A modified parameter update rule is proposed under which every individual parameter vector perturbation contributes a decrease in error, allowing substantially faster learning. Furthermore, the modified algorithm supports learning time-varying features in dynamical networks. We analyze the convergence and scaling properties of the algorithm, and present simulation results for dynamic trajectory learning in recurrent networks.
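The model-free update described in the abstract can be sketched in a few lines. The idea is to perturb all parameters in parallel by a random vector, measure only the resulting scalar change in error, and scale the perturbation by that change, so that on average the update follows the negative error gradient without any knowledge of network internals. The quadratic objective, the bipolar perturbation choice, and the step-size constants below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def error(theta):
    # Hypothetical black-box objective standing in for the network error E(theta);
    # the method only requires evaluations of E, never its gradient.
    return float(np.sum((theta - 1.0) ** 2))

def stochastic_error_descent(theta, n_steps=500, sigma=0.01, mu=0.05, seed=0):
    """Model-free error descent via parallel random perturbations (sketch).

    Each step perturbs all parameters at once by a random bipolar vector pi,
    measures delta_E = E(theta + pi) - E(theta), and applies
        theta <- theta - mu * delta_E * pi / sigma**2,
    whose expectation points along the negative gradient of E.
    """
    rng = np.random.default_rng(seed)
    e = error(theta)
    for _ in range(n_steps):
        pi = sigma * rng.choice([-1.0, 1.0], size=theta.shape)  # +/- sigma per parameter
        delta_e = error(theta + pi) - e        # one scalar measurement per step
        theta = theta - mu * delta_e * pi / sigma**2
        e = error(theta)
    return theta, e

theta_final, e_final = stochastic_error_descent(np.zeros(4))
```

Because the perturbation is applied to all parameters simultaneously and only a global scalar error is fed back, the scheme maps naturally onto parallel analog hardware, which is part of the motivation for this line of work.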

Item Type: Book Section
Additional Information: © 1993 Morgan Kaufmann. We thank J. Alspector, P. Baldi, B. Flower, D. Kirk, M. van Putten, A. Yariv, and many other individuals for valuable suggestions and comments on the work presented here.
Series Name: Advances in Neural Information Processing Systems
Issue or Number: 5
Record Number: CaltechAUTHORS:20160211-161323747
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 64438
Deposited By: Kristin Buxton
Deposited On: 19 Feb 2016 22:04
Last Modified: 03 Oct 2019 09:37
