Published July 1994
Book Section - Chapter

A learning algorithm for multi-layer perceptrons with hard-limiting threshold units


We propose a novel learning algorithm for training multilayer networks of linear-threshold (hard-limiting) units. The learning scheme is based on standard backpropagation, but uses "pseudo-gradient" descent: the gradient of a sigmoid function serves as a heuristic hint in place of the gradient of the hard-limiting function. We provide a justification that, for networks with one hidden layer, the pseudo-gradient always points in the correct downhill direction on the error surface. Such networks have several advantages: their internal representations in the hidden layers are clearly interpretable, so well-defined classification rules can be easily extracted; the calculations required for classification after training are very simple; and they are easily implementable in hardware. Comparative experimental results on several benchmark problems, using both conventional backpropagation networks and our learning scheme for multilayer perceptrons, are presented and analyzed.
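The core idea can be illustrated with a minimal sketch in NumPy: the forward pass uses hard-limiting units throughout, while the backward pass substitutes the sigmoid's derivative for the step function's derivative, which is zero almost everywhere. This is a simplified illustration of the pseudo-gradient heuristic, not the authors' exact algorithm; the network size, learning rate, and XOR task are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_limit(x):
    # Hard-limiting threshold unit: outputs 1 if the net input is
    # nonnegative, else 0. Its true derivative is zero almost everywhere,
    # so it gives no usable gradient signal.
    return (x >= 0).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny 2-2-1 network trained on XOR (illustrative task, not from the paper).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(2000):
    # Forward pass uses the hard-limiting units exclusively.
    z1 = X @ W1 + b1
    h = hard_limit(z1)
    z2 = h @ W2 + b2
    out = hard_limit(z2)

    # Backward pass: the "pseudo-gradient" replaces the step function's
    # derivative with sigmoid'(z) = s(1 - s) as a heuristic hint.
    err = out - y
    s2 = sigmoid(z2)
    delta2 = err * s2 * (1 - s2)
    s1 = sigmoid(z1)
    delta1 = (delta2 @ W2.T) * s1 * (1 - s1)

    W2 -= lr * h.T @ delta2; b2 -= lr * delta2.sum(axis=0)
    W1 -= lr * X.T @ delta1; b1 -= lr * delta1.sum(axis=0)
```

After training, classification requires only integer-like threshold comparisons, which reflects the hardware-friendliness and interpretability advantages mentioned in the abstract: each hidden unit defines a crisp half-space rule rather than a graded activation.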

Additional Information

© 1994 IEEE. The research described in this paper was supported by ARPA under grant numbers AFOSR-90-0199 and N00014-92-J-1860.

Attached Files

Published - 00374161.pdf (411.9 kB)

Additional details

August 20, 2023