A floating-gate MOS learning array with locally computed weight updates
Abstract
We have demonstrated on-chip learning in an array of floating-gate MOS synapse transistors. The array comprises one synapse transistor at each node, and normalization circuitry at the row boundaries. The array computes the inner product of a column input vector and a stored weight matrix. The weights are stored as floating-gate charge; they are nonvolatile, but can increase when we apply a row-learn signal. The input and learn signals are digital pulses; column input pulses that are coincident with row-learn pulses cause weight increases at selected synapses. The normalization circuitry forces row synapses to compete for floating-gate charge, bounding the weight values. The array simultaneously exhibits fast computation and slow adaptation: The inner product computes in 10 μs, whereas the weight normalization takes minutes to hours.
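As a rough illustration of the behavior summarized in the abstract, the sketch below models the array at the algorithmic level only: a fast inner product between a column input vector and a stored weight matrix, coincidence-gated weight increases, and a slow per-row normalization that bounds the weights. This is a minimal behavioral sketch under stated assumptions; the class name, parameter values, and the linear-decay normalization rule are illustrative choices, not the paper's circuit-level mechanisms (floating-gate tunneling and injection are not modeled).

```python
import numpy as np


class LearningArraySketch:
    """Behavioral sketch (not the paper's circuit): inner-product computation,
    coincidence-gated weight increases, and slow per-row weight normalization.
    All sizes, rates, and the normalization rule are illustrative assumptions."""

    def __init__(self, n_rows, n_cols, row_weight_budget=1.0,
                 learn_increment=0.01, normalization_rate=1e-3, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        # Weights stand in for stored floating-gate charge (nonvolatile, non-negative).
        self.W = rng.uniform(0.0, row_weight_budget / n_cols, size=(n_rows, n_cols))
        self.row_weight_budget = row_weight_budget    # bound enforced by normalization
        self.learn_increment = learn_increment        # weight gain per coincident pulse
        self.normalization_rate = normalization_rate  # small: adaptation is slow vs. computation

    def compute(self, x):
        """Fast path: inner product of the column input vector with the weight matrix."""
        return self.W @ x

    def learn(self, x, row_learn):
        """Column input pulses coincident with row-learn pulses increase the selected
        weights; each learn event is followed by a small normalization step."""
        coincidence = np.outer(row_learn.astype(float), x.astype(float))
        self.W += self.learn_increment * coincidence
        self._normalize_step()

    def _normalize_step(self):
        """Slow constraint: synapses in a row compete for a fixed weight budget,
        modeled here as a gradual renormalization of each row's weight sum."""
        row_sums = self.W.sum(axis=1, keepdims=True)
        excess = np.maximum(row_sums - self.row_weight_budget, 0.0)
        # Remove only a fraction of the excess per step (minutes-to-hours time scale).
        self.W -= self.normalization_rate * excess * (self.W / np.maximum(row_sums, 1e-12))


# Example: 4x8 array, one learn event, then a fast inner-product readout.
array = LearningArraySketch(n_rows=4, n_cols=8)
x = np.array([1, 0, 1, 0, 0, 1, 0, 0])   # column input pulses
row_learn = np.array([1, 0, 0, 1])       # row-learn pulses
array.learn(x, row_learn)                # coincident pulses raise the selected weights
y = array.compute(x)                     # inner-product output
```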
Additional Information
© 1997 IEEE. Manuscript received February 21, 1997; revised June 19, 1997. The review of this paper was arranged by Editor C.-Y. Lu. This work was supported by the Office of Naval Research, the Advanced Research Projects Agency, the Beckman Hearing Institute, the Center for Neuromorphic Systems Engineering as a part of the National Science Foundation Engineering Research Center Program, and the California Trade and Commerce Agency, Office of Strategic Technology.
Additional details
- Eprint ID
- 53662
- Resolver ID
- CaltechAUTHORS:20150113-162250820
- Funders
- Office of Naval Research (ONR)
- Advanced Research Projects Agency (ARPA)
- Beckman Hearing Institute
- Center for Neuromorphic Systems Engineering
- NSF
- California Trade and Commerce Agency, Office of Strategic Technology
- Created
- 2015-01-14
- Updated
- 2021-11-10