Published June 2001
Journal Article
Minimizing memory loss in learning a new environment
Abstract
Humans and other living species can learn new concepts without losing old ones. Artificial neural networks, by contrast, tend to "forget" old concepts. In this paper, we present three methods to minimize the loss of old information. These methods are analyzed and compared for the linear model. In particular, a method called network sampling is shown to be optimal under certain conditions on the sampled data distribution. We also show how to apply these methods to nonlinear models.
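The abstract's three methods are not detailed on this record page, so the following is only a generic illustration of the forgetting problem it describes: a linear least-squares model fit sequentially on a new environment loses the old mapping, while retraining on the new data plus a small sample of retained old data (a rehearsal-style fix, assumed here; not necessarily the paper's network-sampling method) preserves much more of it.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstsq_fit(X, y):
    """Ordinary least-squares weights for y ~ X @ w."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Hypothetical setup: two environments with conflicting linear targets.
w_old, w_new = np.array([2.0, -1.0]), np.array([0.5, 1.5])
X_old = rng.normal(size=(200, 2))
X_new = rng.normal(size=(200, 2))
y_old, y_new = X_old @ w_old, X_new @ w_new

# Naive sequential learning: refit on the new environment alone.
w_naive = lstsq_fit(X_new, y_new)

# Rehearsal-style mitigation: keep a small sample of old data, fit jointly.
idx = rng.choice(len(X_old), size=20, replace=False)
X_mix = np.vstack([X_new, X_old[idx]])
y_mix = np.concatenate([y_new, y_old[idx]])
w_mix = lstsq_fit(X_mix, y_mix)

print(mse(w_naive, X_old, y_old))  # large: the old mapping is forgotten
print(mse(w_mix, X_old, y_old))    # smaller: some old knowledge retained
```

Because the joint fit trades off both datasets, the mixed solution sits between the two mappings; how much old knowledge survives depends on how much old data is sampled, which is exactly the kind of distributional condition the abstract alludes to.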
Additional Information
© 2001 Elsevier Science B.V. Available online 31 May 2001.
Additional details
- Eprint ID
- 96897
- Resolver ID
- CaltechAUTHORS:20190702-153115049
- Created
- 2019-07-08 (from EPrint's datestamp field)
- Updated
- 2021-11-16 (from EPrint's last_modified field)