The p-norm generalization of the LMS algorithm for adaptive filtering
Abstract
Recently much work has been done analyzing online machine learning algorithms in a worst-case setting, where no probabilistic assumptions are made about the data. This is analogous to the H∞ setting used in adaptive linear filtering. Bregman divergences have become a standard tool for analyzing online machine learning algorithms. Using these divergences, we motivate a generalization of the least mean squares (LMS) algorithm. The loss bounds for these so-called p-norm algorithms involve norms other than the standard 2-norm. The bounds can be significantly better if a large proportion of the input variables are irrelevant, i.e., if the weight vector we are trying to learn is sparse. We also prove results for nonstationary targets. We only know how to apply kernel methods to the standard LMS algorithm (i.e., p = 2). However, even in the general p-norm case, we can handle generalized linear models, where the output of the system is a linear function combined with a nonlinear transfer function (e.g., the logistic sigmoid).
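To make the abstract's description concrete, here is a minimal Python sketch of a p-norm-style LMS update in the mirror-descent formulation common in the p-norm algorithm literature: the usual LMS correction is applied to a dual vector, and the weights are recovered through the link function ∇(1/2)‖θ‖_p². The function names, the exact p/q convention, the step size, and the toy experiment below are illustrative assumptions, not the paper's own notation or results; for p = 2 the link is the identity and the update reduces to ordinary LMS.

```python
import numpy as np

def link(theta, p):
    """Gradient of (1/2)*||theta||_p^2: maps the maintained dual vector theta
    to the weight vector w.  For p = 2 this is the identity map."""
    norm = np.linalg.norm(theta, ord=p)
    if norm == 0.0:
        return np.zeros_like(theta)
    return np.sign(theta) * np.abs(theta) ** (p - 1) / norm ** (p - 2)

def pnorm_lms(X, y, p=2.0, eta=0.005):
    """One pass of a p-norm-style LMS filter (illustrative sketch only).

    The correction step is the familiar LMS step eta * (y_t - y_hat) * x_t,
    but it is applied to the dual vector theta; predictions use
    w = link(theta, p).  With p = 2 this is exactly classical LMS; the
    p-norm literature suggests p on the order of 2*ln(dimension) when the
    target weight vector is sparse.
    """
    theta = np.zeros(X.shape[1])
    y_hat = np.empty(len(y))
    for t, (x_t, y_t) in enumerate(zip(X, y)):
        w_t = link(theta, p)                    # current weight vector
        y_hat[t] = w_t @ x_t                    # linear prediction
        theta += eta * (y_t - y_hat[t]) * x_t   # LMS-style correction in the dual
    return link(theta, p), y_hat

if __name__ == "__main__":
    # Toy comparison on a sparse target; the step size is not tuned.
    rng = np.random.default_rng(0)
    n, d = 2000, 50
    u = np.zeros(d)
    u[:3] = 1.0                                  # only 3 of 50 inputs are relevant
    X = rng.standard_normal((n, d))
    y = X @ u + 0.1 * rng.standard_normal(n)
    for p in (2.0, 2.0 * np.log(d)):
        w, preds = pnorm_lms(X, y, p=p)
        err = np.mean((y[-200:] - preds[-200:]) ** 2)
        print(f"p = {p:5.2f}   mean squared error on last 200 steps: {err:.4f}")
```

For the generalized linear model case mentioned in the abstract, the same sketch would pass w·x through a transfer function (e.g., the logistic sigmoid) and use the corresponding matching loss; that variant is omitted here.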
Additional Information
© Copyright 2006 IEEE. Reprinted with permission. Manuscript received December 1, 2004; revised June 26, 2005. [Posted online: 2006-04-18] This work was supported by the National Science Foundation under Grant CCR 9821087, the Australian Research Council, the Academy of Finland under Decision 210796, and the IST Programme of the European Community under PASCAL Network of Excellence IST-2002-506778. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Dominic K. C. Ho.
Files
Name | Size
---|---
md5:1ca673da82a1dd25c404fc9aa0ae659c | 486.4 kB
Additional details
- Eprint ID: 6281
- Resolver ID: CaltechAUTHORS:KIVieeetsp06
- Created: 2006-11-30 (from EPrint's datestamp field)
- Updated: 2021-11-08 (from EPrint's last_modified field)