Perceptron learning with random coordinate descent
- Creators
- Li, Ling
Abstract
A perceptron is a linear threshold classifier that separates examples with a hyperplane. It is perhaps the simplest learning model that is used standalone. In this paper, we propose a family of random coordinate descent algorithms for perceptron learning on binary classification problems. Unlike most perceptron learning algorithms, which require smooth cost functions, our algorithms directly minimize the training error, and usually achieve the lowest training error among the algorithms compared. The algorithms are also computationally efficient. These advantages make them favorable for both standalone use and ensemble learning on problems that are not linearly separable. Experiments show that our algorithms work very well with AdaBoost, achieving the lowest test errors on half of the datasets.
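The core step the abstract describes can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the report's implementation (the report's C++ code is linked below): it assumes each update picks one coordinate direction at random and then line-searches over the breakpoints where some example's classification flips, which is what allows the non-smooth 0/1 training error to be minimized exactly along the chosen direction. The names `Example`, `training_error`, and `rcd_epoch` are hypothetical.

```cpp
// Minimal sketch of random coordinate descent on the 0/1 training error.
// Assumption (not taken verbatim from the report): one coordinate of the
// weight vector is updated per step, with an exact line search over the
// breakpoints where an example's margin changes sign.
#include <cmath>
#include <random>
#include <vector>

struct Example {
    std::vector<double> x;  // input features (x[0] == 1 acts as the bias)
    int y;                  // label, +1 or -1
};

// Count training errors of the linear threshold classifier sign(w . x).
static int training_error(const std::vector<double>& w,
                          const std::vector<Example>& data) {
    int errors = 0;
    for (const auto& e : data) {
        double s = 0.0;
        for (size_t i = 0; i < w.size(); ++i) s += w[i] * e.x[i];
        if (s * e.y <= 0) ++errors;  // treat the boundary as an error
    }
    return errors;
}

// One step of random coordinate descent: pick a random coordinate j,
// then try the step sizes at which some example's classification flips
// and keep the one yielding the fewest training errors.
void rcd_epoch(std::vector<double>& w, const std::vector<Example>& data,
               std::mt19937& rng) {
    std::uniform_int_distribution<size_t> pick(0, w.size() - 1);
    const size_t j = pick(rng);

    // Along w + a * e_j, example i flips classification at
    // a = -(w . x_i) / x_i[j]; these are the only steps worth trying.
    std::vector<double> candidates{0.0};
    for (const auto& e : data) {
        if (std::fabs(e.x[j]) < 1e-12) continue;  // direction is blind to it
        double s = 0.0;
        for (size_t i = 0; i < w.size(); ++i) s += w[i] * e.x[i];
        const double a = -s / e.x[j];
        candidates.push_back(a + 1e-9);  // land just past the breakpoint
        candidates.push_back(a - 1e-9);  // and just before it
    }

    double best_a = 0.0;
    int best_err = training_error(w, data);  // a = 0 keeps the status quo
    for (double a : candidates) {
        w[j] += a;
        const int err = training_error(w, data);
        if (err < best_err) { best_err = err; best_a = a; }
        w[j] -= a;
    }
    w[j] += best_a;
}
```

Because the no-op step is always a candidate, each call leaves the training error no worse than before, so repeated calls drive it monotonically downward; this is what makes direct 0/1-error minimization tractable despite the lack of a gradient. A practical implementation would sort the breakpoints and sweep them once, updating the error count incrementally, rather than re-counting errors for every candidate as this sketch does.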
Additional Information
For C++ code of the perceptron learning algorithms used in this paper, please see http://www.work.caltech.edu/ling/lemga/ ; for the artificial datasets used in this paper, please see http://www.work.caltech.edu/ling/data/ .
Attached Files
- Submitted - tr05perceptron.pdf
Files
Name | Size | Checksum
---|---|---
tr05perceptron.pdf | 204.8 kB | md5:9eb430d9f4f24638cc0f178a0e200abc
Additional details
- Eprint ID
- 27078
- Resolver ID
- CaltechCSTR:2005.006
- Created
- 2005-08-31 (from EPrint's datestamp field)
- Updated
- 2019-10-03 (from EPrint's last_modified field)
- Caltech groups
- Computer Science Technical Reports
- Series Name
- Computer Science Technical Reports
- Series Volume or Issue Number
- 2005.005