Li, Ling (2005) Perceptron learning with random coordinate descent. California Institute of Technology , Pasadena, USA. (Unpublished) http://resolver.caltech.edu/CaltechCSTR:2005.006
See Usage Policy.
A perceptron is a linear threshold classifier that separates examples with a hyperplane. It is perhaps the simplest learning model that is used standalone. In this paper, we propose a family of random coordinate descent algorithms for perceptron learning on binary classification problems. Unlike most perceptron learning algorithms, which require smooth cost functions, our algorithms directly minimize the training error, and usually achieve the lowest training error among the algorithms compared. The algorithms are also computationally efficient. These advantages make them favorable both for standalone use and for ensemble learning on problems that are not linearly separable. Experiments show that our algorithms work very well with AdaBoost, achieving the lowest test errors on half of the datasets.
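The core idea in the abstract, minimizing the 0/1 training error directly by descending along randomly chosen coordinates, can be illustrated with a minimal sketch. This is not the paper's exact algorithm (see the linked C++ code for that); the candidate step sizes and the greedy accept-if-better rule here are illustrative assumptions.

```python
import numpy as np

def train_error(w, X, y):
    # 0/1 training error: fraction of examples on the wrong side
    return np.mean(np.sign(X @ w) != y)

def rcd_perceptron(X, y, iters=500, seed=0):
    # Sketch of random coordinate descent for a perceptron:
    # pick one random coordinate, try a few candidate updates along
    # it, and keep a candidate only if the 0/1 error strictly drops.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)                      # sign(0) != +/-1, so initial error is 1.0
    err = train_error(w, X, y)
    for _ in range(iters):
        j = rng.integers(d)              # random coordinate
        for step in (-1.0, -0.1, 0.1, 1.0):   # illustrative step sizes
            cand = w.copy()
            cand[j] += step
            cand_err = train_error(cand, X, y)
            if cand_err < err:           # greedy: accept strict improvements
                w, err = cand, cand_err
    return w, err
```

Because only strict improvements are accepted, the training error is monotonically non-increasing, and no differentiable surrogate loss is ever needed; this is what distinguishes the approach from gradient-based perceptron training.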
|Item Type:||Report or Paper (Technical Report)|
|Additional Information:||For C++ code of the perceptron learning algorithms used in this paper, please see http://www.work.caltech.edu/ling/lemga/ ; For the artificial datasets used in this paper, please see http://www.work.caltech.edu/ling/data/ .|
|Group:||Computer Science Technical Reports|
|Official Citation:||L. Li. Perceptron learning with random coordinate descent. Computer Science Technical Report CaltechCSTR:2005.006, California Institute of Technology, Aug. 2005.|
|Usage Policy:||You are granted permission for individual, educational, research and non-commercial reproduction, distribution, display and performance of this work in any format.|
|Deposited By:||Imported from CaltechCSTR|
|Deposited On:||31 Aug 2005|
|Last Modified:||26 Dec 2012 14:14|