Perceptron learning with random coordinate descent

Li, Ling (2005) Perceptron learning with random coordinate descent. Computer Science Technical Reports, 2005.005. California Institute of Technology , Pasadena, USA. (Unpublished)



A perceptron is a linear threshold classifier that separates examples with a hyperplane. It is perhaps the simplest learning model used standalone. In this paper, we propose a family of random coordinate descent algorithms for perceptron learning on binary classification problems. Unlike most perceptron learning algorithms, which require smooth cost functions, our algorithms directly minimize the training error, and usually achieve the lowest training error among the algorithms compared. The algorithms are also computationally efficient. These advantages make them favorable both for standalone use and for ensemble learning on problems that are not linearly separable. Experiments show that our algorithms work very well with AdaBoost, achieving the lowest test errors on half of the datasets.
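The core idea in the abstract can be illustrated in a few lines: pick a random coordinate of the weight vector, try a few candidate step sizes along it, and keep only moves that lower the 0/1 training error. The sketch below is a minimal, hedged illustration of that idea, not the paper's exact algorithm (the function name `rcd_perceptron`, the fixed candidate step sizes, and the stopping rule are all assumptions for demonstration; the report's algorithms use a more principled line search along each coordinate).

```python
import numpy as np

def rcd_perceptron(X, y, epochs=50, rng=None):
    """Illustrative random coordinate descent for a perceptron.

    Directly minimizes the 0/1 training error: at each iteration a
    random coordinate (including the bias) is perturbed by a few
    candidate step sizes, and the move is kept only if it strictly
    reduces the number of misclassified examples. Labels y are +/-1.
    This is a sketch of the idea, not the paper's exact procedure.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])  # augment inputs with a bias coordinate
    w = np.zeros(d + 1)

    def train_errors(w):
        # 0/1 training error: count examples whose predicted sign disagrees with y
        return np.count_nonzero(np.sign(Xb @ w) != y)

    for _ in range(epochs * (d + 1)):
        j = rng.integers(d + 1)  # choose a random coordinate to descend along
        best_w, best_e = w, train_errors(w)
        # Crude stand-in for a line search along coordinate j: an exact
        # search on the 0/1 error would enumerate its breakpoints instead.
        for step in (-1.0, -0.1, 0.1, 1.0):
            cand = w.copy()
            cand[j] += step
            e = train_errors(cand)
            if e < best_e:
                best_w, best_e = cand, e
        w = best_w
    return w
```

Because the acceptance test compares raw misclassification counts, no smooth surrogate cost is involved; the training error is non-increasing over iterations by construction, which mirrors the abstract's claim of minimizing the training error directly.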

Item Type:Report or Paper (Technical Report)
Additional Information:For C++ code of the perceptron learning algorithms used in this paper, please see ; For the artificial datasets used in this paper, please see .
Group:Computer Science Technical Reports
Series Name:Computer Science Technical Reports
Issue or Number:2005.005
Record Number:CaltechCSTR:2005.006
Persistent URL:
Official Citation:L. Li. Perceptron learning with random coordinate descent. Computer Science Technical Report CaltechCSTR:2005.006, California Institute of Technology, Aug. 2005.
Usage Policy:You are granted permission for individual, educational, research and non-commercial reproduction, distribution, display and performance of this work in any format.
ID Code:27078
Deposited By: Imported from CaltechCSTR
Deposited On:31 Aug 2005
Last Modified:03 Oct 2019 03:20
