Published 2016 | Version: Submitted | Discussion Paper | Open Access

Improved Multi-Class Cost-Sensitive Boosting via Estimation of the Minimum-Risk Class

Abstract

We present a simple unified framework for multi-class cost-sensitive boosting. The minimum-risk class is estimated directly, rather than via an approximation of the posterior distribution. Our method jointly optimizes binary weak learners and their corresponding output vectors, requiring classes to share features at each iteration. By training in a cost-sensitive manner, weak learners are invested in separating classes whose discrimination is important, at the expense of less relevant classification boundaries. Additional contributions include a family of loss functions, a proof that our algorithm is boostable in the theoretical sense, and an efficient procedure for growing decision trees for use as weak learners. We evaluate our method on a variety of datasets: a collection of synthetic planar data, common UCI datasets, MNIST digits, SUN scenes, and CUB-200 birds. Results show state-of-the-art performance across all datasets against several strong baselines, including non-boosting multi-class approaches.
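The minimum-risk decision rule the abstract refers to can be sketched as follows. This is a minimal illustration only, assuming a known cost matrix and externally supplied class-probability estimates (the paper's point is that it estimates the minimum-risk class directly, without going through posteriors); the names `min_risk_class`, `posteriors`, and `cost` are hypothetical, not from the paper.

```python
import numpy as np

def min_risk_class(posteriors, cost):
    """Pick, per sample, the class with the lowest expected cost.

    posteriors: (n_samples, K) estimated class probabilities.
    cost: (K, K) matrix where cost[y, k] is the cost incurred by
          predicting class k when the true class is y.
    """
    # risk[i, k] = sum_y p(y | x_i) * cost[y, k]
    risk = posteriors @ cost
    return np.argmin(risk, axis=1)

# With asymmetric costs, the minimum-risk class can differ from the
# most probable one: class 0 is more likely here, but confusing a
# true class 1 for class 0 is ten times costlier, so class 1 wins.
p = np.array([[0.6, 0.4]])
C = np.array([[0.0, 1.0],
              [10.0, 0.0]])
print(min_risk_class(p, C))  # → [1]
```

This is exactly the situation the abstract targets: a cost-sensitive learner should spend its capacity on the class boundaries whose confusions are expensive, even when that disagrees with the maximum-posterior prediction.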

Additional Information

Includes Supplementary Material.

Attached Files

Submitted - Appel_ArXiv.pdf

Files

Appel_ArXiv.pdf (2.3 MB, md5:21bbe54352a389684942c87339f0346d)

Additional details

Identifiers

Eprint ID
70538
Resolver ID
CaltechAUTHORS:20160922-125241569

Dates

Created
2016-09-22 (from the EPrints datestamp field)
Updated
2019-10-03 (from the EPrints last_modified field)