Published August 2017 | Published + Supplemental Material
Journal Article
Open
A simple multi-class boosting framework with theoretical guarantees and empirical proficiency
- Creators
- Appel, Ron
- Perona, Pietro
Abstract
There is a need for simple yet accurate white-box learning systems that train quickly and with little data. To this end, we showcase REBEL, a multi-class boosting method, and present a novel family of weak learners called localized similarities. Our framework provably minimizes the training error of any dataset at an exponential rate. We carry out experiments on a variety of synthetic and real datasets, demonstrating a consistent tendency to avoid overfitting. We evaluate our method on MNIST and standard UCI datasets against other state-of-the-art methods, showing the empirical proficiency of our method.
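The abstract only sketches the training loop at a high level. As a rough point of reference, the snippet below is a minimal SAMME-style multi-class boosting loop built on decision-stump weak learners (scikit-learn's DecisionTreeClassifier). It is an illustrative sketch of generic multi-class boosting, not the paper's REBEL algorithm and not its localized-similarity weak learners; the function names, the SAMME weighting, and the choice of stumps are assumptions made here for the example.

```python
# Illustrative SAMME-style multi-class boosting (NOT the paper's REBEL method
# or its localized-similarity weak learners; a generic sketch for orientation).
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def fit_multiclass_boosting(X, y, n_rounds=50):
    """Boost decision stumps on (X, y) with integer labels in {0, ..., K-1}."""
    n, K = len(y), len(np.unique(y))
    w = np.full(n, 1.0 / n)                      # start with uniform sample weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)         # weak learner on reweighted data
        miss = (stump.predict(X) != y).astype(float)
        err = np.clip(np.dot(w, miss), 1e-10, 1 - 1e-10)
        alpha = np.log((1 - err) / err) + np.log(K - 1)   # SAMME learner weight
        w *= np.exp(alpha * miss)                # up-weight misclassified examples
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas


def predict_multiclass_boosting(learners, alphas, X, K):
    """Predict by a weighted vote over the boosted weak learners."""
    votes = np.zeros((len(X), K))
    for stump, alpha in zip(learners, alphas):
        votes[np.arange(len(X)), stump.predict(X)] += alpha
    return votes.argmax(axis=1)
```

Under the standard boosting analysis, the training error of such a loop decreases exponentially in the number of rounds as long as each weak learner beats random guessing (weighted error below 1 - 1/K in the SAMME setting). The paper's stronger claim, that training error is driven down at an exponential rate on any dataset, rests on REBEL together with the localized-similarity weak learners and is proved there, not by this generic sketch.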
Additional Information
© 2017 The author(s). The authors would like to thank anonymous reviewers for their feedback and Google Inc. and the Office of Naval Research MURI N00014-10-1-0933 for funding this work.
Attached Files
Published - appel17a.pdf
Supplemental Material - appel17a-supp.pdf
Files
| Name | Size | MD5 |
|---|---|---|
| appel17a.pdf | 36.6 kB | e2003d3945c64bd4de4d0d08507e8750 |
| appel17a-supp.pdf | 2.4 MB | 109ef56520a5cac2a3f4bd4d27372754 |
Additional details
- Eprint ID
- 87405
- Resolver ID
- CaltechAUTHORS:20180627-133511664
- Office of Naval Research (ONR)
- N00014-10-1-0933
- Created
- 2018-06-27 (from EPrint's datestamp field)
- Updated
- 2019-10-03 (from EPrint's last_modified field)