The Performance Analysis of Generalized Margin Maximizer (GMM) on Separable Data
- Creators
- Salehi, Fariborz
- Abbasi, Ehsan
- Hassibi, Babak
Abstract
Logistic models are commonly used for binary classification tasks. The success of such models has often been attributed to their connection to maximum-likelihood estimators. It has been shown that the gradient descent algorithm, when applied to the logistic loss, converges to the max-margin classifier (a.k.a. hard-margin SVM). The performance of the max-margin classifier has recently been analyzed. Inspired by these results, in this paper we present and study a more general setting, where the underlying parameters of the logistic model possess certain structures (sparse, block-sparse, low-rank, etc.), and introduce a more general framework (referred to as the "Generalized Margin Maximizer", GMM). While classical max-margin classifiers minimize the ℓ₂-norm of the parameter vector subject to linearly separating the data, GMM minimizes any arbitrary convex function of the parameter vector. We provide a precise analysis of the performance of GMM via the solution of a system of nonlinear equations. We also provide a detailed study of three special cases: (1) ℓ₂-GMM, which is the max-margin classifier, (2) ℓ₁-GMM, which encourages sparsity, and (3) ℓ_∞-GMM, which is often used when the parameter vector has binary entries. Our theoretical results are validated by extensive simulation results across a range of parameter values, problem instances, and model structures.
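To make the GMM optimization concrete: for a convex function f, the classifier solves min f(w) subject to yᵢ⟨xᵢ, w⟩ ≥ 1 for all training pairs (xᵢ, yᵢ). The sketch below illustrates the ℓ₁-GMM special case on synthetic separable data by recasting it as a linear program (splitting w = u − v with u, v ≥ 0). This is an illustrative reconstruction, not the authors' code; the data dimensions and ground-truth vector `w_star` are hypothetical choices for the demo.

```python
# Illustrative sketch of ell_1-GMM:
#   minimize ||w||_1  subject to  y_i * <x_i, w> >= 1  for all i.
# With w = u - v, u >= 0, v >= 0, this is a linear program in z = [u; v].
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p, k = 40, 20, 3                       # samples, dimension, sparsity (demo values)
w_star = np.zeros(p)
w_star[:k] = 1.0                          # sparse ground-truth direction (assumed)
X = rng.standard_normal((n, p))
y = np.sign(X @ w_star)                   # linearly separable labels

# Margin constraints: y_i <x_i, u - v> >= 1  <=>  -(y .* X)(u - v) <= -1
A = -y[:, None] * X
A_ub = np.hstack([A, -A])                 # columns for u, then for v
res = linprog(c=np.ones(2 * p),           # minimize 1^T u + 1^T v = ||w||_1
              A_ub=A_ub, b_ub=-np.ones(n),
              bounds=[(0, None)] * (2 * p))
w_hat = res.x[:p] - res.x[p:]

margins = y * (X @ w_hat)
print(res.success, margins.min() >= 1 - 1e-6)
```

Replacing the ℓ₁ objective with ‖w‖₂ recovers the classical hard-margin SVM (ℓ₂-GMM), while ‖w‖_∞ gives the third special case studied in the paper.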
Additional Information
Copyright 2020 by the author(s).
Attached Files
Submitted - 2010.15379.pdf
Files
Name | Size
---|---
md5:7cfa6ff7856ec71c9f822ad51a41db3a | 540.6 kB
Additional details
- Eprint ID
- 106573
- Resolver ID
- CaltechAUTHORS:20201109-155538204
- Created
- 2020-11-10 (created from EPrint's datestamp field)
- Updated
- 2023-06-02 (created from EPrint's last_modified field)