Published June 27, 2019 | Submitted | Report
Penalizing Unfairness in Binary Classification
- Creators
- Bechavod, Yahav
- Ligett, Katrina
Abstract
We present a new approach for mitigating unfairness in learned classifiers. In particular, we focus on binary classification tasks over individuals from two populations, where, as our criterion for fairness, we wish to achieve similar false positive rates in both populations, and similar false negative rates in both populations. As a proof of concept, we implement our approach and empirically evaluate its ability to achieve both fairness and accuracy, using datasets from the fields of criminal risk assessment, credit, lending, and college admissions.
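The fairness criterion in the abstract — similar false positive rates and similar false negative rates across the two populations — can be made concrete as a penalty added to the classification error. The sketch below is illustrative only and is not the paper's exact formulation; the function names, the absolute-gap penalty, and the trade-off weight `lam` are assumptions.

```python
import numpy as np

def group_rate_gaps(y_true, y_pred, group):
    """Absolute FPR and FNR gaps between two populations (group 0 and 1).

    Illustrative sketch; not the paper's exact penalty.
    """
    rates = {}
    for g in (0, 1):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        neg, pos = yt == 0, yt == 1
        # False positive rate: fraction of true negatives predicted positive
        fpr = (yp[neg] == 1).mean() if neg.any() else 0.0
        # False negative rate: fraction of true positives predicted negative
        fnr = (yp[pos] == 0).mean() if pos.any() else 0.0
        rates[g] = (fpr, fnr)
    return {
        "fpr_gap": abs(rates[0][0] - rates[1][0]),
        "fnr_gap": abs(rates[0][1] - rates[1][1]),
    }

def penalized_error(y_true, y_pred, group, lam=1.0):
    """0-1 classification error plus lam times the sum of the rate gaps."""
    gaps = group_rate_gaps(y_true, y_pred, group)
    return (y_pred != y_true).mean() + lam * (gaps["fpr_gap"] + gaps["fnr_gap"])
```

A classifier that drives both gaps to zero satisfies the criterion exactly; larger `lam` trades accuracy for fairness.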
Additional Information
This work was supported in part by NSF grants CNS-1254169 and CNS-1518941, US-Israel Binational Science Foundation grant 2012348, Israeli Science Foundation (ISF) grant #1044/16, a subcontract on the DARPA Brandeis Project, and the HUJI Cyber Security Research Center in conjunction with the Israel National Cyber Bureau in the Prime Minister's Office.
Attached Files
- Submitted - 1707.00044.pdf (512.4 kB, md5:fde30cb4d4c74d94994bccc0a67ec47e)
Additional details
- Eprint ID
- 96801
- Resolver ID
- CaltechAUTHORS:20190627-153828844
- NSF
- CNS-1254169
- NSF
- CNS-1518941
- Binational Science Foundation (USA-Israel)
- 2012348
- Israel Science Foundation
- 1044/16
- Defense Advanced Research Projects Agency (DARPA)
- Hebrew University of Jerusalem
- Israel National Cyber Bureau
- Created
- 2019-06-27 (from EPrint's datestamp field)
- Updated
- 2023-06-02 (from EPrint's last_modified field)