
Robust Fairness Under Covariate Shift

Rezaei, Ashkan and Liu, Anqi and Memarrast, Omid and Ziebart, Brian D. (2021) Robust Fairness Under Covariate Shift. In: Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence. AAAI Press, Palo Alto, CA, pp. 9419-9427. https://resolver.caltech.edu/CaltechAUTHORS:20211014-173153985

PDF (Published Version), 2MB. See Usage Policy.
PDF (Submitted Version), 834kB. See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20211014-173153985

Abstract

Making predictions that are fair with respect to protected attributes (race, gender, age, etc.) has become an important requirement for classification algorithms. Existing techniques derive a fair model from sampled labeled data under the assumption that training and testing data are independently and identically distributed (i.i.d.) from the same distribution. In practice, distribution shift can and does occur between training and testing datasets as the characteristics of individuals interacting with the machine learning system change. We investigate fairness under covariate shift, a relaxation of the i.i.d. assumption in which the inputs (covariates) change while the conditional label distribution remains the same. We seek fair decisions under these assumptions on target data with unknown labels. We propose an approach that learns a predictor that is robust to worst-case testing performance while satisfying target fairness requirements and matching statistical properties of the source data. We demonstrate the benefits of our approach on benchmark prediction tasks.
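The abstract's covariate-shift setting can be illustrated with a toy sketch. This is not the paper's robust method; it is the standard importance-weighting baseline that the robust approach improves upon, shown here only to make the setting concrete. All distributions, parameter values, and variable names below are invented for the example: the conditional label distribution p(y|x) is shared, only the input distribution p(x) differs between source and target, and source samples are reweighted by the density ratio to answer target-distribution questions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy covariate shift: the conditional p(y=1|x) = sigmoid(2x) is shared,
# but source inputs come from N(-1, 1) and target inputs from N(+1, 1).
def sample(mean, n):
    x = rng.normal(mean, 1.0, size=n)
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * x))).astype(float)
    return x, y

xs, ys = sample(-1.0, 5000)   # labeled source data
xt, _ = sample(+1.0, 5000)    # target inputs (labels unseen in training)

# Importance weights w(x) = p_target(x) / p_source(x); for these two
# unit-variance Gaussians the ratio simplifies to exp(2x).
w = np.exp(2.0 * xs)

# Self-normalized weighting lets source samples estimate target-distribution
# quantities, e.g. the target input mean (close to +1 in this setup).
target_mean_est = np.sum(w * xs) / np.sum(w)

# Weighted 1-D logistic regression by gradient descent: minimizing the
# importance-weighted source log-loss approximates the target log-loss.
a, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(a * xs + b)))
    g = w * (p - ys)            # per-sample weighted gradient of log-loss
    a -= 0.1 * np.mean(g * xs)
    b -= 0.1 * np.mean(g)
```

Plain importance weighting like this has high variance when source and target overlap poorly, which is one motivation for the worst-case (adversarial) formulation the paper proposes instead.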


Item Type: Book Section
Related URLs:
- https://ojs.aaai.org/index.php/AAAI/article/view/17135 (Publisher: Article)
- https://arxiv.org/abs/2010.05166 (arXiv: Discussion Paper)
Additional Information: © 2021 Association for the Advancement of Artificial Intelligence. Published 2021-05-18. This work was supported by the National Science Foundation Program on Fairness in AI in collaboration with Amazon under award No. 1939743.
Funders:
- NSF (IIS-1939743)
Subject Keywords: Ethics -- Bias, Fairness, Transparency & Privacy; Adversarial Learning & Robustness; Classification and Regression
Record Number: CaltechAUTHORS:20211014-173153985
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20211014-173153985
Official Citation: Rezaei, A., Liu, A., Memarrast, O., & Ziebart, B. D. (2021). Robust Fairness Under Covariate Shift. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9419-9427. https://ojs.aaai.org/index.php/AAAI/article/view/17135.
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 111442
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 14 Oct 2021 19:13
Last Modified: 14 Oct 2021 19:13
