Individual Fairness Under Uncertainty (Book Chapter)

Zhang, W.; Wang, Z.; Kim, J.; et al. (2023). Individual Fairness Under Uncertainty. Frontiers in Artificial Intelligence and Applications, 372, 3042–3049. DOI: 10.3233/FAIA230621

cited authors

  • Zhang, W.; Wang, Z.; Kim, J.; Cheng, C.; Oommen, T.; Ravikumar, P.; Weiss, J.

abstract

  • Algorithmic fairness, the study of making machine learning (ML) algorithms fair, is an established research area in ML. As ML technologies expand into new application domains, including ones with high societal impact, it becomes essential to take fairness into account when building ML systems. Yet, despite the wide range of socially sensitive applications, most work treats algorithmic bias as an intrinsic property of supervised learning, i.e., it assumes the class label is given as a precondition. Unlike prior studies in fairness, we propose an individual fairness measure and a corresponding algorithm that handle the uncertainty arising from censorship in class labels, while enforcing, from a ranking perspective, that similar individuals be treated similarly, without the Lipschitz condition of the conventional individual fairness definition. We argue that this perspective is a more realistic model for deploying fairness research in real-world applications and show how learning under this relaxed precondition yields new insights that better explain algorithmic fairness. Experiments on four real-world datasets evaluate the proposed method against other fairness models and demonstrate its superiority in minimizing discrimination while maintaining predictive performance in the presence of uncertainty.
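For context on what the abstract relaxes: the "Lipschitz condition in the conventional individual fairness definition" refers to the constraint of Dwork et al. (2012), which requires that individuals close under a task-specific metric d receive outputs close under a distance D on output distributions:

```latex
% Conventional (Lipschitz) individual fairness, Dwork et al. (2012):
% similar individuals (small d) must receive similar outputs (small D).
D\big(M(x), M(x')\big) \;\le\; L \cdot d(x, x') \qquad \text{for all individuals } x, x'
```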
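The abstract's key move is to work "from a ranking perspective" so that censored class labels do not break the criterion. As a loose illustration only, not the paper's measure, the sketch below uses the survival-analysis concordance index to show why ranking-based evaluation stays well-defined under censorship: censored individuals simply contribute fewer comparable pairs rather than invalidating the metric. All function names and toy data here are hypothetical.

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Harrell-style concordance index over possibly censored outcomes.

    A pair (i, j) is comparable when individual i's event is actually
    observed (events[i] == 1) and occurs before j's observed time; the
    ranking is concordant when i also receives the higher risk score.
    Censored labels (events == 0) drop out as incomparable pairs, which
    is why ranking-based criteria remain well-defined under label
    uncertainty.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5  # ties count as half-concordant
    return concordant / comparable if comparable else float("nan")

# Toy example: individual 1 is censored (label unknown after t = 8).
times = np.array([5.0, 8.0, 3.0, 10.0])
events = np.array([1, 0, 1, 1])
risks = np.array([0.9, 0.4, 0.7, 0.2])
print(concordance_index(times, events, risks))  # 0.8: one of five comparable pairs is ranked inconsistently
```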

publication date

  • September 28, 2023

Digital Object Identifier (DOI)

  • 10.3233/FAIA230621

start page

  • 3042

end page

  • 3049

volume

  • 372