Improving Fairness in Machine Learning Software via Counterfactual Fairness Thinking (Conference)

Yin, Z., Wang, Z., & Zhang, W. (2024). Improving Fairness in Machine Learning Software via Counterfactual Fairness Thinking. 420–421. https://doi.org/10.1145/3639478.3643531

cited authors

  • Yin, Z.; Wang, Z.; Zhang, W.

abstract

  • Machine Learning (ML) software increasingly influences decisions that affect individuals' lives. However, some of these decisions exhibit discrimination against certain social subgroups defined by sensitive attributes (e.g., gender or race), introducing algorithmic bias. This has elevated software fairness bugs to an increasingly significant concern in software engineering (SE). Most existing bias-mitigation work, however, enhances software fairness, a non-functional software property, at the cost of software performance. To address this, we propose a novel framework, Group Equality Counterfactual Fairness (GECF), which mitigates sensitive-attribute bias and labeling bias using counterfactual fairness while reducing the resulting performance loss via ensemble learning. Experimental results on 6 real-world datasets demonstrate the advantages of the proposed framework from multiple perspectives.
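
The abstract references counterfactual fairness, i.e., asking whether a model's prediction for an individual would change had their sensitive attribute been different, but this record gives no implementation details. The Python sketch below is a minimal, generic illustration of that idea on assumed synthetic data; it is not the authors' GECF framework, and all variable names and the data-generation setup are hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical synthetic data: column 0 is a binary sensitive attribute.
    n = 1000
    sensitive = rng.integers(0, 2, size=n)
    other = rng.normal(size=(n, 3))
    X = np.column_stack([sensitive, other])
    # Labels correlate with the sensitive attribute, emulating labeling bias.
    y = ((other[:, 0] + 0.8 * sensitive
          + rng.normal(scale=0.5, size=n)) > 0.5).astype(int)

    model = LogisticRegression().fit(X, y)

    # Counterfactual test: flip the sensitive attribute and compare predictions.
    # A nonzero flip rate indicates the model's decisions depend on it.
    X_cf = X.copy()
    X_cf[:, 0] = 1 - X_cf[:, 0]
    flip_rate = np.mean(model.predict(X) != model.predict(X_cf))
    print(f"Predictions changed by flipping the sensitive attribute: {flip_rate:.1%}")

A counterfactually fair model would produce a flip rate near zero; how GECF achieves this while preserving performance via ensemble learning is detailed in the paper itself.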

publication date

  • April 14, 2024

Digital Object Identifier (DOI)

  • 10.1145/3639478.3643531

start page

  • 420

end page

  • 421