Fairness in Language Models: A Tutorial

Wang, Z., Palikhe, A., Yin, Z., et al. (2025). Fairness in Language Models: A Tutorial. 6849-6852. https://doi.org/10.1145/3746252.3761453

cited authors

  • Wang, Z.; Palikhe, A.; Yin, Z.; Zhang, W.

abstract

  • Language Models (LMs) achieve outstanding performance across diverse applications but often produce biased outcomes, raising concerns about their trustworthy deployment. These concerns call for fairness research specific to LMs; however, most existing work in machine learning assumes access to model internals or training data, conditions that rarely hold in practice. As LMs continue to exert growing societal influence, it becomes increasingly important to understand and address fairness challenges unique to these models. To this end, our tutorial begins by showcasing real-world examples of bias to highlight their practical implications and uncover underlying sources. We then define fairness concepts tailored to LMs, review methods for bias evaluation and mitigation, and present a multi-dimensional taxonomy of benchmark datasets for fairness assessment. We conclude by outlining open research challenges, aiming to provide the community with both conceptual clarity and practical tools for fostering fairness in LMs. All tutorial resources are publicly accessible at https://github.com/vanbanTruong/fairness-in-large-language-models.

publication date

  • November 10, 2025

Digital Object Identifier (DOI)

  • 10.1145/3746252.3761453

start page

  • 6849

end page

  • 6852