University of Southampton Institutional Repository

Group vs. individual algorithmic fairness

Zhou, Wanying
1a590832-6cea-4f30-92c8-6bf6b07beafa
Gerding, Enrico
d9e92ee5-1a8c-4467-a689-8363e7743362

Zhou, Wanying (2022) Group vs. individual algorithmic fairness. University of Southampton, Doctoral Thesis, 66pp.

Record type: Thesis (Doctoral)

Abstract

Machine learning algorithms are increasingly used to make decisions that affect people's lives in areas such as loan applications, university admissions, insurance pricing and criminal justice sentencing. If the historical data used to train an algorithm are biased against certain demographic groups (e.g. black people or women), its predictions will be biased as well. From both regulatory and ethical perspectives, we need to reduce discrimination and improve group fairness, which concentrates on equalizing outcomes across distinct groups. However, there are cases where outcomes are unfair from an individual's point of view even when group fairness is satisfied. Individual fairness, which states that similar individuals should be treated similarly, is an equally important concept and needs to be considered carefully while improving group fairness, but it has not yet received much attention in the literature. Most existing fairness algorithms concentrate on achieving group fairness while disregarding individual fairness. It is therefore important to explore the relationship between the two notions and, specifically, in which cases individual fairness is affected when we improve group fairness.

Using results from real data sets, we show that, after removing the sensitive attributes, there generally exists a trade-off between group fairness and individual fairness. Moreover, using experimental results from simulated data sets, we show that satisfying group fairness decreases the level of individual fairness when the Wasserstein distance (a measure of the distance between two probability distributions) between the attribute distributions of the two groups is large. By adjusting the parameters of the simulated distributions, we show that individual fairness is more likely to be affected when a large Wasserstein distance is caused by a large difference in means rather than a large difference in variances.

Furthermore, we not only tweak the existing reweighing algorithm to obtain a more flexible trade-off between individual fairness and group fairness, but also construct a new algorithm to achieve fairness. This approach reduces the mean difference in attribute values between groups, thereby weakening the association between the sensitive attribute and the non-sensitive attributes. The method can achieve fairness among more than two demographic groups and applies to multi-class classification and regression problems. We assess its performance in terms of both group fairness and individual fairness, and the results show that it outperforms two existing fairness algorithms: reweighing and reject option based classification.
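
As an illustration of the Wasserstein-distance argument above, the sketch below (not taken from the thesis; the distribution parameters and variable names are illustrative assumptions) compares two simulated pairs of group attribute distributions with SciPy: one pair differing mainly in mean, the other mainly in variance. For univariate Gaussians the 2-Wasserstein distance has the closed form W2^2 = (mu1 - mu2)^2 + (sigma1 - sigma2)^2, which makes the mean and variance contributions explicit.

    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(0)

    # Scenario A: large mean difference, equal variances.
    group_a = rng.normal(loc=0.0, scale=1.0, size=10_000)
    group_b = rng.normal(loc=3.0, scale=1.0, size=10_000)

    # Scenario B: equal means, large variance difference.
    group_c = rng.normal(loc=0.0, scale=1.0, size=10_000)
    group_d = rng.normal(loc=0.0, scale=4.0, size=10_000)

    # Empirical 1-Wasserstein (earth mover's) distance between samples.
    print(wasserstein_distance(group_a, group_b))  # dominated by the mean gap
    print(wasserstein_distance(group_c, group_d))  # dominated by the variance gap

    # Closed-form 2-Wasserstein distance between two univariate Gaussians:
    # the mean and variance gaps contribute as separate squared terms.
    def w2_gaussian(mu1, sigma1, mu2, sigma2):
        return np.sqrt((mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2)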
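
The reweighing baseline mentioned in the abstract is, in the fairness literature, the preprocessing method of Kamiran and Calders (2012): each training example receives the weight P(S = s) * P(Y = y) / P(S = s, Y = y), so that the sensitive attribute S and the label Y look statistically independent in the weighted data. A minimal pandas sketch of that baseline follows (the thesis's tweaked variant is not reproduced here, and the toy column names are hypothetical):

    import pandas as pd

    def reweigh(df, sensitive, label):
        # Weight each example by P(s) * P(y) / P(s, y), estimated from the data.
        n = len(df)
        p_s = df[sensitive].value_counts() / n            # P(S = s)
        p_y = df[label].value_counts() / n                # P(Y = y)
        p_sy = df.groupby([sensitive, label]).size() / n  # P(S = s, Y = y)
        return df.apply(
            lambda row: p_s[row[sensitive]] * p_y[row[label]]
            / p_sy[(row[sensitive], row[label])],
            axis=1,
        )

    # Toy example: 'sex' as the sensitive attribute, 'hired' as the label.
    data = pd.DataFrame({
        "sex":   ["f", "f", "f", "m", "m", "m", "m", "m"],
        "hired": [0, 0, 1, 1, 1, 1, 0, 1],
    })
    data["weight"] = reweigh(data, "sex", "hired")

In this toy data the under-represented favourable outcome (a hired woman) receives a weight above 1, while the over-represented one (a hired man) receives a weight below 1.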
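
The abstract describes the new algorithm only at a high level: it reduces the mean difference in attribute values between groups so as to weaken the association between the sensitive attribute and the non-sensitive attributes. One plausible reading, sketched here purely as an assumption and not as the thesis's actual procedure, shifts each group's feature values towards the pooled mean; because the shift is defined per group and acts on the features rather than the labels, it extends naturally to more than two groups and to regression targets, consistent with the abstract's claims.

    import pandas as pd

    def reduce_mean_difference(df, sensitive, features, alpha=1.0):
        # Shift each group's features towards the pooled mean.
        # alpha = 1.0 removes the between-group mean difference entirely;
        # alpha = 0.0 leaves the data unchanged.
        out = df.copy()
        pooled_mean = df[features].mean()
        for _, idx in df.groupby(sensitive).groups.items():
            group_mean = df.loc[idx, features].mean()
            out.loc[idx, features] = df.loc[idx, features] + alpha * (pooled_mean - group_mean)
        return out

    # Toy example: both groups end up with the same mean income.
    data = pd.DataFrame({"sex": ["f", "f", "m", "m"],
                         "income": [20.0, 30.0, 50.0, 60.0]})
    print(reduce_mean_difference(data, "sex", ["income"]))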

Text
Wanying_Zhou_MPhil_report - Version of Record
Available under License University of Southampton Thesis Licence.
Download (1MB)
Text
Permission to deposit thesis - form
Restricted to Repository staff only
Available under License University of Southampton Thesis Licence.

More information

Submitted date: 25 April 2022

Identifiers

Local EPrints ID: 467483
URI: http://eprints.soton.ac.uk/id/eprint/467483
PURE UUID: 4634d4d5-77c4-4867-ad37-22349f8d2f54
ORCID for Enrico Gerding: orcid.org/0000-0001-7200-552X

Catalogue record

Date deposited: 11 Jul 2022 16:42
Last modified: 17 Mar 2024 03:03

Contributors

Author: Wanying Zhou
Thesis advisor: Enrico Gerding


