University of Southampton Institutional Repository

Artificializing whiteness? How AI bolsters white privilege


Leonard, Pauline (2022) Artificializing whiteness? How AI bolsters white privilege. In Andreasson, Rikke, Keskinen, Suvi, Tate, Shirley Anne and Lundstrom, Catrin (eds.) Routledge International Handbook of New Critical Race and Whiteness Studies. United States: Routledge. (In Press)

Record type: Book Section

Abstract

Over the last few years, a compelling body of evidence has emerged exposing the racism and sexism embedded in many contemporary applications of AI. These are predominantly data-driven technologies, such as automated decision-making (ADM) processes, in which decisions are made through ‘algorithms’, or sets of rules which find correlations between datasets, to inform a wide range of decisions (ICO 2020). In contrast, therefore, to the more futuristic, science-fiction versions imagined as ‘General’ AI (Broussard 2019), what we have now is ‘Narrow’ AI: machine learning techniques which use data to predict and determine outcomes across a range of service and governance contexts. ADM is increasingly widely used in the US (Benjamin 2019; Costanza-Chock 2020; Eubanks 2018; Noble 2018; Perez 2019) but is also starting to appear in service delivery elsewhere, such as the UK and Western Europe (Algorithm Watch 2020; Chiusi 2020). At the same time, a growing body of research from the US is cataloguing the injustices built into the very design of ADM. From denying loans, mortgages, and credit cards to minorities (Savchuk 2019), to profiling non-white faces as more likely to (re)commit crime (Cossins 2018), to designing technology which only recognizes white skin, such that self-driving cars are more likely to drive into black pedestrians (Cuthbertson 2019), recruitment software is more likely to select white candidates (Dastin 2018), beauty competitions are more likely to reject women with dark skin as ‘not beautiful’ (Levin 2016) and dispensers are more likely to release soap onto white hands (Morris 2020), the impact of ADM is clearly life-changing. In other words, many of the ADM systems currently deployed, by defaulting human beings to a white-biased physiognomy, naturalize Whiteness as a dominant social identity and, in the process, reinforce inequalities and oppressive social relationships (Noble 2018). At best, this renders black and ethnic minority people invisible; at worst, it significantly entrenches the denial of resources and life chances, and amplifies processes of discrimination and criminalization. I term such design processes ‘artificializing whiteness’: the social outcomes of AI are routinely constructed to artificially bolster white and, as I will proceed to demonstrate, usually also male and middle-class privilege.

In this chapter, I advance a conceptual discussion of this topic, bringing theoretical perspectives from Critical Race Theory (CRT) and Critical Whiteness Studies (CWS) into dialogue with Socio-Technical Studies (STS) and governmentality.

Text: Leonard Artificializing whiteness Revised version Final. Restricted to repository staff only.

More information

Accepted/In Press date: 9 May 2022

Identifiers

Local EPrints ID: 458182
URI: http://eprints.soton.ac.uk/id/eprint/458182
PURE UUID: 7437c95e-c56d-4234-a101-678af3ee2dac
ORCID for Pauline Leonard: orcid.org/0000-0002-8112-0631

Catalogue record

Date deposited: 30 Jun 2022 16:39
Last modified: 17 Mar 2024 02:41


Contributors

Author: Pauline Leonard
Editor: Rikke Andreasson
Editor: Suvi Keskinen
Editor: Shirley Anne Tate
Editor: Catrin Lundstrom


