University of Southampton Institutional Repository

AI audits for assessing design logics and building ethical systems: the case of predictive policing algorithms


Ugwudike, Pamela (2021) AI audits for assessing design logics and building ethical systems: the case of predictive policing algorithms. AI and Ethics. (doi:10.1007/s43681-021-00117-5).

Record type: Article

Abstract

Organisations, governments, institutions and others across several jurisdictions are currently using AI systems for a constellation of high-stakes decisions that pose implications for human rights and civil liberties. But a fast-growing multidisciplinary scholarship on AI bias is currently documenting problems such as the discriminatory labelling and surveillance of historically marginalised subgroups. One of the ways in which AI systems generate such downstream outcomes is through their inputs. This paper focuses on a specific input dynamic which is the theoretical foundation that informs the design, operation, and outputs of such systems. The paper uses the set of technologies known as predictive policing algorithms as a case example to illustrate how theoretical assumptions can pose adverse social consequences and should therefore be systematically evaluated during audits if the objective is to detect unknown risks, avoid AI harms, and build ethical systems. In its analysis of these issues, the paper adds a new dimension to the literature on AI ethics and audits by investigating algorithmic impact in the context of underpinning theory. In doing so, the paper provides insights that can usefully inform auditing policy and practice instituted by relevant stakeholders including the developers, vendors, and procurers of AI systems as well as independent auditors.

Text: AI and Ethics - Accepted Manuscript (63kB)
Text: 2021_Article_ - Version of Record (655kB), available under a Creative Commons Attribution licence

More information

Accepted/In Press date: 28 October 2021
e-pub ahead of print date: 13 December 2021
Additional Information: Invited paper for Special Issue on: AI Auditing, Assurance, and Certification, Submitted to AI and Ethics Journal
Keywords: AI Audit, AI Ethics, AI bias, Fair AI, Transparent AI, Predictive Policing Algorithms

Identifiers

Local EPrints ID: 452784
URI: http://eprints.soton.ac.uk/id/eprint/452784
ISSN: 2730-5961
PURE UUID: 4be7b1ff-6345-45b2-a0ce-cb335398b33e
ORCID for Pamela Ugwudike: orcid.org/0000-0002-1084-7796

Catalogue record

Date deposited: 20 Dec 2021 17:41
Last modified: 17 Mar 2024 03:47


