AI audits for assessing design logics and building ethical systems: the case of predictive policing algorithms
Ugwudike, Pamela
(2021) AI audits for assessing design logics and building ethical systems: the case of predictive policing algorithms. AI and Ethics. (doi:10.1007/s43681-021-00117-5)
Abstract
Organisations, governments, institutions and others across several jurisdictions currently use AI systems for a constellation of high-stakes decisions with implications for human rights and civil liberties. Yet a fast-growing multidisciplinary scholarship on AI bias is documenting problems such as the discriminatory labelling and surveillance of historically marginalised subgroups. One of the ways in which AI systems generate such downstream outcomes is through their inputs. This paper focuses on a specific input dynamic: the theoretical foundation that informs the design, operation, and outputs of such systems. The paper uses the set of technologies known as predictive policing algorithms as a case example to illustrate how theoretical assumptions can have adverse social consequences and should therefore be systematically evaluated during audits if the objective is to detect unknown risks, avoid AI harms, and build ethical systems. In its analysis of these issues, the paper adds a new dimension to the literature on AI ethics and audits by investigating algorithmic impact in the context of underpinning theory. In doing so, the paper provides insights that can usefully inform auditing policy and practice instituted by relevant stakeholders, including the developers, vendors, and procurers of AI systems as well as independent auditors.
Text: AI and Ethics (Accepted Manuscript)
Text: 2021_Article_ (Version of Record)
More information
Accepted/In Press date: 28 October 2021
e-pub ahead of print date: 13 December 2021
Additional Information:
Invited paper for Special Issue on: AI Auditing, Assurance, and Certification, Submitted to AI and Ethics Journal
Keywords:
AI Audit, AI Ethics, AI bias, Fair AI, Transparent AI, Predictive Policing Algorithms
Identifiers
Local EPrints ID: 452784
URI: http://eprints.soton.ac.uk/id/eprint/452784
ISSN: 2730-5961
PURE UUID: 4be7b1ff-6345-45b2-a0ce-cb335398b33e
Catalogue record
Date deposited: 20 Dec 2021 17:41
Last modified: 17 Mar 2024 03:47