University of Southampton Institutional Repository

Explainability, public reason, and medical artificial intelligence

Da Silva, Michael (2023) Explainability, public reason, and medical artificial intelligence. Ethical Theory and Moral Practice, 26 (5), 743–762. (doi:10.1007/s10677-023-10390-4).

Record type: Article

Abstract

The contention that medical artificial intelligence (AI) should be ‘explainable’ is widespread in contemporary philosophy and in legal and best practice documents. Yet critics argue that ‘explainability’ is not a stable concept; non-explainable AI is often more accurate; mechanisms intended to improve explainability do not improve understanding and introduce new epistemic concerns; and explainability requirements are ad hoc where human medical decision-making is often opaque. A recent ‘political response’ to these issues contends that AI used in high-stakes scenarios, including medical AI, must be explainable to meet basic standards of legitimacy: People are owed reasons for decisions that impact their vital interests, and this requires explainable AI. This article demonstrates why the political response fails. Attending to systemic considerations, as its proponents desire, suggests that the political response is subject to the same criticisms as other arguments for explainable AI and presents new issues. It also suggests that decision-making about non-explainable medical AI can meet public reason standards. The most plausible version of the response amounts to a simple claim that public reason demands reasons why AI is permitted. But that does not actually support explainable AI or respond to criticisms of strong requirements for explainable medical AI.

Text: Da Silva ETMP AAM - Accepted Manuscript
Available under License Creative Commons Attribution.
Download (263kB)

Text: s10677-023-10390-4 - Version of Record
Available under License Creative Commons Attribution.
Download (801kB)

More information

Accepted/In Press date: 22 April 2023
e-pub ahead of print date: 26 May 2023
Published date: November 2023
Additional Information: Publisher Copyright: © 2023, The Author(s).
Keywords: AI, artificial intelligence, governance, political philosophy, public reason

Identifiers

Local EPrints ID: 476908
URI: http://eprints.soton.ac.uk/id/eprint/476908
ISSN: 1386-2820
PURE UUID: 2c750dbe-2f8b-4f72-b655-14fe4b18dd36
ORCID for Michael Da Silva: orcid.org/0000-0002-7021-9847

Catalogue record

Date deposited: 19 May 2023 16:33
Last modified: 22 Mar 2024 03:03

Contributors

Author: Michael Da Silva
