University of Southampton Institutional Repository

Trust, regulation, and human-in-the-loop AI: within the European region


Middleton, Stuart, Letouzé, Emmanuel, Hossaini, Ali and Chapman, Age (2022) Trust, regulation, and human-in-the-loop AI: within the European region. Communications of the ACM, 65 (4), 64–68. (doi:10.1145/3511597).

Record type: Article

Abstract

Artificial intelligence (AI) systems employ learning algorithms that adapt to their users and environment, with learning either pre-trained or allowed to adapt during deployment. Because AI can optimize its behaviour, a unit's factory model behaviour can diverge after release, often at the perceived expense of safety, reliability, and human controllability. Since the Industrial Revolution, trust has ultimately resided in regulatory systems set up by governments and standards bodies. Research into human interactions with autonomous machines demonstrates a shift in the locus of trust: we must trust non-deterministic systems such as AI to self-regulate, albeit within boundaries. This radical shift is one of the biggest issues facing the deployment of AI in the European region.

Trust has no universally accepted definition, but Rousseau et al. [Rousseau 1998] define it as "a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behaviour of another". Trust is an attitude that an agent will behave as expected and can be relied upon to reach its goal. Trust breaks down after an error or a misunderstanding between the agent and the trusting individual. The psychological state of trust in AI is an emergent property of a complex system, usually involving many cycles of design, training, deployment, measurement of performance, regulation, redesign and retraining.

Trust matters, especially in critical sectors such as healthcare and defence & security, where duty of care is foremost. Trustworthiness must be planned rather than an afterthought. We can trust in AI, such as when a doctor uses algorithms to screen medical images [NHS-X 2021]. We can also trust with AI, such as when journalists reference a social network algorithm to analyse the sources of a news story [WeVerify 2021]. Growing adoption of AI into institutional systems relies on citizens trusting these systems and having confidence in the way they are designed and regulated.

Regional approaches to managing trust in AI have recently emerged, leading to different regulatory regimes in the United States, the European region and China. We review these regulatory divergences. Within the European region, research programmes are examining how trust affects user acceptance of AI; examples include the UKRI Trustworthy Autonomous Systems Hub, the French Confiance.ai project and the German AI Breakthrough Hub. Europe appears to be developing a "third way" alongside the United States and China [Morton 2021].

Healthcare contains many examples of AI applications, including online harm risk identification [ProTechThem 2021], mental health behaviour classification [SafeSpacesNLP 2021] and automated blood testing [Pinpoint 2021]. In defence and security, examples include combat management systems [DSTL 2021] and using machine learning to identify chemical and biological contamination [Alan Turing Institute 2021]. There is a growing awareness within critical sectors [Kerasidou 2020] [Taddeo 2019] that AI systems need to address a "public trust deficit" by building reliability into the perception of AI. In the next two sections we discuss research highlights around two key trends: building safer and more reliable AI systems to engender trust, and putting humans in the loop with AI systems and teams. We conclude with a discussion of applications and the future outlook in this area.

Text
CACM-final-06-12-2021 - Accepted Manuscript
Available under License Creative Commons Attribution.
Download (205kB)

More information

Accepted/In Press date: 23 November 2021
e-pub ahead of print date: 19 March 2022
Published date: 19 March 2022
Additional Information: Funding Information: Creating regulatory environments that allow nation-states to gain commercial, military, and social advantages in the global AI race may be the defining geopolitical challenge of this century. Regulation around AI has been developing worldwide, moving from self-assessment guidelines [3] to frameworks for national or transnational regulation. We have noted that there are clear differences between the European region and other areas with robust capacity in AI, notably the need for public acceptance. The future will be a highly competitive environment, and regulation must balance the benefits of rapid deployment, the willingness of individuals to trust AI, and the value systems which underlie trust. Acknowledgments. This work was supported by the Engineering and Physical Sciences Research Council (EP/V00784X/1), Natural Environment Research Council (NE/S015604/1), and Economic and Social Research Council (ES/V011278/1; ES/R003254/1).
Keywords: Artificial Intelligence, AI, Natural Language Processing, NLP, Human-in-the-loop-AI, Regulation, Trust

Identifiers

Local EPrints ID: 452890
URI: http://eprints.soton.ac.uk/id/eprint/452890
ISSN: 0001-0782
PURE UUID: 9b061bde-63da-4d56-af74-a40e5ced2fbd
ORCID for Stuart Middleton: orcid.org/0000-0001-8305-8176
ORCID for Age Chapman: orcid.org/0000-0002-3814-2587

Catalogue record

Date deposited: 06 Jan 2022 17:45
Last modified: 17 Mar 2024 03:46

Contributors

Author: Emmanuel Letouzé
Author: Ali Hossaini
Author: Age Chapman


