University of Southampton Institutional Repository

Reasoning About Responsibility in Autonomous Systems: Challenges and Opportunities

Yazdanpanah, Vahid, Gerding, Enrico, Stein, Sebastian, Dastani, Mehdi, Jonker, Catholijn M., Norman, Timothy and Ramchurn, Sarvapali (2022) Reasoning About Responsibility in Autonomous Systems: Challenges and Opportunities. AI & Society. (In Press)

Record type: Article

Abstract

Ensuring the trustworthiness of autonomous systems and artificial intelligence is an important interdisciplinary endeavour. In this position paper, we argue that this endeavour will benefit from technical advancements in capturing various forms of responsibility, and we present a comprehensive research agenda to achieve this. In particular, we argue that ensuring the reliability of autonomous systems can take advantage of technical approaches for quantifying degrees of responsibility and for coordinating tasks based on those degrees. Moreover, we deem that, in certifying the legality of an AI system, formal and computationally implementable notions of responsibility, blame, accountability, and liability are applicable for addressing potential responsibility gaps (i.e., situations in which a group is responsible, but individuals’ responsibility may be unclear). This is a call to enable AI systems themselves, as well as those involved in the design, monitoring, and governance of AI systems, to represent and reason about who can be seen as responsible in prospect (e.g., for completing a task in the future) and who can be seen as responsible retrospectively (e.g., for a failure that has already occurred). To that end, in this work, we show that responsibility reasoning should play a key role across all stages of the design, development, and deployment of Trustworthy Autonomous Systems (TAS). This position paper is the first step towards establishing a roadmap and research agenda on how the notion of responsibility can provide novel solution concepts for ensuring the reliability and legality of TAS and, as a result, enable an effective embedding of AI technologies into society.
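The abstract refers to quantifying degrees of responsibility in multiagent settings. As an illustration only (the paper itself does not prescribe a particular method, and the function names here are hypothetical), one common formal approach is a Shapley-value-style attribution, which credits each agent with its average marginal contribution to achieving a joint outcome:

```python
from itertools import permutations

def responsibility_degrees(agents, achieves):
    """Shapley-style degrees of responsibility: each agent's average
    marginal contribution to the outcome, over all orders of joining.

    `achieves` maps a coalition (set of agents) to 1 if it secures the
    outcome and 0 otherwise. This is an illustrative sketch, not the
    paper's own formalism.
    """
    orders = list(permutations(agents))
    degrees = {a: 0.0 for a in agents}
    for order in orders:
        coalition = set()
        for agent in order:
            before = achieves(coalition)
            coalition.add(agent)
            after = achieves(coalition)
            # Credit the agent whose joining flips the outcome.
            degrees[agent] += (after - before) / len(orders)
    return degrees

# Example: a task succeeds once at least two of three agents cooperate;
# by symmetry each agent receives a degree of 1/3.
achieves = lambda c: 1 if len(c) >= 2 else 0
print(responsibility_degrees(["a1", "a2", "a3"], achieves))
```

Attributions computed this way always sum to the value of the full coalition, which is one reason such measures are attractive for addressing the responsibility gaps (group responsible, individual shares unclear) mentioned above.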

Text
Reasoning About Responsibility in Autonomous Systems - Accepted Manuscript
Available under License Creative Commons Attribution.
Download (175kB)

More information

Accepted/In Press date: 18 November 2022
Keywords: Trustworthy Autonomous Systems, human-agent collectives, Multiagent Responsibility Reasoning, Multiagent Systems, Artificial Intelligence, Human-Centred AI, Citizen-Centric AI Systems

Identifiers

Local EPrints ID: 471971
URI: http://eprints.soton.ac.uk/id/eprint/471971
ISSN: 0951-5666
PURE UUID: 741480fe-5205-4e61-b796-ce2b51e436bb
ORCID for Vahid Yazdanpanah: orcid.org/0000-0002-4468-6193
ORCID for Enrico Gerding: orcid.org/0000-0001-7200-552X
ORCID for Sebastian Stein: orcid.org/0000-0003-2858-8857
ORCID for Timothy Norman: orcid.org/0000-0002-6387-4034
ORCID for Sarvapali Ramchurn: orcid.org/0000-0001-9686-4302

Catalogue record

Date deposited: 23 Nov 2022 17:32
Last modified: 16 Apr 2024 01:58


Contributors

Author: Vahid Yazdanpanah
Author: Enrico Gerding
Author: Sebastian Stein
Author: Mehdi Dastani
Author: Catholijn M. Jonker
Author: Timothy Norman
Author: Sarvapali Ramchurn


