University of Southampton Institutional Repository

Applying strategic reasoning for accountability ascription in multiagent teams

Yazdanpanah, Vahid
28f82058-5e51-4f56-be14-191ab5767d56
Stein, Sebastian
cb2325e7-5e63-475e-8a69-9db2dfbdb00b
Gerding, Enrico
d9e92ee5-1a8c-4467-a689-8363e7743362
Jennings, Nicholas R.
3f6b53c2-4b6d-4b9d-bb51-774898f6f136

Yazdanpanah, Vahid, Stein, Sebastian, Gerding, Enrico and Jennings, Nicholas R. (2021) Applying strategic reasoning for accountability ascription in multiagent teams. The IJCAI-21 Workshop on Artificial Intelligence Safety (AISafety 2021), Virtual. 21 - 22 Aug 2021. 9 pp.

Record type: Conference or Workshop Item (Paper)

Abstract

To develop human-centred trustworthy autonomous systems and ensure their safe and effective integration with society, it is crucial to enrich autonomous agents with the capacity to represent and reason about their accountability. This concerns, on the one hand, their accountability as collaborative teams and, on the other, each agent's individual degree of accountability within a team. In this context, accountability is understood as being responsible for failing to deliver a task that a team was allocated and was able to fulfil. To that end, the semantic (strategic reasoning) machinery of Alternating-time Temporal Logic (ATL) is a natural modelling approach, as it captures the temporal, strategic, and coalitional dynamics of the notion of accountability. This allows us to focus on the main problem: “Who is accountable for an unfulfilled task in multiagent teams: when, why, and to what extent?” We apply ATL-based semantics to define accountability in multiagent teams and develop a fair and computationally feasible procedure for ascribing a degree of accountability to the agents involved in accountable teams. Our main results concern the decidability, fairness properties, and computational complexity of the presented accountability ascription methods.
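The abstract's notion of a degree of accountability can be illustrated with a small sketch. The paper's actual ascription procedure is ATL-based and is not reproduced here; the Banzhaf-style power index below, along with the `able` and `allocated` predicates, is a hypothetical stand-in chosen only to show the shape of such a coalition-based computation.

```python
from itertools import combinations

def accountable(team, allocated, able):
    """Hedged reading of the abstract's informal definition: a team is
    accountable for a task failure if the task was allocated to the
    team and the team was able to fulfil it."""
    return allocated(team) and able(team)

def degree_of_accountability(team, able):
    """Illustrative Banzhaf-style degrees: agent i's normalised count of
    sub-coalitions C of the team for which adding i turns an unable
    coalition into an able one. This power-index reading is an
    assumption for illustration, not the paper's exact procedure."""
    members = tuple(team)
    raw = {i: 0 for i in members}
    for i in members:
        rest = [j for j in members if j != i]
        for r in range(len(rest) + 1):
            for c in combinations(rest, r):
                # i is "pivotal" for C if C alone cannot fulfil the
                # task but C together with i can.
                if able(frozenset(c) | {i}) and not able(frozenset(c)):
                    raw[i] += 1
    total = sum(raw.values())
    return {i: (raw[i] / total if total else 0.0) for i in members}

# Hypothetical ability relation: the task needs agent 'a' plus at
# least one of 'b' or 'c'.
able = lambda coalition: 'a' in coalition and ('b' in coalition or 'c' in coalition)
degrees = degree_of_accountability({'a', 'b', 'c'}, able)
# 'a' is pivotal for three sub-coalitions, 'b' and 'c' for one each,
# so 'a' receives degree 0.6 while 'b' and 'c' each receive 0.2.
```

Normalising the pivotality counts makes the degrees sum to one over the team, mirroring the fairness intuition that an agent who was essential to more ways of fulfilling the task bears a larger share of the blame when it goes unfulfilled.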

Text: Accountability Ascription in Multiagent Teams - Version of Record (319kB)

More information

Published date: 21 August 2021
Additional Information: Funding Information: This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) through the Trustworthy Autonomous Systems Hub (EP/V00784X/1), the platform grant entitled “AutoTrust: Designing a Human-Centred Trusted, Secure, Intelligent and Usable Internet of Vehicles” (EP/R029563/1), and the Turing AI Fellowship on Citizen-Centric AI Systems (EP/V022067/1). Publisher Copyright: © 2021 CEUR-WS. All rights reserved.
Venue - Dates: The IJCAI-21 Workshop on Artificial Intelligence Safety (AISafety 2021), Virtual, 2021-08-21 - 2021-08-22
Keywords: Accountability, Formal Methods, Modal logics, Multiagent Systems, Responsibility Reasoning, Strategic Reasoning, temporal logics

Identifiers

Local EPrints ID: 449852
URI: http://eprints.soton.ac.uk/id/eprint/449852
PURE UUID: db3d1793-710e-4282-a5a1-03a2a292bf47
ORCID for Vahid Yazdanpanah: orcid.org/0000-0002-4468-6193
ORCID for Sebastian Stein: orcid.org/0000-0003-2858-8857
ORCID for Enrico Gerding: orcid.org/0000-0001-7200-552X

Catalogue record

Date deposited: 22 Jun 2021 16:31
Last modified: 17 Mar 2024 04:02


Contributors

Author: Vahid Yazdanpanah
Author: Sebastian Stein
Author: Enrico Gerding
Author: Nicholas R. Jennings



