University of Southampton Institutional Repository

The dual function of explanations: Why it is useful to compute explanations

Tsakalakis, Niko; Stalla-Bourdillon, Sophie; Carmichael, Laura; Huynh, Trung Dong; Moreau, Luc; Helal, Ayah

Tsakalakis, Niko, Stalla-Bourdillon, Sophie, Carmichael, Laura, Huynh, Trung Dong, Moreau, Luc and Helal, Ayah (2021) The dual function of explanations: Why it is useful to compute explanations. Computer Law and Security Review: The International Journal of Technology Law and Practice, 41, [105527]. (doi:10.1016/j.clsr.2020.105527).

Record type: Article

Abstract

Whilst the legal debate concerning automated decision-making has focused mainly on whether a ‘right to explanation’ exists in the GDPR, the emergence of ‘explainable Artificial Intelligence’ (XAI) has produced taxonomies for the explanation of Artificial Intelligence (AI) systems. However, various researchers have warned that transparency of the algorithmic processes is not in itself enough. Better and easier tools are needed for the assessment and review of the socio-technical systems that incorporate automated decision-making. The PLEAD project suggests that, aside from fulfilling the obligations set forth by Article 22 of the GDPR, explanations can also contribute to a holistic compliance strategy if used as detective controls. PLEAD aims to show that computable explanations can facilitate monitoring and auditing, and make compliance more systematic. Automated computable explanations can be key controls in fulfilling accountability and data-protection-by-design obligations, empowering both controllers and data subjects. This opinion piece presents the work undertaken by the PLEAD project towards facilitating the generation of computable explanations. PLEAD leverages provenance-based technology to compute explanations as external detective controls to the benefit of data subjects and as internal detective controls to the benefit of the data controller.

Text: Tsakalakis_clsr_2020_accepted - Accepted Manuscript (140kB)

More information

Accepted/In Press date: 21 December 2020
e-pub ahead of print date: 18 March 2021
Published date: July 2021
Additional Information: Funding Information: The work presented here has been supported by the UK Engineering and Physical Sciences Research Council (EPSRC) under Grant numbers EP/S027238/1 and EP/S027254/1. Publisher Copyright: © 2021 Niko Tsakalakis, Sophie Stalla-Bourdillon, Laura Carmichael, Trung Dong Huynh, Luc Moreau, Ayah Helal
Keywords: Artificial intelligence, Automated decisions, Explainability, Explainable AI, GDPR

Identifiers

Local EPrints ID: 446899
URI: http://eprints.soton.ac.uk/id/eprint/446899
ISSN: 2212-4748
PURE UUID: 3f3fefcc-35fa-4fef-8715-eeaa7fc5178c
ORCID for Niko Tsakalakis: orcid.org/0000-0003-2654-0825
ORCID for Sophie Stalla-Bourdillon: orcid.org/0000-0003-3715-1219
ORCID for Laura Carmichael: orcid.org/0000-0001-9391-1310

Catalogue record

Date deposited: 25 Feb 2021 17:45
Last modified: 06 Jun 2024 04:08

Contributors

Author: Niko Tsakalakis
Author: Sophie Stalla-Bourdillon
Author: Laura Carmichael
Author: Trung Dong Huynh
Author: Luc Moreau
Author: Ayah Helal

