The dual function of explanations: why it is useful to compute explanations
Tsakalakis, Nikolaos (PURE UUID: eae42e98-58b8-45b9-8c11-35a798cc9671)
Stalla-Bourdillon, Sophie (PURE UUID: c189651b-9ed3-49f6-bf37-25a47c487164)
Carmichael, Laura (PURE UUID: 3f71fb73-581b-43c3-a261-a6627994c96e)
Huynh, Trung Dong (PURE UUID: 9e04643c-cdd0-41ce-a641-00df4804280f)
Moreau, Luc (PURE UUID: 0b53974f-3e78-4c56-a47e-799d9f220911)
Helal, Ayah (PURE UUID: 0e44c81d-ef58-4503-b8cb-3d1fb82ec651)
Tsakalakis, Nikolaos, Stalla-Bourdillon, Sophie, Carmichael, Laura, Huynh, Trung Dong, Moreau, Luc and Helal, Ayah (2020) The dual function of explanations: why it is useful to compute explanations. Computers, Privacy & Data Protection 2021: Enforcing Rights in a Changing World, Brussels, Belgium, 27-29 Jan 2021. (In Press)
Record type: Conference or Workshop Item (Paper)
Abstract
The increasing dependence of decision-making on some level of automation has naturally led to discussions about the trustworthiness of such automation, calls for transparent automated decision-making and the emergence of ‘explainable Artificial Intelligence’ (XAI). Although XAI research has produced a number of taxonomies for the explanation of Artificial Intelligence (AI) and Machine Learning (ML) models, the legal debate has so far focused mainly on whether a ‘right to explanation’ exists in the GDPR. Lately, a growing body of interdisciplinary literature has concentrated on the goals and substance of explanations produced for automated decision-making, with a view to clarifying their role and improving their value against unfairness, discrimination and opacity for the purpose of ensuring compliance with Article 22 of the GDPR. At the same time, several researchers have warned that transparency of the algorithmic processes is not in itself enough, and that tools are needed for better and easier assessment and review of the whole socio-technical system that includes automated decision-making. We suggest that explanations can be relevant to most of the obligations set forth by the GDPR and can assist towards a holistic compliance strategy if used as detective controls. Automating the creation of computable explanations that can support breach detection has the power to make compliance more systematic and to facilitate monitoring and auditing. Carefully constructed explanations can empower both data subjects and controllers and should be seen as key controls for meeting accountability and data-protection-by-design obligations. This paper thus presents the work undertaken by the PLEAD project towards ‘explainable-by-design’ socio-technical systems. PLEAD acknowledges the dual function of explanations as external detective controls (to benefit data subjects) and as internal detective controls (to benefit data controllers), and leverages provenance-based technology to compute explanations and support the deployment of systematic compliance strategies.
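To make the abstract's reference to "provenance-based technology" concrete: a purely illustrative sketch, not the PLEAD implementation, is given below. It uses the Python 'prov' package (a reference implementation of the W3C PROV data model) to record which inputs an automated assessment consumed when producing a decision; all identifiers (ex:loan-decision-42, ex:automated-assessment, and so on) are invented for illustration.

    # Minimal, hypothetical sketch of recording decision provenance with the
    # W3C PROV data model, via the Python 'prov' package. This is not the
    # PLEAD implementation; every identifier below is an assumption.
    from prov.model import ProvDocument

    doc = ProvDocument()
    doc.add_namespace('ex', 'http://example.org/')  # hypothetical namespace

    # The automated decision, the pipeline step that produced it, the data
    # it consumed, and the responsible data controller.
    decision = doc.entity('ex:loan-decision-42')
    assessment = doc.activity('ex:automated-assessment')
    application = doc.entity('ex:application-form-42')
    controller = doc.agent('ex:data-controller')

    # PROV relations: the assessment used the application data, generated
    # the decision, and was carried out under the controller's authority.
    doc.used(assessment, application)
    doc.wasGeneratedBy(decision, assessment)
    doc.wasAssociatedWith(assessment, controller)
    doc.wasDerivedFrom(decision, application)

    # Serialise the graph as PROV-N. Querying a graph like this can answer
    # 'which inputs led to this decision, and who is responsible?', the raw
    # material for both external explanations (for data subjects) and
    # internal detective controls (for controllers).
    print(doc.get_provn())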
More information
Accepted/In Press date: 3 December 2020
Venue - Dates:
Computers, Privacy & Data Protection 2021: Enforcing Rights in a Changing World, Brussels, Belgium, 2021-01-27 - 2021-01-29
Identifiers
Local EPrints ID: 445457
URI: http://eprints.soton.ac.uk/id/eprint/445457
PURE UUID: bfa09290-4032-4f85-b4e6-361f4724c7ee
Catalogue record
Date deposited: 09 Dec 2020 17:31
Last modified: 08 Jul 2022 01:47
Contributors
Author: Nikolaos Tsakalakis
Author: Sophie Stalla-Bourdillon
Author: Laura Carmichael
Author: Trung Dong Huynh
Author: Luc Moreau
Author: Ayah Helal