University of Southampton Institutional Repository

Multilevel explainable artificial intelligence: visual and linguistic bonded explanations

Aysel, Halil Ibrahim
9db69eca-47c7-4443-86a1-33504e172d60
Cai, Xiaohao
de483445-45e9-4b21-a4e8-b0427fc72cee
Prugel-Bennett, Adam
b107a151-1751-4d8b-b8db-2c395ac4e14e

Aysel, Halil Ibrahim, Cai, Xiaohao and Prugel-Bennett, Adam (2023) Multilevel explainable artificial intelligence: visual and linguistic bonded explanations. IEEE Transactions on Artificial Intelligence. (doi:10.1109/TAI.2023.3308555).

Record type: Article

Abstract

Applications of deep neural networks (DNNs) are booming in ever more fields, yet these models lack transparency due to their black-box nature. Explainable artificial intelligence (XAI) is therefore of paramount importance, proposing strategies to understand how such black-box models function. Research so far has mainly focused on producing, for example, class-wise saliency maps that highlight the parts of a given image that most affect the prediction. However, this approach does not fully reflect the way humans explain their reasoning, and validating these maps is complex and generally requires subjective interpretation. In this article, we approach XAI differently, proposing a new XAI methodology that operates on multiple levels (i.e., visual and linguistic). By leveraging the interplay between the learned representations, i.e., image features and linguistic attributes, the proposed approach provides salient attributes and attribute-wise saliency maps, which are far more intuitive than class-wise maps, without requiring per-image ground-truth human explanations. It introduces self-interpretable attributes to overcome current limitations in XAI and to bring XAI closer to human-like explanation. The proposed architecture is simple to use and, thanks to low-cost per-class attributes, reaches surprisingly good performance in both prediction and explainability for deep neural networks.

This record has no associated files available for download.
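
Since no implementation accompanies this record, the following is only a rough sketch of the kind of architecture the abstract describes: image features are mapped to linguistic attribute scores, class predictions are obtained from fixed per-class attribute vectors, and an attribute-wise saliency map is taken as the input gradient of a chosen attribute score. All names, dimensions, the choice of PyTorch, and the gradient-based saliency are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn


class AttributeBondedClassifier(nn.Module):
    """Hypothetical sketch: image features -> attribute scores -> class logits."""

    def __init__(self, backbone: nn.Module, feat_dim: int, class_attributes: torch.Tensor):
        super().__init__()
        self.backbone = backbone  # any feature extractor returning (B, feat_dim)
        self.attr_head = nn.Linear(feat_dim, class_attributes.shape[1])
        # Fixed per-class attribute matrix (num_classes x num_attributes), e.g. built
        # from cheap class-level annotations rather than per-image human explanations.
        self.register_buffer("class_attributes", class_attributes)

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)                    # (B, feat_dim)
        attr_scores = self.attr_head(feats)              # (B, num_attributes): linguistic level
        logits = attr_scores @ self.class_attributes.T   # (B, num_classes): class prediction
        return logits, attr_scores


def attribute_saliency(model: AttributeBondedClassifier, images: torch.Tensor, attr_idx: int) -> torch.Tensor:
    # Gradient of one attribute score w.r.t. the input pixels, used here as a crude
    # attribute-wise saliency map (an assumption, not necessarily the paper's exact method).
    images = images.clone().requires_grad_(True)
    _, attr_scores = model(images)
    attr_scores[:, attr_idx].sum().backward()
    return images.grad.abs().amax(dim=1)                 # (B, H, W) map per image

In this reading, the per-class attribute matrix acts as a low-cost linguistic bottleneck: the same attribute scores that drive the class prediction also indicate which properties were salient and where.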

More information

e-pub ahead of print date: 25 August 2023
Keywords: Annotations, black box, Closed box, Deep neural networks, explainable artificial intelligence, Feature extraction, Linguistics, Predictive models, saliency maps, Training, Visualization

Identifiers

Local EPrints ID: 486116
URI: http://eprints.soton.ac.uk/id/eprint/486116
PURE UUID: 3357d84e-f872-400a-b362-f18ae56e26e3
ORCID for Halil Ibrahim Aysel: orcid.org/0000-0002-4981-0827
ORCID for Xiaohao Cai: orcid.org/0000-0003-0924-2834

Catalogue record

Date deposited: 10 Jan 2024 17:30
Last modified: 18 Mar 2024 04:00

Contributors

Author: Halil Ibrahim Aysel
Author: Xiaohao Cai
Author: Adam Prugel-Bennett
