Explainable artificial intelligence: advancements and limitations
Explainable artificial intelligence (XAI) has emerged as a crucial field for understanding and interpreting the decisions of complex machine learning models, particularly deep neural networks. This review presents a structured overview of XAI methodologies, encompassing a diverse range of techniques designed to provide explainability at different levels of abstraction. We cover pixel-level explanation strategies such as saliency maps, perturbation-based methods and gradient-based visualisations, as well as concept-based approaches that align model behaviour with human-understandable semantics. Additionally, we touch upon the relevance of XAI in the context of weakly supervised semantic segmentation. By synthesising recent developments, this paper aims to clarify the landscape of XAI methods and offer insights into their comparative utility and role in fostering trustworthy AI systems.
concept-based XAI, deep neural networks, explainable AI, post hoc explanations, saliency maps, semantic segmentation
Aysel, Halil Ibrahim
9db69eca-47c7-4443-86a1-33504e172d60
Cai, Xiaohao
de483445-45e9-4b21-a4e8-b0427fc72cee
Prugel-Bennett, Adam
b107a151-1751-4d8b-b8db-2c395ac4e14e
27 June 2025
Aysel, Halil Ibrahim, Cai, Xiaohao and Prugel-Bennett, Adam (2025) Explainable artificial intelligence: advancements and limitations. Applied Sciences, 15 (13), [7261]. (doi:10.3390/app15137261).
Text: applsci-15-07261-v2 (Version of Record)
More information
Accepted/In Press date: 25 June 2025
Published date: 27 June 2025
Identifiers
Local EPrints ID: 504075
URI: http://eprints.soton.ac.uk/id/eprint/504075
ISSN: 2076-3417
PURE UUID: 192e87d1-0bd3-4cfc-80b6-7b614212b7c8
Catalogue record
Date deposited: 22 Aug 2025 16:40
Last modified: 23 Aug 2025 02:19