University of Southampton Institutional Repository

Multilevel explainable artificial intelligence methods and applications

University of Southampton
Aysel, Halil Ibrahim
9db69eca-47c7-4443-86a1-33504e172d60
Cai, Xiaohao
de483445-45e9-4b21-a4e8-b0427fc72cee
Prügel-Bennett, Adam
b107a151-1751-4d8b-b8db-2c395ac4e14e

Aysel, Halil Ibrahim (2025) Multilevel explainable artificial intelligence methods and applications. University of Southampton, Doctoral Thesis, 150pp.

Record type: Thesis (Doctoral)

Abstract

Thanks to their astonishing prediction ability, deep neural networks (DNNs) have been deployed in various disciplines, from computer vision to natural language processing. However, their opaque decision-making mechanism makes it challenging to employ them in sensitive areas such as healthcare, legal settings, and autonomous driving. Many works have been proposed in the explainable artificial intelligence (XAI) field to overcome this issue and make DNNs more transparent, trustworthy, and deployable. However, most of these methodologies suffer from several drawbacks.

This thesis explores the current landscape of XAI and identifies critical shortcomings in the field that require urgent attention. Through a thorough examination of these limitations, we reveal key gaps that motivate our contributions. To address these challenges, we propose a novel methodology, multilevel XAI, which generates human-like explanations in the form of linguistic and visual concepts for machine learning and computer vision tasks. Our approach demonstrates that producing multilevel concept-based explanations can be both cost-effective and achieved without significantly compromising model performance.
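To make the idea of multilevel concept-based explanation concrete, the sketch below shows a concept-bottleneck-style pipeline in the same spirit: the class decision is made only from named, human-readable concept activations, which can then be reported as the explanation. The concept names, the two-layer linear structure, and the 0.5 threshold are illustrative assumptions, not the architecture proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy concept-bottleneck pipeline: input features -> concept scores -> class label.
W_concepts = rng.normal(size=(8, 3))   # maps 8 input features to 3 named concepts
W_label = rng.normal(size=(3, 2))      # maps concept activations to 2 classes
CONCEPTS = ["striped", "has_wings", "metallic"]  # hypothetical concept names

def explain(x):
    """Return (predicted class, list of active concepts) for feature vector x."""
    c = 1 / (1 + np.exp(-(x @ W_concepts)))   # concept activations in [0, 1]
    y = int(np.argmax(c @ W_label))           # label decided from concepts only
    active = [name for name, v in zip(CONCEPTS, c) if v > 0.5]
    return y, active

y, why = explain(rng.normal(size=8))
print(y, why)
```

Because the label is computed solely from the concept layer, the list of active concepts is a faithful linguistic account of what drove the prediction; a visual-concept variant would localise each concept in the input as well.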

Building on this, we introduce a novel weakly supervised semantic segmentation framework, semantic proportions-based semantic segmentation (SPSS). This approach facilitates effective semantic segmentation without the need for costly and impractical pixel-wise ground-truth segmentation maps, which are often challenging to obtain in real-world scenarios. By leveraging class proportions as the sole supervision during training, SPSS enables intuitive and efficient generation of segmentation maps. Furthermore, this framework opens opportunities to integrate the explainability components of multilevel XAI, paving the way for future research to achieve semantic segmentation with significantly reduced annotation costs.
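The proportion-only supervision behind SPSS can be sketched with a simple training signal: compare the per-class area proportions implied by the network's softmax output against the ground-truth class proportions of the image. The L1 form of the loss, the function name, and the array shapes below are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def proportion_loss(pred_probs, target_props):
    """L1 distance between predicted and target per-class area proportions.

    pred_probs: (H, W, C) softmax output of a segmentation network.
    target_props: (C,) ground-truth class proportions, summing to 1.
    """
    pred_props = pred_probs.mean(axis=(0, 1))  # average over all pixels -> (C,)
    return np.abs(pred_props - target_props).sum()

# Toy example: a 4x4 image with 2 classes, predicted as a uniform 75/25 split.
probs = np.zeros((4, 4, 2))
probs[..., 0] = 0.75
probs[..., 1] = 0.25
print(proportion_loss(probs, np.array([0.75, 0.25])))  # -> 0.0
```

Minimising such a loss only requires one proportion vector per image rather than a pixel-wise mask, which is where the annotation savings come from.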

We further identify that one of the most significant gaps in the concept-based XAI field, on which this thesis also specifically focuses, is the absence of standardised measures and benchmarks for the fair evaluation and selection of the most effective methodologies. To address this, we propose three novel measures and benchmarks to advance the field. We encourage the research community to employ these measures and benchmarks for fair comparison among concept-based XAI techniques.

Finally, we discuss the limitations of our work and possible future directions that, once realised, could significantly impact the XAI, machine learning, and computer vision communities.

Text: Multilevel Explainable Artificial Intelligence Methods and Applications - Version of Record. Available under the University of Southampton Thesis Licence. Download (10MB).
Text: Final-thesis-submission-Examination-Mr-Halil-Aysel. Restricted to Repository staff only.

More information

Published date: 2025

Identifiers

Local EPrints ID: 502631
URI: http://eprints.soton.ac.uk/id/eprint/502631
PURE UUID: 083fee57-bdaf-4285-bb35-5ab2e0df61ac
ORCID for Halil Ibrahim Aysel: ORCID iD orcid.org/0000-0002-4981-0827
ORCID for Xiaohao Cai: ORCID iD orcid.org/0000-0003-0924-2834

Catalogue record

Date deposited: 02 Jul 2025 16:42
Last modified: 11 Sep 2025 03:19

Contributors

Author: Halil Ibrahim Aysel ORCID iD
Thesis advisor: Xiaohao Cai ORCID iD
Thesis advisor: Adam Prügel-Bennett

