University of Southampton Institutional Repository

Enhancing explainability in real-world scenarios: towards a robust stability measure for local interpretability


Sepulveda, Eduardo, Vandervorst, Felix, Baesens, Bart and Verdonck, Tim (2025) Enhancing explainability in real-world scenarios: towards a robust stability measure for local interpretability. Expert Systems with Applications, 274, [126922]. (doi:10.1016/j.eswa.2025.126922).

Record type: Article

Abstract

Machine learning is increasingly focused on improving performance metrics and providing explanations for model decisions. However, this focus often overshadows the importance of maintaining prediction stability for similar instances, an essential factor for user trust in a system's reliability and consistency. This study extends the ranking stability measure by integrating SHAP (SHapley Additive exPlanations) values to enhance prediction interpretability in anomaly detection. Unlike some existing stability measures that focus on predictions or treat all features as equally important, our approach systematically evaluates the stability of local interpretability. By introducing a novel weighting mechanism that prioritizes variations in the top-ranked features over lower-ranked ones, our method provides a refined assessment of interpretability stability, making it particularly valuable for high-stakes domains such as fraud detection and risk assessment. Our approach, designed to boost model reliability and trust, addresses the critical need to understand decision-contributing factors in business applications, offering organizations a comprehensive and robust way to use machine learning effectively. Through extensive comparative evaluations on both synthetic and real-world datasets, our method delivers more stable and reliable feature importance rankings than prior approaches. It thereby enhances model performance and interpretability, bridges the gap between complex machine learning models and practical usability, and fosters confidence and trust among non-specialists. Our code is publicly available for reproducibility and to encourage further research in this field.
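The exact formulation of the measure is in the embargoed manuscript, so purely as an illustration of the idea the abstract describes: below is a minimal sketch, assuming (hypothetically) that features are ranked per instance by absolute SHAP value and that rankings of similar instances are compared with weights that decay geometrically with rank, so disagreements among top-ranked features are penalised most. The function names, the geometric decay, and the normalisation are illustrative assumptions, not the authors' definition.

```python
# Illustrative sketch only -- NOT the paper's formulation, which is in the
# embargoed manuscript. Assumes a hypothetical geometric rank weighting.
import numpy as np

def shap_rankings(shap_values: np.ndarray) -> np.ndarray:
    """Rank features per instance by |SHAP value|; rank 0 = most important.

    shap_values: array of shape (n_instances, n_features).
    """
    order = np.argsort(-np.abs(shap_values), axis=1)      # features, descending importance
    ranks = np.empty_like(order)
    rows = np.arange(shap_values.shape[0])[:, None]
    ranks[rows, order] = np.arange(shap_values.shape[1])  # invert the permutation
    return ranks

def weighted_rank_stability(ranks_a: np.ndarray,
                            ranks_b: np.ndarray,
                            decay: float = 0.8) -> float:
    """Stability score in [0, 1] for two ranking matrices; 1.0 = identical.

    Each feature is weighted by decay**rank (using its better rank of the
    two), so rank changes among top features cost more than changes in the
    tail. The 'decay' parameter is a hypothetical knob, not from the paper.
    """
    n_features = ranks_a.shape[1]
    weights = decay ** np.minimum(ranks_a, ranks_b)              # emphasise top ranks
    disagreement = np.abs(ranks_a - ranks_b) / (n_features - 1)  # normalised rank shift
    penalty = (weights * disagreement).sum(axis=1) / weights.sum(axis=1)
    return float(1.0 - penalty.mean())

# Toy check: identical top feature, swapped 2nd/3rd features -> score < 1.
phi_a = np.array([[0.90, 0.06, 0.04]])   # SHAP values for an instance
phi_b = np.array([[0.88, 0.03, 0.09]])   # ... and for a close neighbour
print(weighted_rank_stability(shap_rankings(phi_a), shap_rankings(phi_b)))
# ~0.69 here; stable explanations across neighbours would score near 1.0.
```

In practice the SHAP values would come from an explainer library (e.g. `shap.TreeExplainer(model).shap_values(X)`) applied to an instance and its perturbed or nearest-neighbour copies.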

Text
Interpretability_index (17) - Accepted Manuscript
Restricted to Repository staff only until 5 March 2027.

More information

Accepted/In Press date: 14 February 2025
e-pub ahead of print date: 24 February 2025
Published date: 5 March 2025

Identifiers

Local EPrints ID: 499079
URI: http://eprints.soton.ac.uk/id/eprint/499079
ISSN: 0957-4174
PURE UUID: 7b289adc-1121-4870-be8a-eba674be50be
ORCID for Bart Baesens: orcid.org/0000-0002-5831-5668

Catalogue record

Date deposited: 07 Mar 2025 17:43
Last modified: 08 Mar 2025 02:40

Contributors

Author: Eduardo Sepulveda
Author: Felix Vandervorst
Author: Bart Baesens
Author: Tim Verdonck


