Leakage and interpretability in concept-based models
Parisini, Enrico
Chakraborti, Tapabrata
Harbron, Chris
MacArthur, Ben D.
Banerji, Christopher R. S.
18 April 2025
Abstract
Concept Bottleneck Models aim to improve interpretability by predicting high-level intermediate concepts, making them a promising approach for deployment in high-risk scenarios. However, they are known to suffer from information leakage, whereby models exploit unintended information encoded within the learned concepts. We introduce an information-theoretic framework to rigorously characterise and quantify leakage, and define two complementary measures: the concepts-task leakage (CTL) and interconcept leakage (ICL) scores. We show that these measures are strongly predictive of model behaviour under interventions and outperform existing alternatives in robustness and reliability. Using this framework, we identify the primary causes of leakage and provide strong evidence that Concept Embedding Models exhibit substantial leakage regardless of the choice of hyperparameters. Finally, we propose practical guidelines for designing concept-based models to reduce leakage and ensure interpretability.
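The CTL and ICL scores are defined information-theoretically in the full paper. As a rough illustration of the underlying idea only, the sketch below computes a toy mutual-information proxy for concepts-task and interconcept dependence on synthetic data; the binned estimator, the synthetic setup, and all variable names are assumptions made for this example, not the paper's actual CTL or ICL definitions.

# Illustrative sketch only: a toy mutual-information proxy for the two kinds
# of leakage named in the abstract. This is NOT the paper's CTL/ICL estimator;
# the discretised-MI approach and synthetic data below are assumptions.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)

# Hypothetical soft concept predictions (n samples, k concepts) and task labels.
n, k = 1000, 3
concepts = rng.random((n, k))               # stand-in for a model's concept outputs
task = (concepts[:, 0] > 0.5).astype(int)   # toy task driven by concept 0

def discretise(x, bins=10):
    # Bin a continuous score so discrete mutual information can be estimated.
    return np.digitize(x, np.linspace(0.0, 1.0, bins))

# Concepts-task proxy: MI between each soft concept and the task label.
ctl_proxy = [mutual_info_score(discretise(concepts[:, j]), task) for j in range(k)]

# Interconcept proxy: pairwise MI between soft concepts.
icl_proxy = [
    mutual_info_score(discretise(concepts[:, i]), discretise(concepts[:, j]))
    for i in range(k) for j in range(i + 1, k)
]

print("concept-task MI proxies:", np.round(ctl_proxy, 3))
print("interconcept MI proxies:", np.round(icl_proxy, 3))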
Text
2504.14094v1 - Author's Original
Available under License Other.
More information
Accepted/In Press date: 18 April 2025
Published date: 18 April 2025
Additional Information:
38 pages, 27 figures
Keywords:
cs.LG, cs.AI, stat.ML
Identifiers
Local EPrints ID: 502025
URI: http://eprints.soton.ac.uk/id/eprint/502025
PURE UUID: e1469aa6-6dbf-4e90-83a7-7fd52260fb4b
Catalogue record
Date deposited: 13 Jun 2025 17:01
Last modified: 19 Sep 2025 01:38