Model agnostic interpretability for multiple instance learning
Early, Joseph, Evers, Christine and Ramchurn, Sarvapali (2022) Model agnostic interpretability for multiple instance learning. International Conference on Learning Representations 2022, 25 - 29 Apr 2022. 25 pp.
Record type: Conference or Workshop Item (Paper)
Abstract
In Multiple Instance Learning (MIL), models are trained using bags of instances, where only a single label is provided for each bag. A bag label is often only determined by a handful of key instances within a bag, making it difficult to interpret what information a classifier is using to make decisions. In this work, we establish the key requirements for interpreting MIL models. We then go on to develop several model-agnostic approaches that meet these requirements. Our methods are compared against existing inherently interpretable MIL models on several datasets, and achieve an increase in interpretability accuracy of up to 30%. We also examine the ability of the methods to identify interactions between instances and scale to larger datasets, improving their applicability to real-world problems.
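As a rough illustration of the problem setting described in the abstract (this is not the method proposed in the paper), the sketch below scores each instance in a bag by leave-one-out occlusion against a black-box bag classifier; the toy classifier, the threshold, and the data are invented purely for illustration.

import numpy as np

def occlusion_scores(bag, predict_fn):
    """Score each instance in a MIL bag by the drop in the bag-level
    prediction when that instance is removed (leave-one-out occlusion).
    bag: array of shape (n_instances, n_features); predict_fn: bag -> scalar."""
    baseline = predict_fn(bag)
    scores = []
    for i in range(len(bag)):
        reduced = np.delete(bag, i, axis=0)            # bag with instance i removed
        scores.append(baseline - predict_fn(reduced))  # contribution of instance i
    return np.array(scores)

# Toy black-box classifier: a bag is positive if any instance's first
# feature exceeds a threshold (the standard MIL assumption).
def toy_bag_classifier(bag):
    return float(np.max(bag[:, 0]) > 0.5)

rng = np.random.default_rng(0)
bag = rng.uniform(0.0, 0.4, size=(6, 4))  # six benign instances
bag[2, 0] = 0.9                           # plant one key instance
print(occlusion_scores(bag, toy_bag_classifier))
# only instance 2 receives a non-zero score, identifying it as the key instance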
Text: 2201.11701v2 - Author's Original
More information
e-pub ahead of print date: 27 January 2022
Published date: 27 January 2022
Additional Information:
25 pages (9 content, 2 acknowledgement + references, 14 appendix). 16 figures (3 main content, 13 appendix). Submitted and accepted to ICLR 22; see http://openreview.net/forum?id=KSSfF5lMIAg. Revision: added additional acknowledgements.
Acknowledgements
This work was funded by the AXA Research Fund and the UKRI Trustworthy Autonomous Systems Hub (EP/V00784X/1). We would also like to thank the University of Southampton and the Alan Turing Institute for their support.
The authors acknowledge the use of the IRIDIS High Performance Computing Facility and associated support services at the University of Southampton in the completion of this work. IRIDIS-5 GPU-enabled compute nodes were used for the long-running experiments in this work.
We also acknowledge the use of SciencePlots (Garrett, 2021) for formatting our Matplotlib figures.
Venue - Dates:
International Conference on Learning Representations 2022, 2022-04-25 - 2022-04-29
Keywords:
cs.LG, cs.AI
Identifiers
Local EPrints ID: 454952
URI: http://eprints.soton.ac.uk/id/eprint/454952
PURE UUID: 81761ee1-a106-4470-ae26-1db2ba99bc7b
Catalogue record
Date deposited: 02 Mar 2022 17:53
Last modified: 07 Jun 2024 01:57
Contributors
Author: Joseph Early
Author: Christine Evers
Author: Sarvapali Ramchurn