Model agnostic interpretability for multiple instance learning
Early, Joseph, Evers, Christine and Ramchurn, Sarvapali (2022) Model agnostic interpretability for multiple instance learning. International Conference on Learning Representations 2022, 25-29 Apr 2022. 25 pp.
Record type: Conference or Workshop Item (Paper)
Abstract
In Multiple Instance Learning (MIL), models are trained using bags of instances, where only a single label is provided for each bag. A bag label is often determined by only a handful of key instances within a bag, making it difficult to interpret what information a classifier uses to make decisions. In this work, we establish the key requirements for interpreting MIL models. We then develop several model-agnostic approaches that meet these requirements. We compare our methods against existing inherently interpretable MIL models on several datasets, achieving an increase in interpretability accuracy of up to 30%. We also examine the methods' ability to identify interactions between instances and to scale to larger datasets, improving their applicability to real-world problems.
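For context, a minimal sketch of the setting the abstract describes: a MIL classifier maps a whole bag of instances to a single prediction, and a model-agnostic interpretability method scores instances using only the model's input-output behaviour. The occlusion baseline below, which scores each instance by the change in the bag prediction when that instance is removed, is a generic illustration under those assumptions, not necessarily the paper's proposed method; `instance_importance`, `predict_fn`, and the toy classifier are hypothetical names for this sketch.

```python
import numpy as np

def instance_importance(predict_fn, bag):
    """Score each instance in a MIL bag by occlusion: remove the
    instance and measure how much the bag-level prediction changes.
    predict_fn maps a bag (instances x features array) to a scalar
    score for the positive class. This is a generic perturbation
    baseline, not necessarily the method proposed in the paper."""
    base = predict_fn(bag)
    scores = np.empty(len(bag))
    for i in range(len(bag)):
        reduced = np.delete(bag, i, axis=0)  # bag with instance i dropped
        scores[i] = base - predict_fn(reduced)
    return scores

# Toy usage: a bag of 5 instances with 3 features each, and a dummy
# classifier that fires if any instance's first feature is positive
# (i.e. the bag label depends on a handful of key instances).
rng = np.random.default_rng(0)
bag = rng.normal(size=(5, 3))
predict = lambda b: float((b[:, 0] > 0).any())
print(instance_importance(predict, bag))
```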
Text: 2201.11701v2 - Accepted Manuscript
More information
e-pub ahead of print date: 27 January 2022
Additional Information: 25 pages (9 content, 2 acknowledgement + references, 14 appendix). 16 figures (3 main content, 13 appendix). Submitted and accepted to ICLR 22, see http://openreview.net/forum?id=KSSfF5lMIAg . Revision: added additional acknowledgements.
Venue - Dates: International Conference on Learning Representations 2022, 2022-04-25 - 2022-04-29
Keywords: cs.LG, cs.AI
Identifiers
Local EPrints ID: 454952
URI: http://eprints.soton.ac.uk/id/eprint/454952
PURE UUID: 81761ee1-a106-4470-ae26-1db2ba99bc7b
Catalogue record
Date deposited: 02 Mar 2022 17:53
Last modified: 03 Mar 2022 02:58
Contributors
Author: Joseph Early
Author: Christine Evers
Author: Sarvapali Ramchurn