Interpretable multiple instance learning
Early, Joseph Arthur (2024) Interpretable multiple instance learning. University of Southampton, Doctoral Thesis, 219pp.
Record type: Thesis (Doctoral)
Abstract
With the rising use of Artificial Intelligence (AI) and Machine Learning (ML) methods comes an increasing need to understand how automated systems make decisions. Interpretable ML provides insight into the underlying reasoning of AI and ML models without stifling their predictive performance. Such insight is important for many reasons: it facilitates trust, increases transparency, and enables better collaboration and control through an improved understanding of automated decision-making. Interpretability is relevant across many ML paradigms and application domains. Multiple Instance Learning (MIL) is an ML paradigm in which data are grouped into bags of instances, and only the bags are labelled (rather than each instance). This alleviates expensive labelling procedures and can be used to exploit the underlying structure of data. This thesis investigates how interpretability can be achieved within MIL. It begins with a formalisation of interpretable MIL, and then proposes a suite of model-agnostic post-hoc methods. This work is then extended to the application domain of high-resolution satellite imagery, using novel inherently interpretable MIL approaches that operate at multiple resolutions. Building on this work in the vision domain, new methods for interpretable MIL are developed for sequential data. These are first explored in the domain of Reward Modelling (RM) for Reinforcement Learning (RL), demonstrating that interpretable MIL can be used not only to understand a model but also to improve its predictive performance. This is mirrored in the application of interpretable MIL to Time Series Classification (TSC), where it is integrated into state-of-the-art methods and improves both their interpretability and predictive performance. Because inherent interpretability is integrated into existing models, these benefits come with little additional computational cost.
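To make the bag/instance setting concrete, the following is a minimal, hypothetical sketch (not taken from the thesis) of attention-based MIL pooling in Python. Instances in a bag are scored, pooled into a single bag-level prediction, and the attention weights double as per-instance importance scores, which is one simple route to interpretability. All names, sizes, and weights are illustrative assumptions, loosely following common attention-pooling formulations for MIL rather than the specific methods developed in the thesis.

# Minimal illustrative sketch of the MIL setting: instances are grouped into
# bags, only bags carry labels, and per-instance attention weights provide a
# simple form of interpretability. Parameters and sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(instances, w_attn, w_clf):
    """Score each instance, pool to a bag embedding, and predict a bag label.

    instances : (n_instances, n_features) array for one bag.
    w_attn    : (n_features,) attention scoring vector (hypothetical weights).
    w_clf     : (n_features,) classifier weights applied to the pooled embedding.
    Returns (bag_probability, attention_weights); the attention weights act as
    per-instance importance scores for interpreting the bag-level prediction.
    """
    scores = np.tanh(instances @ w_attn)          # per-instance relevance scores
    attn = np.exp(scores) / np.exp(scores).sum()  # softmax over instances in the bag
    bag_embedding = attn @ instances              # attention-weighted average of instances
    logit = bag_embedding @ w_clf
    prob = 1.0 / (1.0 + np.exp(-logit))           # sigmoid for a binary bag label
    return prob, attn

# Toy bag: 5 instances with 8 features each; only the bag has a label.
bag = rng.normal(size=(5, 8))
prob, attn = attention_pool(bag, rng.normal(size=8), rng.normal(size=8))
print(f"bag probability = {prob:.3f}")
print("instance importance =", np.round(attn, 3))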
Text: Joseph_Early_Doctoral_Thesis_PDFA - Version of Record
Text: Final-thesis-submission-Examination-Mr-Joseph-Early - Restricted to Repository staff only
More information
Published date: June 2024
Identifiers
Local EPrints ID: 490767
URI: http://eprints.soton.ac.uk/id/eprint/490767
PURE UUID: 45e03798-926d-4be4-b0d1-d66a997a141d
Catalogue record
Date deposited: 06 Jun 2024 16:39
Last modified: 15 Aug 2024 02:13
Contributors
Author: Joseph Arthur Early
Thesis advisor: Gopal Ramchurn
Thesis advisor: Christine Evers