University of Southampton Institutional Repository

Interpretable multiple instance learning


Early, Joseph Arthur (2024) Interpretable multiple instance learning. University of Southampton, Doctoral Thesis, 219pp.

Record type: Thesis (Doctoral)

Abstract

With the rising use of Artificial Intelligence (AI) and Machine Learning (ML) methods, there comes an increasing need to understand how automated systems make decisions. Interpretable ML provides insight into the underlying reasoning behind AI and ML models while not stifling their predictive performance. Doing so is important for many reasons, such as facilitating trust, increasing transparency, and providing improved collaboration and control through a better understanding of automated decision-making. Interpretability is very relevant across many ML paradigms and application domains. Multiple Instance Learning (MIL) is an ML paradigm where data are grouped into bags of instances, and only the bags are labelled (rather than each instance). This is beneficial in alleviating expensive labelling procedures and can be used to exploit the underlying structure of data. This thesis investigates how interpretability can be achieved within MIL. It begins with a formalisation of interpretable MIL, and then proposes a suite of model-agnostic post-hoc methods. This work is then extended to the specific application domain of high-resolution satellite imagery, using novel inherently interpretable MIL approaches that operate at multiple resolutions. Following on from work in the vision domain, new methods for interpretable MIL are developed for sequential data. First, it is explored in the domain of Reward Modelling (RM) for Reinforcement Learning (RL), demonstrating that interpretable MIL can be used to not only understand a model but also improve its predictive performance. This is mirrored in the application of interpretable MIL to Time Series Classification (TSC), where it is integrated into state-of-the-art methods and is able to improve both their interpretability and predictive performance. The integration into existing models to provide inherent interpretability means these benefits are delivered with little additional computational cost.
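
To make the bag-of-instances setup described in the abstract concrete, below is a minimal illustrative sketch (not taken from the thesis) of one common approach to inherently interpretable MIL: an attention-based pooling classifier in PyTorch. Each bag is a set of instance feature vectors with a single bag-level label; the learned per-instance attention weights indicate which instances drive the bag prediction. All class names, dimensions, and hyperparameters here are hypothetical.

```python
# Illustrative sketch only: attention-based MIL pooling (in the style of
# attention MIL models), not the specific methods proposed in the thesis.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 64, n_classes: int = 2):
        super().__init__()
        # Instance encoder: maps each instance to a hidden representation.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # Attention network: one scalar score per instance.
        self.attention = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
        # Bag-level classifier applied to the pooled embedding.
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, bag: torch.Tensor):
        # bag: (n_instances, in_dim); instances are unordered and unlabelled.
        h = self.encoder(bag)                        # (n_instances, hidden_dim)
        a = torch.softmax(self.attention(h), dim=0)  # (n_instances, 1), sums to 1
        z = (a * h).sum(dim=0)                       # attention-weighted bag embedding
        # Return bag logits plus instance weights for interpretation.
        return self.classifier(z), a.squeeze(-1)

# Usage: a bag of 20 instances, each with 10 features, and only a bag label.
bag = torch.randn(20, 10)
model = AttentionMIL(in_dim=10)
logits, instance_weights = model(bag)  # inspect instance_weights to see which instances mattered
```

In this sketch, interpretability comes "for free" from the model structure (the attention weights), which is the general sense in which inherently interpretable MIL can add little extra computational cost on top of prediction.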

Files

Text: Joseph_Early_Doctoral_Thesis_PDFA - Version of Record. Available under the University of Southampton Thesis Licence. Download (33MB).
Text: Final-thesis-submission-Examination-Mr-Joseph-Early. Restricted to Repository staff only. Available under the University of Southampton Thesis Licence.

More information

Published date: June 2024

Identifiers

Local EPrints ID: 490767
URI: http://eprints.soton.ac.uk/id/eprint/490767
PURE UUID: 45e03798-926d-4be4-b0d1-d66a997a141d
ORCID for Joseph Arthur Early: orcid.org/0000-0001-7748-9340
ORCID for Gopal Ramchurn: orcid.org/0000-0001-9686-4302
ORCID for Christine Evers: orcid.org/0000-0003-0757-5504

Catalogue record

Date deposited: 06 Jun 2024 16:39
Last modified: 15 Aug 2024 02:13


Contributors

Author: Joseph Arthur Early
Thesis advisor: Gopal Ramchurn
Thesis advisor: Christine Evers



