University of Southampton Institutional Repository

Fully convolutional neural network based histopathology image segmentation


Yang, Yilong (2024) Fully convolutional neural network based histopathology image segmentation. School of Electronics and Computer Science, Doctoral Thesis, 114pp.

Record type: Thesis (Doctoral)

Abstract

The examination of histopathology images is generally recognised as the "gold standard" for the diagnosis of diseases. Clinical diagnostic practice requires pathologists to follow a descriptive set of guidelines and is therefore prone to inter-observer variability arising from differences in the interpretation of histological patterns. Furthermore, the extremely large size of histopathology images makes it impossible for pathologists to thoroughly inspect every detail of a whole slide image (WSI), so the risk of misdiagnosis arises. In this thesis, we aim to develop automatic tools that can objectively analyse and quantify the vast amount of pixel information contained in histopathology images. More specifically, we focus on the automatic segmentation of tissue types and components in histopathology images.

To fulfil our objective, we first revisit the UNet family of architectures widely used in medical imaging and then propose a novel ensemble framework to tackle the shortcomings of UNet++-like architectures. We present a stage-wise additive training algorithm that, borrowing ideas from boosting, incorporates resource-efficient deep supervision in the shallower layers and takes performance-weighted combinations of the sub-UNets to create the segmentation model. To ensure the effectiveness of the ensemble, we design a scheme that guarantees the diversity of the learned features.
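To make the additive scheme concrete, the following is a minimal PyTorch sketch, not the thesis implementation: sub-networks are trained one stage at a time, frozen, and combined with weights derived from their validation performance. The SubUNet constructor, train routine and dice_score helper in the commented loop are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

class WeightedEnsemble(nn.Module):
    """Performance-weighted sum of frozen sub-network logits."""

    def __init__(self):
        super().__init__()
        self.members = nn.ModuleList()
        self.scores = []  # one validation score per member

    def add_member(self, net, val_score):
        for p in net.parameters():
            p.requires_grad_(False)   # freeze finished stages
        self.members.append(net)
        self.scores.append(val_score)

    def forward(self, x):
        total = sum(self.scores)
        # better-performing members contribute more to the final logits
        return sum(s / total * m(x) for s, m in zip(self.scores, self.members))

# Stage-wise additive training loop (hypothetical helpers):
# ensemble = WeightedEnsemble()
# for depth in (1, 2, 3, 4):            # progressively deeper sub-UNets
#     net = SubUNet(depth=depth)        # deep supervision at this depth only
#     train(net, train_loader)
#     ensemble.add_member(net, dice_score(net, val_loader))
```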

We further identify the loss of magnification information as a key barrier to the application of automatic computational pathology tools. To bypass this loss, we explore scale-equivariant methods capable of producing consistent segmentations across magnifications. Building on scale-space theory, we present the Scale-Equivariant UNet (SEUNet) for image segmentation. The SEUNet contains groups of filters that are linear combinations of Gaussian basis filters, whose scale parameters are trainable but constrained to span disjoint scales through the layers of the network. By encoding scale equivariance into CNNs, histopathology images presented at different scales are more likely to be segmented consistently.
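As an illustration of this filter construction, here is a minimal sketch under assumed design choices rather than the thesis configuration: each kernel is a learned linear combination of a 2D Gaussian and its first derivatives, with a trainable scale sigma clamped to a per-layer band so that successive layers cover disjoint scales. The basis order, kernel size and scale ranges are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_basis(sigma, size=9):
    """2D Gaussian and its first derivatives on a size x size grid."""
    r = torch.arange(size, dtype=sigma.dtype) - size // 2
    y, x = torch.meshgrid(r, r, indexing="ij")
    g = torch.exp(-(x**2 + y**2) / (2 * sigma**2))
    g = g / g.sum()
    gx = -x / sigma**2 * g             # derivative along x
    gy = -y / sigma**2 * g             # derivative along y
    return torch.stack([g, gx, gy])    # (3, size, size) basis

class ScaleParamConv(nn.Module):
    def __init__(self, in_ch, out_ch, sigma_range=(1.0, 2.0), size=9):
        super().__init__()
        self.lo, self.hi = sigma_range          # this layer's scale band
        self.sigma = nn.Parameter(torch.tensor(sum(sigma_range) / 2))
        self.coef = nn.Parameter(torch.randn(out_ch, in_ch, 3) * 0.1)
        self.size = size

    def forward(self, x):
        sigma = self.sigma.clamp(self.lo, self.hi)  # keep scale bands disjoint
        basis = gaussian_basis(sigma, self.size)    # (3, k, k)
        # each kernel is a learned combination of the basis filters
        weight = torch.einsum("oib,bkl->oikl", self.coef, basis)
        return F.conv2d(x, weight, padding=self.size // 2)
```

Because the kernels are synthesised from the basis at forward time, only the combination coefficients and the scalar sigma are trained, not the kernel pixels themselves.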

Furthermore, because of the inherent rotational symmetry of histopathology images, it is desirable for CNNs to be rotation-equivariant. This guarantees that features transform predictably with rotations of the input, so a consistent segmentation can be produced regardless of the angle at which an image is presented. To exploit this prior knowledge, we extend the proposed scale-equivariant UNet to a joint rotation-scale equivariant model. By sharing weights between multi-scale and multi-orientation filters, joint equivariance to rotation and scale is achieved while the number of trainable parameters is dramatically reduced compared with conventional CNN filters. The proposed rotation-scale equivariant method achieves state-of-the-art generalisation performance in scenarios where images vary in scale and orientation.
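The weight-sharing idea can be sketched in the same style, again as an assumption-laden illustration rather than the thesis code: it reuses the gaussian_basis helper and ScaleParamConv layer from the previous sketch, synthesising the same learned kernel at several orientations, here only the four exact 90-degree rotations for simplicity, and stacking the responses along an orientation axis.

```python
class RotScaleConv(nn.Module):
    """Shares one coefficient set across scales and orientations."""

    def __init__(self, in_ch, out_ch, sigma_range=(1.0, 2.0), size=9):
        super().__init__()
        self.base = ScaleParamConv(in_ch, out_ch, sigma_range, size)

    def forward(self, x):
        sigma = self.base.sigma.clamp(self.base.lo, self.base.hi)
        basis = gaussian_basis(sigma, self.base.size)
        weight = torch.einsum("oib,bkl->oikl", self.base.coef, basis)
        outs = [
            F.conv2d(x, torch.rot90(weight, k, dims=(2, 3)),
                     padding=self.base.size // 2)
            for k in range(4)             # same weights, four orientations
        ]
        return torch.stack(outs, dim=1)   # (B, 4, out_ch, H, W)
```

Taking a max over the orientation axis yields rotation-invariant features, while keeping the axis preserves equivariance for subsequent layers; the parameter count is unchanged relative to the single-orientation layer.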

Text: 31065139-YilongYang-Final3b - Version of Record. Available under License University of Southampton Thesis Licence. Download (53MB).
Text: Final-thesis-submission-Examination-Mr-Yilong-Yang. Restricted to Repository staff only.

More information

Published date: 2024
Keywords: Deep Learning, Segmentation, Equivariant, Convolutional Neural Network, Histopathology, Rotation, Scale

Identifiers

Local EPrints ID: 486111
URI: http://eprints.soton.ac.uk/id/eprint/486111
PURE UUID: 93f1fcbb-f302-4321-8d1d-681d1188ab44

Catalogue record

Date deposited: 09 Jan 2024 17:54
Last modified: 19 Mar 2024 18:39


Contributors

Author: Yilong Yang
Thesis advisor: Sasan Mahmoodi
Thesis advisor: Srinandan Dasmahapatra

