University of Southampton Institutional Repository

Modelling of 3D placental cell features using deep learning

Mills, Benjamin
05f1886e-96ef-420f-b856-4115f4ab36d0
Grant-Jacob, James
c5d144d8-3c43-4195-8e80-edd96bfda91b
MacKay, Benita Scout
318d298f-5b38-43d7-b30d-8cd07f69acd4
Lewis, Rohan
caaeb97d-ea69-4f7b-8adb-5fa25e2d3502
Sengers, Bram
d6b771b1-4ede-48c5-9644-fa86503941aa

Mills, Benjamin, Grant-Jacob, James, MacKay, Benita Scout, Lewis, Rohan and Sengers, Bram (2022) Modelling of 3D placental cell features using deep learning. In BIOIMAGING22-20.

Record type: Conference or Workshop Item (Paper)

Abstract

Serial block-face scanning electron microscopy (SBFSEM) is a well-established technique for producing a sequential series of high-resolution images of a material, from which three-dimensional structures can be determined. This is achieved by incorporating a diamond knife microtome within a scanning electron microscope (SEM). The result is a “stack” of sequential SEM images that corresponds to the 3D volume of the sample. Typically, an SBFSEM stack might include ~500-2000 SEM images, each with ~10 nm spatial resolution in x and y, and separated in height by ~50 nm in z.
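The stack geometry quoted above can be pictured as a 3D array of greyscale slices. The following sketch is purely illustrative (the array is synthetic and much smaller than a real ~500-2000 slice stack); `voxel_nm` simply encodes the ~10 nm x/y and ~50 nm z figures from the abstract:

```python
import numpy as np

# Illustrative sketch only: an SBFSEM stack as a 3D array of
# greyscale slices (z, y, x). A small synthetic stack is used
# here in place of real SEM data.
n_slices, height, width = 50, 256, 256
stack = np.zeros((n_slices, height, width), dtype=np.uint8)

voxel_nm = (50.0, 10.0, 10.0)  # (z, y, x) voxel size in nanometres

# Physical extent of the imaged volume, in micrometres
extent_um = tuple(n * s / 1000.0 for n, s in zip(stack.shape, voxel_nm))
print(extent_um)  # (2.5, 2.56, 2.56)
```

Because the z spacing is ~5x coarser than the in-plane resolution, the voxels are anisotropic, which any 3D reconstruction from such a stack has to account for.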
In our work imaging human placental tissue using SBFSEM, we currently rely on human labelling of features within the SEM image stacks, which allows 3D reconstruction and analysis. Whilst human labelling can be very accurate, labelling a single feature in an SBFSEM stack can take several months, and hence there is a practical limit to how many image stacks can be analysed. There is therefore a clear need for alternative methods of analysis that do not require human labelling.
Here, for the first time, we use a neural network to generate an SEM image from the overall shape of the structures within the placental tissue. This builds on recent work in the field of generative neural networks, which can be used to generate “fake” images of human faces [1] (also see the interactive website [2]). By analogy, we are using a neural network to generate “fake” SEM images of placental cells.
This process can be repeated for any 3D cell shape, and the results so far are statistically consistent with experimental data. This early result therefore offers the tantalising prospect of using a generative neural network for data-driven modelling of 3D cell biology.
The work presented here takes several novel steps. Firstly, we use a custom-written algorithm to label the boundary of the placental cell in every SEM image in a stack. For each image, this boundary is used to make a “mask” that is coloured green outside the cell and red inside the cell. Secondly, we train a neural network to transform a green/red mask into a “fake” SEM image, meaning we can generate an SEM image for any placental cell shape. Thirdly, we extend this capability into 3D, using the neural network to generate a “fake” stack of SEM images for any chosen 3D placental cell shape. As a demonstration of a potential application of this approach, we label the position of generated blood cells in each of the generated SEM images in the “fake” stack, and our next aim is to visualise a predicted 3D capillary network for a placental villus.
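The first step above, turning a cell boundary into a green/red mask, can be sketched as follows. This is a hypothetical illustration, not the authors' algorithm: the circular "cell" is synthetic, whereas the paper derives boundaries from real SEM images with a custom labelling algorithm.

```python
import numpy as np

# Hypothetical sketch of the mask step: given a binary map
# marking pixels inside the cell, build an RGB mask that is
# red inside the cell and green outside. A toy circular cell
# stands in for a real labelled boundary.
h, w = 256, 256
yy, xx = np.mgrid[:h, :w]
inside = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 < 80 ** 2

mask = np.zeros((h, w, 3), dtype=np.uint8)
mask[inside] = (255, 0, 0)    # red inside the cell
mask[~inside] = (0, 255, 0)   # green outside the cell

print(mask[h // 2, w // 2], mask[0, 0])  # [255 0 0] [0 255 0]
```

A conditional image-to-image network (in the spirit of the generative models cited in [1]) would then be trained on pairs of such masks and their corresponding real SEM images, so that a new mask of any shape can be mapped to a plausible “fake” SEM image.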
This work builds upon our previous work [3] (presented at BIOSTEC 2020), where we demonstrated that a neural network can be used to automatically label features of placental cell types, such as endothelial cells and pericytes. This was achieved by training a neural network on SEM images of placental cells in which the features of interest had already been labelled by a human.
[1] Karras, T. et al., “A style-based generator architecture for generative adversarial networks”, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 4401-4410.
[2] https://thispersondoesnotexist.com/
[3] MacKay, B.S. et al., “Deep Learning for the Automated Feature Labelling of 3-Dimensional Imaged Placenta”, BIOSTEC 2020, doi:10.1007/978-3-030-72379-8_6.

Image
bioimaging 2022 poster v08 - Author's Original
Available under License Creative Commons Attribution.
Video
bioimaging 2022 v4 - VIDEO - Author's Original
Available under License Creative Commons Attribution.

More information

Published date: 9 February 2022
Venue - Dates: 15th International Joint Conference on Biomedical Engineering Systems and Technologies, Vienna, Austria, 2022-02-09 - 2022-02-11

Identifiers

Local EPrints ID: 456279
URI: http://eprints.soton.ac.uk/id/eprint/456279
PURE UUID: a0fb4bf7-5ed7-4679-9e96-5f47db4d9f6b
ORCID for Benjamin Mills: orcid.org/0000-0002-1784-1012
ORCID for James Grant-Jacob: orcid.org/0000-0002-4270-4247
ORCID for Benita Scout MacKay: orcid.org/0000-0003-2050-8912
ORCID for Rohan Lewis: orcid.org/0000-0003-4044-9104
ORCID for Bram Sengers: orcid.org/0000-0001-5859-6984

Catalogue record

Date deposited: 27 Apr 2022 00:59
Last modified: 17 Mar 2024 03:22
Contributors

Author: Benjamin Mills
Author: James Grant-Jacob
Author: Benita Scout MacKay
Author: Rohan Lewis
Author: Bram Sengers

