University of Southampton Institutional Repository

Cross-modality multi-atlas segmentation via deep registration and label fusion

Ding, Wangbin, Li, Lei, Zhuang, Xiahai and Huang, Liqin (2022) Cross-modality multi-atlas segmentation via deep registration and label fusion. IEEE Journal of Biomedical and Health Informatics, 26 (7), 3104-3115. (doi:10.1109/JBHI.2022.3149114).

Record type: Article

Abstract

Multi-atlas segmentation (MAS) is a promising framework for medical image segmentation. Generally, MAS methods register multiple atlases, i.e., medical images with corresponding labels, to a target image, and the transformed atlas labels are then combined into a target segmentation via label fusion schemes. Many conventional MAS methods employ atlases from the same modality as the target image. However, atlases of the same modality may be scarce, or even unavailable, in many clinical applications. Moreover, conventional MAS methods suffer from the computational burden of the registration and label fusion procedures. In this work, we design a novel cross-modality MAS framework, which uses available atlases from one modality to segment a target image from another modality. To boost the computational efficiency of the framework, both the image registration and the label fusion are performed by well-designed deep neural networks. For atlas-to-target image registration, we propose a bi-directional registration network (BiRegNet), which can efficiently align images from different modalities. For label fusion, we design a similarity estimation network (SimNet), which estimates the fusion weight of each atlas by measuring its similarity to the target image. SimNet learns multi-scale information for similarity estimation to improve label fusion performance. The proposed framework was evaluated on left ventricle and liver segmentation tasks using the MM-WHS and CHAOS datasets, respectively. Results show that the framework is effective for cross-modality MAS in terms of both registration and label fusion. Code is available at https://github.com/NanYoMy/cmmas.
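The label fusion step described in the abstract amounts to a similarity-weighted vote over the warped atlas labels. The following minimal NumPy sketch illustrates that idea only; the register and similarity callables are hypothetical placeholders standing in for the paper's BiRegNet and SimNet, whose actual architectures are described in the article, not here.

import numpy as np

def fuse_labels(target, atlases, register, similarity, num_classes):
    """Similarity-weighted multi-atlas label fusion (illustrative sketch).

    target      : 2-D target image, shape (H, W)
    atlases     : iterable of (image, label) pairs, possibly from a
                  different modality than the target
    register    : callable(atlas_img, atlas_lab, target) ->
                  (warped_img, warped_lab); placeholder for a
                  cross-modality registration network such as BiRegNet
    similarity  : callable(warped_img, target) -> (H, W) weight map;
                  placeholder for a similarity network such as SimNet
    num_classes : number of segmentation labels
    """
    votes = np.zeros((num_classes,) + target.shape)
    for atlas_img, atlas_lab in atlases:
        warped_img, warped_lab = register(atlas_img, atlas_lab, target)
        weights = similarity(warped_img, target)       # (H, W) weights
        onehot = np.eye(num_classes)[warped_lab]       # (H, W, C) one-hot
        votes += weights[None] * np.moveaxis(onehot, -1, 0)
    return votes.argmax(axis=0)                        # fused (H, W) labels

# Toy usage: identity "registration" and an inverse-L2 similarity.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.random((8, 8))
    atlases = [(rng.random((8, 8)), rng.integers(0, 3, (8, 8)))
               for _ in range(4)]
    identity_reg = lambda img, lab, tgt: (img, lab)
    inv_l2_sim = lambda img, tgt: 1.0 / (1.0 + (img - tgt) ** 2)
    seg = fuse_labels(target, atlases, identity_reg, inv_l2_sim, 3)
    print(seg.shape)  # (8, 8)

In the paper itself the per-atlas weights are predicted by SimNet from multi-scale image features rather than by a fixed intensity measure; the sketch above only shows how such weights enter the fusion.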

This record has no associated files available for download.

More information

Published date: 7 February 2022
Additional Information: Publisher Copyright: © 2013 IEEE.
Keywords: Cross-modality atlas, label fusion, multi-atlas segmentation, registration

Identifiers

Local EPrints ID: 488960
URI: http://eprints.soton.ac.uk/id/eprint/488960
ISSN: 2168-2194
PURE UUID: 3a277406-9147-4bef-a573-a45c14653c3f
ORCID for Lei Li: orcid.org/0000-0003-1281-6472

Catalogue record

Date deposited: 09 Apr 2024 17:35
Last modified: 10 Apr 2024 02:14

Contributors

Author: Wangbin Ding
Author: Lei Li
Author: Xiahai Zhuang
Author: Liqin Huang
