University of Southampton Institutional Repository

CAM-SegNet: A context-aware dense material segmentation network for sparsely labelled datasets

Heng, Yuwen
a3edf9da-2d3b-450c-8d6d-85f76c861849
Wu, Yihong
2876bede-25f1-47a5-9e08-b98be99b2d31
Dasmahapatra, Srinandan
eb5fd76f-4335-4ae9-a88a-20b9e2b3f698
Kim, Hansung
2c7c135c-f00b-4409-acb2-85b3a9e8225f

Heng, Yuwen, Wu, Yihong, Dasmahapatra, Srinandan and Kim, Hansung (2022) CAM-SegNet: A context-aware dense material segmentation network for sparsely labelled datasets. International Conference on Computer Vision Theory and Applications, Online. 06 - 08 Feb 2022. (doi:10.5220/0010853200003124).

Record type: Conference or Workshop Item (Paper)

Abstract

Contextual information reduces uncertainty in the dense material segmentation task and improves segmentation quality. Typical contextual information includes object and place labels, or feature maps extracted by a neural network. Existing methods typically adopt a pre-trained network to generate contextual feature maps without fine-tuning, since dedicated material datasets do not contain contextual labels. As a consequence, these contextual features may not improve material segmentation performance. To address this problem, this paper proposes a hybrid network architecture, the CAM-SegNet, which jointly learns contextual and material features during training without extra contextual labels. The utility of our CAM-SegNet is demonstrated by guiding the network to learn boundary-related contextual features with the help of a self-training approach. Experiments show that CAM-SegNet can recognise materials that have similar appearances, achieving an improvement of 3-20% in accuracy and 6-28% in Mean IoU.

This record has no associated files available for download.

More information

Published date: 6 February 2022
Venue - Dates: International Conference on Computer Vision Theory and Applications, Online, 2022-02-06 - 2022-02-08

Identifiers

Local EPrints ID: 455294
URI: http://eprints.soton.ac.uk/id/eprint/455294
PURE UUID: 8702a3db-1b36-4021-a5f5-beaa6250ef18
ORCID for Yuwen Heng: orcid.org/0000-0003-3793-4811
ORCID for Hansung Kim: orcid.org/0000-0003-4907-0491

Catalogue record

Date deposited: 16 Mar 2022 18:01
Last modified: 17 Mar 2022 02:59


Contributors

Author: Yuwen Heng
Author: Yihong Wu
Author: Srinandan Dasmahapatra
Author: Hansung Kim



Contact ePrints Soton: eprints@soton.ac.uk

ePrints Soton supports OAI 2.0 with a base URL of http://eprints.soton.ac.uk/cgi/oai2

This repository has been built using EPrints software, developed at the University of Southampton, but available to everyone to use.
