CAM-SegNet: a context-aware dense material segmentation network for sparsely labelled datasets
Pages: 190-201
Heng, Yuwen, Wu, Yihong, Dasmahapatra, Srinandan and Kim, Hansung
(2022)
CAM-SegNet: a context-aware dense material segmentation network for sparsely labelled datasets.
17th International Conference on Computer Vision Theory and Applications, Virtual, 06-08 Feb 2022.
(doi:10.5220/0010853200003124).
Record type: Conference or Workshop Item (Paper)
Abstract
Contextual information reduces uncertainty in the dense material segmentation task and thereby improves segmentation quality. Typical contextual information includes object and place labels, or feature maps extracted by a neural network. Existing methods usually adopt a pre-trained network to generate contextual feature maps without fine-tuning, since dedicated material datasets do not contain contextual labels. As a consequence, these contextual features may not improve material segmentation performance. To address this problem, this paper proposes a hybrid network architecture, CAM-SegNet, which learns contextual and material features jointly during training without extra contextual labels. The utility of CAM-SegNet is demonstrated by guiding the network to learn boundary-related contextual features with the help of a self-training approach. Experiments show that CAM-SegNet can recognise materials that have similar appearances, achieving an improvement of 3-20% in accuracy and 6-28% in mean IoU.
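The two metrics reported in the abstract, pixel accuracy and mean IoU, can be computed as in the minimal sketch below. This is an illustrative implementation of the standard definitions, not the authors' evaluation code; the function name and flat-label-list interface are assumptions for the example.

```python
def segmentation_metrics(pred, target, num_classes):
    """Pixel accuracy and mean IoU for flat sequences of integer labels.

    pred, target: equal-length sequences of per-pixel class labels.
    num_classes: number of material classes.
    """
    assert len(pred) == len(target) and len(target) > 0

    # Pixel accuracy: fraction of pixels whose predicted label matches.
    correct = sum(p == t for p, t in zip(pred, target))
    accuracy = correct / len(target)

    # Mean IoU: average intersection-over-union across classes that
    # appear in either the prediction or the ground truth.
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and t == c for p, t in zip(pred, target))
        union = sum(p == c or t == c for p, t in zip(pred, target))
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    mean_iou = sum(ious) / len(ious)
    return accuracy, mean_iou
```

For example, comparing the predicted labels `[0, 1, 1, 2]` against ground truth `[0, 1, 2, 2]` gives an accuracy of 0.75 and a mean IoU of 2/3.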
This record has no associated files available for download.
More information
Published date: 2022
Venue - Dates:
17th International Conference on Computer Vision Theory and Applications, Virtual, 2022-02-06 - 2022-02-08
Keywords:
Deep Learning, Dense Material Segmentation, Image Segmentation, Material Recognition, Scene Understanding
Identifiers
Local EPrints ID: 455294
URI: http://eprints.soton.ac.uk/id/eprint/455294
PURE UUID: 8702a3db-1b36-4021-a5f5-beaa6250ef18
Catalogue record
Date deposited: 16 Mar 2022 18:01
Last modified: 17 Mar 2024 04:04