Semantic segmentation by semantic proportions
Semantic segmentation is a critical task in computer vision that aims to identify and classify individual pixels in an image, with numerous applications in, for example, autonomous driving and medical image analysis. However, semantic segmentation can be highly challenging, particularly due to the need for large amounts of annotated data. Annotating images is a time-consuming and costly process, often requiring expert knowledge and significant effort; moreover, storing the annotated images can dramatically increase the required storage space. In this paper, we propose a novel approach to semantic segmentation that requires only rough information about the proportions of the individual semantic classes, shortened to semantic proportions, rather than ground-truth segmentation maps. This greatly simplifies the data annotation process and thus significantly reduces annotation time, cost and storage space, opening up new possibilities for semantic segmentation tasks where obtaining full ground-truth segmentation maps may not be feasible or practical. Our proposed method of utilising semantic proportions can (i) serve as a booster in the presence of ground-truth segmentation maps, gaining performance without extra data or model complexity, and (ii) be seen as a parameter-free plug-and-play module that can be attached to existing deep neural networks designed for semantic segmentation. Extensive experimental results demonstrate the good performance of our method compared to benchmark methods that rely on ground-truth segmentation maps. Utilising semantic proportions as suggested in this work offers a promising direction for future semantic segmentation research.
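The core idea of supervising a segmentation network with class proportions rather than per-pixel labels can be sketched as follows. This is an illustrative, hypothetical example only (it is not the paper's exact formulation): per-pixel class probabilities predicted by a network are averaged over all pixels to obtain predicted class proportions, which are then compared against the annotated rough proportions with a simple L1 loss.

```python
# Hypothetical sketch of proportion-based supervision for segmentation.
# A network outputs per-pixel class probabilities; averaging them over
# all pixels gives predicted class proportions, which can be matched
# against annotated rough proportions. Illustrative only.

def predicted_proportions(prob_map):
    """prob_map: H x W x C nested lists of per-pixel class probabilities.
    Returns the C predicted class proportions (mean over all pixels)."""
    h, w = len(prob_map), len(prob_map[0])
    c = len(prob_map[0][0])
    totals = [0.0] * c
    for row in prob_map:
        for pixel in row:
            for k in range(c):
                totals[k] += pixel[k]
    n = h * w
    return [t / n for t in totals]

def proportion_loss(prob_map, target_props):
    """L1 distance between predicted and annotated class proportions."""
    pred = predicted_proportions(prob_map)
    return sum(abs(p - t) for p, t in zip(pred, target_props))

# Toy 2x2 image with 2 classes: every pixel predicted fully as class 0,
# while the annotator estimated proportions of 75% / 25%.
prob_map = [[[1.0, 0.0], [1.0, 0.0]],
            [[1.0, 0.0], [1.0, 0.0]]]
print(proportion_loss(prob_map, [0.75, 0.25]))  # 0.5
```

Such a loss is differentiable in the predicted probabilities, so it can in principle replace (or be combined with) a per-pixel cross-entropy term during training.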
Aysel, Halil Ibrahim
Cai, Xiaohao
Prügel-Bennett, Adam
24 May 2023
Text
2305.15608v2
- Author's Original
More information
Published date: 24 May 2023
Keywords:
cs.CV, cs.AI
Identifiers
Local EPrints ID: 503864
URI: http://eprints.soton.ac.uk/id/eprint/503864
PURE UUID: dc541b9e-b233-4287-8000-3dec27357389
Catalogue record
Date deposited: 15 Aug 2025 16:38
Last modified: 16 Aug 2025 02:02