University of Southampton Institutional Repository

Hi-ResNet: edge detail enhancement for high-resolution remote sensing segmentation


Chen, Yuxia, Fang, Pengcheng, Zhong, Xiaoling, Yu, Jianhui, Zhang, Xiaoming and Li, Tianrui (2024) Hi-ResNet: edge detail enhancement for high-resolution remote sensing segmentation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 17, 15024-15040. (doi:10.1109/JSTARS.2024.3444773).

Record type: Article

Abstract

High-resolution remote sensing (HRS) semantic segmentation extracts key objects from high-resolution imagery over wide coverage areas. However, objects of the same category in HRS images often vary markedly in scale and shape across diverse geographical environments, making the data distribution difficult to fit. In addition, complex background environments cause objects of different categories to appear similar, so many objects are misclassified as background. These issues leave existing learning algorithms suboptimal. In this work, we address these problems by proposing a high-resolution remote sensing network (Hi-ResNet) with an efficient network design consisting of a funnel module, a multibranch module built from stacks of information aggregation (IA) blocks, and a feature refinement module, applied sequentially, together with a class-agnostic edge-aware (CEA) loss. Specifically, the funnel module downsamples the initial input image, reducing computational cost while extracting high-resolution semantic information. The processed feature maps are then downsampled incrementally into multiresolution branches to capture image features at different scales. Furthermore, by combining window multihead self-attention, squeeze-and-excitation attention, and depthwise convolution, the lightweight and efficient IA blocks distinguish image features of the same class that vary in scale and shape. Finally, the feature refinement module integrates the CEA loss function, which disambiguates inter-class objects with similar shapes and increases the distance between data distributions for correct predictions. With effective pretraining strategies, we demonstrate the superiority of Hi-ResNet over prevalent existing methods on three HRS segmentation benchmarks.
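The squeeze-and-excitation attention used inside the IA blocks can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the weight shapes and reduction ratio `r` are illustrative assumptions, showing only the generic squeeze-excite pattern (global pooling, a small bottleneck, and per-channel gating):

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Apply squeeze-and-excitation gating to a feature map x of shape (C, H, W)."""
    # "Squeeze": global average pool over spatial dimensions -> per-channel summary (C,)
    s = x.mean(axis=(1, 2))
    # "Excite": bottleneck dense layer with ReLU, then expansion with sigmoid
    z = np.maximum(w1 @ s, 0.0)              # (C // r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))   # (C,) values in (0, 1)
    # Rescale each channel of the input by its learned gate
    return x * gate[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                      # r is the channel-reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))        # squeeze weights (hypothetical, untrained)
w2 = rng.standard_normal((C, C // r))        # excite weights (hypothetical, untrained)
y = squeeze_excite(x, w1, w2)
print(y.shape)
```

Because the gate is a sigmoid output, each channel is attenuated by a factor in (0, 1), which lets the block emphasize informative channels while keeping the spatial layout unchanged.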

Text
Hi-ResNet_Edge_Detail_Enhancement_for_High-Resolution_Remote_Sensing_Segmentation - Version of Record

More information

Accepted/In Press date: 13 August 2024
Published date: 16 August 2024

Identifiers

Local EPrints ID: 503225
URI: http://eprints.soton.ac.uk/id/eprint/503225
ISSN: 1939-1404
PURE UUID: 853b3f77-3a6d-41f0-a0f9-693fb4b3f780
ORCID for Pengcheng Fang: orcid.org/0009-0008-6215-4335

Catalogue record

Date deposited: 24 Jul 2025 16:39
Last modified: 22 Aug 2025 02:43


Contributors

Author: Yuxia Chen
Author: Pengcheng Fang
Author: Xiaoling Zhong
Author: Jianhui Yu
Author: Xiaoming Zhang
Author: Tianrui Li


Contact ePrints Soton: eprints@soton.ac.uk

ePrints Soton supports OAI 2.0 with a base URL of http://eprints.soton.ac.uk/cgi/oai2

This repository has been built using EPrints software, developed at the University of Southampton, but available to everyone to use.
