University of Southampton Institutional Repository

An object-based convolutional neural network (OCNN) for urban land use classification


Zhang, Ce, Sargent, Isabel, Pan, Xin, Li, Huapeng, Gardiner, A., Hare, Jonathon and Atkinson, Peter M. (2018) An object-based convolutional neural network (OCNN) for urban land use classification. Remote Sensing of Environment, 216, 57-70. (doi:10.1016/j.rse.2018.06.034).

Record type: Article

Abstract

Urban land use information is essential for a variety of urban-related applications such as urban planning and regional administration. The extraction of urban land use from very fine spatial resolution (VFSR) remotely sensed imagery has, therefore, drawn much attention in the remote sensing community. Nevertheless, classifying urban land use from VFSR images remains a challenging task, due to the extreme difficulty of differentiating complex spatial patterns to derive high-level semantic labels. Deep convolutional neural networks (CNNs) offer great potential for extracting high-level spatial features, thanks to their hierarchical nature with multiple levels of abstraction. However, blurred object boundaries and geometric distortion, as well as huge computational redundancy, severely restrict the potential application of CNNs to the classification of urban land use. In this paper, a novel object-based convolutional neural network (OCNN) is proposed for urban land use classification using VFSR images. Rather than pixel-wise convolutional processes, the OCNN relies on segmented objects as its functional units, and CNNs are used to analyse and label objects so as to partition within-object and between-object variation. Two CNNs with different model structures and window sizes are developed to predict linearly shaped objects (e.g. Highway, Canal) and general (other, non-linearly shaped) objects. A rule-based decision fusion is then performed to integrate the class-specific classification results. The effectiveness of the proposed OCNN method was tested on aerial photography of two large urban scenes in Southampton and Manchester, Great Britain. The OCNN combined with large and small window sizes achieved excellent classification accuracy and computational efficiency, consistently outperforming its sub-modules, as well as other benchmark comparators, including pixel-wise CNN, contextual MRF and object-based OBIA-SVM methods. The proposed method provides the first object-based CNN framework to effectively and efficiently address the complicated problem of urban land use classification from VFSR images.
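
The abstract describes the OCNN pipeline only at a high level: segment the image into objects, route each object to one of two CNNs according to its shape, and fuse the class-specific predictions with a simple rule. The Python sketch below illustrates one way such a pipeline might be organised; the window sizes, the bounding-box elongation test, the majority vote along linear objects, and all function and model names are illustrative assumptions, not the authors' implementation.

import numpy as np
import torch
import torch.nn as nn

def make_cnn(window: int, n_classes: int) -> nn.Module:
    # Two conv/pool stages halve the spatial size twice, so the flattened
    # feature length is 32 * (window // 4) ** 2.
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * (window // 4) ** 2, n_classes),
    )

LARGE, SMALL, N_CLASSES = 96, 48, 10      # assumed window sizes and class count
general_cnn = make_cnn(LARGE, N_CLASSES)  # large window for general objects
linear_cnn = make_cnn(SMALL, N_CLASSES)   # small window for elongated objects

def is_linear(mask: np.ndarray, ratio: float = 4.0) -> bool:
    # Crude shape test (an assumption; the abstract does not specify one):
    # an object whose bounding box is at least `ratio` times longer than it
    # is wide is treated as linearly shaped.
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return max(h, w) / min(h, w) >= ratio

def patch_at(image: np.ndarray, cy: int, cx: int, size: int) -> torch.Tensor:
    # Extract a (1, 3, size, size) window centred on (cy, cx); zero-padding
    # keeps windows near the image edge the same size.
    pad = size // 2
    padded = np.pad(image, ((0, 0), (pad, pad), (pad, pad)))
    crop = padded[:, cy:cy + size, cx:cx + size]
    return torch.from_numpy(crop).float().unsqueeze(0)

@torch.no_grad()
def classify_object(image: np.ndarray, mask: np.ndarray) -> int:
    # One segmented object, one label: route elongated objects to the
    # small-window CNN (majority vote over points sampled along the object)
    # and everything else to the large-window CNN at the object centroid.
    ys, xs = np.nonzero(mask)
    if is_linear(mask):
        idx = np.linspace(0, len(ys) - 1, num=5, dtype=int)
        votes = [linear_cnn(patch_at(image, ys[i], xs[i], SMALL)).argmax().item()
                 for i in idx]
        return max(set(votes), key=votes.count)
    cy, cx = int(ys.mean()), int(xs.mean())
    return general_cnn(patch_at(image, cy, cx, LARGE)).argmax().item()

# Toy usage: a random 3-band image and one square "object" from a segmentation.
image = np.random.rand(3, 256, 256).astype(np.float32)
mask = np.zeros((256, 256), dtype=bool)
mask[100:140, 100:140] = True
print(classify_object(image, mask))

The point the sketch tries to capture is the one the abstract emphasises: convolution is applied once (or a few times) per segmented object rather than at every pixel, which is what yields the computational savings over pixel-wise CNN classification.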

Text
OCNN_Manuscript_RSE_Ce_Accepted - Accepted Manuscript
Download (971kB)

More information

Accepted/In Press date: 24 June 2018
e-pub ahead of print date: 3 July 2018
Published date: October 2018

Identifiers

Local EPrints ID: 422083
URI: http://eprints.soton.ac.uk/id/eprint/422083
ISSN: 0034-4257
PURE UUID: 2e444b60-0519-4817-998f-87c3bc441846
ORCID for Jonathon Hare: orcid.org/0000-0003-2921-4283

Catalogue record

Date deposited: 16 Jul 2018 16:30
Last modified: 16 Mar 2024 06:49


Contributors

Author: Ce Zhang
Author: Isabel Sargent
Author: Xin Pan
Author: Huapeng Li
Author: A. Gardiner
Author: Jonathon Hare
Author: Peter M. Atkinson



