University of Southampton Institutional Repository

Metadata enhanced feature learning for efficient interpretation of AUV gathered seafloor visual imagery

Yamada, Takaki
81c66c35-0e2b-4342-80fa-cbee6ff9ce5f
Massot Campos, Miguel
a55d7b32-c097-4adf-9483-16bbf07f9120
Curtis, Emma Juliet
e07ed097-26f2-4a6d-94d3-84be8d4c66cf
Pizarro, Oscar
a9ed2c7e-ae8d-4c92-bd02-7e9981e4d4f1
Williams, Stefan B.
c9477238-5139-4b74-804c-3b9b464f6949
Huvenne, Veerle
f22be3e2-708c-491b-b985-a438470fa053
Thornton, Blair
8293beb5-c083-47e3-b5f0-d9c3cee14be9

Yamada, Takaki, Massot Campos, Miguel, Curtis, Emma Juliet, Pizarro, Oscar, Williams, Stefan B., Huvenne, Veerle and Thornton, Blair (2021) Metadata enhanced feature learning for efficient interpretation of AUV gathered seafloor visual imagery.

Record type: Conference or Workshop Item (Other)

Abstract

Camera-equipped Autonomous Underwater Vehicles (AUVs) typically gather tens to hundreds of thousands of georeferenced seafloor images in a single deployment. However, taking full advantage of this growing repository of data is a major challenge for scientific progress. Although modern machine learning techniques, e.g. deep learning, are potentially useful for interpreting these images, much of the progress in this field has been driven by the availability of large training datasets of expert human annotations for terrestrial and satellite imaging applications. Such datasets do not currently exist in the marine domain, and even if they did, it is not clear whether the sensitivity of marine images to observation conditions, such as altitude, water turbidity and different illumination sources, will limit the utility of such initiatives. Most applications of deep learning to marine imagery have used training datasets generated specifically on a per-survey basis, and although the results are encouraging, the high workload involved in generating dataset-specific expert training labels is unlikely to be justifiable in most applications.
To address this issue, we investigate the use of unsupervised feature learning (or representation learning), where lower-dimensional feature vectors are derived from the original high-dimensional image data through Convolutional Neural Networks (CNNs) without any human annotations. Once the feature vectors of the original images, which retain only the information useful for distinguishing habitats and substrates, are obtained, various interpretation techniques such as clustering, content retrieval and few-shot learning can be applied efficiently.
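The pipeline described above, encoding images to low-dimensional feature vectors without labels and then clustering them, can be illustrated with a minimal NumPy sketch. For brevity this substitutes a closed-form linear autoencoder for the CNN and uses synthetic "images" with two habitat patterns; the data and all names here are hypothetical, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for seafloor images: two "habitats" with different mean patterns.
n, dim, latent = 200, 64, 8
habitat = rng.integers(0, 2, n)
images = rng.normal(0.0, 0.3, (n, dim)) + habitat[:, None] * np.linspace(0, 1, dim)

# A linear autoencoder stands in for the CNN encoder/decoder: encode to
# `latent` dims, decode back, minimise reconstruction error. The closed-form
# optimum of a linear autoencoder is the top principal directions (via SVD).
mean = images.mean(axis=0)
centred = images - mean
_, _, vt = np.linalg.svd(centred, full_matrices=False)
encoder = vt[:latent].T             # (dim, latent) projection
features = centred @ encoder        # low-dimensional feature vectors

# Simple 2-means clustering on the learned features (no labels used).
# Initialise centres at the extremes of the first feature dimension.
centres = features[[features[:, 0].argmin(), features[:, 0].argmax()]]
for _ in range(20):
    assign = np.argmin(((features[:, None] - centres[None]) ** 2).sum(-1), axis=1)
    centres = np.stack([features[assign == k].mean(0) for k in range(2)])

# Clusters should largely recover the two habitats (up to label permutation).
agreement = max((assign == habitat).mean(), (assign != habitat).mean())
print(f"cluster/habitat agreement: {agreement:.2f}")
```

The design point is that the expensive human-annotation step is deferred: labels are only needed afterwards, and only to name the clusters, not to train the encoder.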
In this work, we demonstrate autoencoder and contrastive learning-based feature learning techniques specifically designed for seafloor visual imagery [1]. The proposed methods can leverage metadata gathered alongside the images by the AUV, e.g. georeference, water temperature and saliency. We confirm that metadata guiding significantly improves feature learning, and demonstrate applications to unsupervised and semi-supervised mapping of habitat, substrate and infrastructure distribution on the Southern Hydrate Ridge (Oregon, USA, 12k images), Darwin Mounds (UK, 20k images) and Tasmania (Australia, 110k images) datasets, with validation against human annotations.
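The intuition behind metadata guiding, that images captured close together on the seafloor are likely to show similar terrain, can be sketched as a contrastive objective weighted by georeference. The function below is a hypothetical illustration of this idea, not the loss from [1]: it attracts feature pairs whose capture positions are within some radius and repels the rest up to a margin.

```python
import numpy as np

rng = np.random.default_rng(1)

def location_guided_contrastive_loss(features, positions, radius=5.0, margin=1.0):
    """Mean contrastive loss: attract pairs captured within `radius` metres
    of each other, repel (up to `margin`) pairs beyond it.
    `features` is (n, d); `positions` is (n, 2) easting/northing in metres."""
    fdist = np.linalg.norm(features[:, None] - features[None], axis=-1)
    gdist = np.linalg.norm(positions[:, None] - positions[None], axis=-1)
    near = gdist < radius
    np.fill_diagonal(near, False)          # exclude self-pairs
    far = ~near
    np.fill_diagonal(far, False)
    attract = (fdist[near] ** 2).mean() if near.any() else 0.0
    repel = (np.clip(margin - fdist[far], 0.0, None) ** 2).mean() if far.any() else 0.0
    return attract + repel

# Features that mirror the spatial layout score lower than scrambled ones,
# so minimising this loss pulls the feature space towards spatial coherence.
positions = rng.uniform(0, 100, (50, 2))
good = positions / 100.0                   # features aligned with location
bad = rng.permutation(good)                # same features, scrambled pairing
print(location_guided_contrastive_loss(good, positions) <
      location_guided_contrastive_loss(bad, positions))
```

In a real training loop this quantity would be minimised with respect to the encoder's parameters; here it only demonstrates that the objective prefers spatially coherent features.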

[1] Takaki Yamada, Adam Prügel-Bennett, Blair Thornton, "Learning Features from Georeferenced Seafloor Imagery with Location Guided Autoencoders", Journal of Field Robotics, 38, 52-67, 2021. DOI: 10.1002/rob.21961

This record has no associated files available for download.

More information

Published date: 5 May 2021

Identifiers

Local EPrints ID: 451155
URI: http://eprints.soton.ac.uk/id/eprint/451155
PURE UUID: 9474db93-1a26-4ebb-aa1b-355b5f33b2ab
ORCID for Takaki Yamada: orcid.org/0000-0002-5090-7239
ORCID for Miguel Massot Campos: orcid.org/0000-0002-1202-0362
ORCID for Veerle Huvenne: orcid.org/0000-0001-7135-6360

Catalogue record

Date deposited: 14 Sep 2021 15:30
Last modified: 16 Sep 2021 01:59

Export record

Contributors

Author: Takaki Yamada
Author: Miguel Massot Campos
Author: Emma Juliet Curtis
Author: Oscar Pizarro
Author: Stefan B. Williams
Author: Veerle Huvenne
Author: Blair Thornton
