Automated interpretation of seafloor visual maps obtained using underwater robots
Lim, Jin Wei, Prugel-Bennett, Adam and Thornton, Blair (2018) Automated interpretation of seafloor visual maps obtained using underwater robots. 2018 OCEANS - MTS/IEEE Kobe Techno-Oceans (OTO), Kobe Convention Center, Kobe, Japan, 28-31 May 2018. 8 pp. (doi:10.1109/OCEANSKOBE.2018.8559247)
Record type: Conference or Workshop Item (Paper)
Abstract
Scientific surveys using underwater robots can recover huge volumes of seafloor imagery. For mapping applications, these images can be routinely assembled into vast, seamless, georeferenced seafloor visual reconstructions; however, interpreting these data to extract useful quantitative information typically relies on the manual effort of expert human annotators. This process is slow and forms a bottleneck in the flow of information. This work explores the feasibility of using machine learning tools, specifically Convolutional Neural Networks (CNNs), to at least partially automate the annotation process. A CNN was constructed to identify Shinkaia crosnieri galatheid crabs and Bathymodiolus mussels, two distinct megabenthic taxa found in vast numbers in hydrothermally active regions of the seafloor. The CNN was trained with varying numbers of annotations, where each annotation consisted of a small region surrounding a positive label at the centre of each individual within a seamless seafloor image reconstruction. Performance was assessed using an independent set of annotated data taken from a separate reconstruction located approximately 500 m away. While the results show that the trained network can be used to classify new datasets at well-characterised levels of uncertainty, performance varied between the two taxa and on a control dataset showing only unpopulated regions of the seafloor. The analysis suggests that the number of training examples required to achieve a given level of accuracy is subject-dependent, and this should be considered when devising annotation strategies that make the best use of human annotation effort and leverage the advantages offered by CNNs.
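The following is a minimal, illustrative sketch of the patch-based workflow the abstract describes, not the authors' implementation. The patch size (PATCH), class set (CLASSES), network (PatchCNN) and helper (extract_patch) are assumptions introduced here for clarity; the paper's actual patch dimensions, architecture and classes may differ.

# Illustrative sketch (assumed details, not the paper's code): classify small
# patches cut from a seafloor mosaic around annotated individuals.
import numpy as np
import torch
import torch.nn as nn

PATCH = 64      # assumed patch size (pixels) around each annotation centre
CLASSES = 3     # e.g. crab, mussel, unpopulated seafloor

def extract_patch(mosaic: np.ndarray, x: int, y: int, size: int = PATCH) -> np.ndarray:
    # Crop a square region centred on an annotated individual; annotations
    # too close to the mosaic border are assumed to be padded or skipped.
    half = size // 2
    return mosaic[y - half:y + half, x - half:x + half, :]

class PatchCNN(nn.Module):
    # Small convolutional classifier for fixed-size RGB seafloor patches.
    def __init__(self, n_classes: int = CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * (PATCH // 8) ** 2, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: train on patches from one reconstruction, then evaluate on patches
# from an independent reconstruction (here, one located ~500 m away).
model = PatchCNN()
batch = torch.randn(8, 3, PATCH, PATCH)   # stand-in for 8 RGB patches
logits = model(batch)                      # shape: (8, CLASSES)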
Text: Lim_2018_Oceans - Accepted Manuscript. Restricted to registered users only.
More information
Published date: May 2018
Venue - Dates: 2018 OCEANS - MTS/IEEE Kobe Techno-Oceans (OTO), Kobe Convention Center, Kobe, Japan, 2018-05-28 - 2018-05-31
Identifiers
Local EPrints ID: 419235
URI: http://eprints.soton.ac.uk/id/eprint/419235
PURE UUID: 3d7e2238-7341-4b38-935a-8eb0d3985f0f
Catalogue record
Date deposited: 09 Apr 2018 16:30
Last modified: 15 Mar 2024 19:05
Contributors
Author: Jin Wei Lim
Author: Adam Prugel-Bennett
Author: Blair Thornton