University of Southampton Institutional Repository

Comparison of image annotation data generated by multiple investigators for benthic ecology


Durden, Jennifer M., Bett, Brian J., Schoening, Timm, Morris, Kirsty J., Nattkemper, Tim W. and Ruhl, Henry A. (2016) Comparison of image annotation data generated by multiple investigators for benthic ecology. Marine Ecology Progress Series, 552, 61-70. (doi:10.3354/meps11775).

Record type: Article

Abstract

Multiple investigators often generate data from seabed images within a single image set to reduce the time burden, particularly with the large photographic surveys now available to ecological studies. These data (annotations) are known to vary as a result of differences in investigator opinion on specimen classification, and of human factors such as fatigue and cognition. These variations are rarely recorded or quantified, nor are their impacts on derived ecological metrics (density, diversity, composition). We compared the annotations of 73 megafaunal morphotypes made by three investigators in ~28,000 images, including 650 images annotated in common. Successful annotation was defined as both detecting and correctly classifying a specimen. Estimated specimen detection success was 77%, and classification success was 95%, giving an annotation success rate of 73%. Specimen detection success varied substantially by morphotype (12-100%). Variation in the detection of common taxa resulted in significant differences in apparent faunal density and community composition among investigators. Such bias has the potential to produce spurious ecological interpretations if not appropriately controlled or accounted for. We recommend that photographic studies document the use of multiple annotators and quantify potential inter-investigator bias. Randomisation of the sampling unit (photograph or video clip) is clearly critical to the effective removal of human annotation bias in multiple-annotator studies (and indeed in single-annotator works).
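
The headline figures in the abstract combine multiplicatively: a specimen is successfully annotated only if it is both detected and then correctly classified. The minimal Python sketch below illustrates that reading; it is not taken from the paper, and the function name and simple product relationship are assumptions for illustration only (0.77 × 0.95 ≈ 0.73, consistent with the reported 73%).

# Illustrative sketch only, not from the paper: treat annotation success as the
# product of detection success and classification success.
def annotation_success(detection_rate: float, classification_rate: float) -> float:
    """Probability that a specimen is both detected and correctly classified."""
    return detection_rate * classification_rate

# Using the abstract's estimates: 77% detection, 95% classification.
print(round(annotation_success(0.77, 0.95), 2))  # ~0.73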

Text: MEPS201512030 Postprint version.pdf - Accepted Manuscript. Download (1MB)
Text: m552p061.pdf - Version of Record. Available under License Other. Download (429kB)
Text: m552p061_supp.pdf - Other (supplementary material). Available under License Other. Download (208kB)

More information

Accepted/In Press date: May 2016
Published date: 23 June 2016
Keywords: Expert knowledge, Scoring, Visual imaging, Multiple investigators, Data quality, Quality assurance/quality control
Organisations: Ocean and Earth Science, Marine Biogeochemistry

Identifiers

Local EPrints ID: 394653
URI: http://eprints.soton.ac.uk/id/eprint/394653
PURE UUID: b7c509d2-0495-43e6-a5eb-330eba216427

Catalogue record

Date deposited: 23 May 2016 15:55
Last modified: 15 Mar 2024 05:35

Contributors

Author: Jennifer M. Durden
Author: Brian J. Bett
Author: Timm Schoening
Author: Kirsty J. Morris
Author: Tim W. Nattkemper
Author: Henry A. Ruhl

