University of Southampton Institutional Repository

An efficient and scalable collection of fly-inspired voting units for visual place recognition in changing environments

Arcanjo, Bruno, Ferrarini, Bruno, Milford, Michael, McDonald-Maier, Klaus D. and Ehsan, Shoaib (2022) An efficient and scalable collection of fly-inspired voting units for visual place recognition in changing environments. IEEE Robotics and Automation Letters, 7 (2), 2527-2534. (doi:10.1109/LRA.2022.3140827).

Record type: Article

Abstract

State-of-the-art visual place recognition performance is currently being achieved using deep-learning-based approaches. Despite recent efforts to design lightweight convolutional neural network based models, these can still be too expensive for the most hardware-restricted robot applications. Low-overhead visual place recognition techniques would not only enable platforms equipped with low-end, cheap hardware but also reduce computation on more powerful systems, allowing these resources to be allocated to other navigation tasks. In this work, our goal is to provide an algorithm of extreme compactness and efficiency while achieving state-of-the-art robustness to appearance changes and small point-of-view variations. Our first contribution is DrosoNet, an exceptionally compact model inspired by the odor processing abilities of the fruit fly, Drosophila melanogaster. Our second and main contribution is a voting mechanism that leverages multiple small and efficient classifiers to achieve more robust and consistent visual place recognition compared to a single one. We use DrosoNet as the baseline classifier for the voting mechanism and evaluate our models on five benchmark datasets, assessing moderate to extreme appearance changes and small to moderate viewpoint variations. We then compare the proposed algorithms to state-of-the-art methods, both in terms of area under the precision-recall curve and computational efficiency.
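
The voting mechanism is described above only at a high level. As a rough illustration of the general idea, the minimal Python sketch below shows voting across several small place classifiers, assuming each unit outputs a score per reference place. Every name and detail in it is a hypothetical assumption for illustration, not the authors' DrosoNet implementation, which is specified in the paper itself.

    # Hypothetical sketch: place recognition by voting across several
    # small classifiers, as described at a high level in the abstract.
    # All names and details are illustrative assumptions only.
    import numpy as np

    def vote_for_place(classifier_scores: np.ndarray) -> int:
        """classifier_scores: (n_units, n_places) array, one row of
        scores per small classifier over the known reference places.
        Each unit votes for its best-scoring place; the place with
        the most votes wins (ties broken by summed score mass)."""
        votes = np.argmax(classifier_scores, axis=1)  # one vote per unit
        tally = np.bincount(votes, minlength=classifier_scores.shape[1])
        best = np.flatnonzero(tally == tally.max())   # tied leading places
        if len(best) == 1:
            return int(best[0])
        # Break ties by total score over the tied places.
        return int(best[np.argmax(classifier_scores[:, best].sum(axis=0))])

    # Example: 8 voting units, 5 reference places
    rng = np.random.default_rng(0)
    scores = rng.random((8, 5))
    print(vote_for_place(scores))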

Text: An_Efficient_and_Scalable_Collection_of_Fly-Inspired_Voting_Units_for_Visual_Place_Recognition_in_Changing_Environments - Version of Record
Available under License Creative Commons Attribution.
Download (1MB)

More information

e-pub ahead of print date: 6 January 2022
Published date: 1 April 2022
Keywords: Computational modeling, Convolutional neural networks, Feature extraction, Hardware, Navigation, Robots, Visualization

Identifiers

Local EPrints ID: 473469
URI: http://eprints.soton.ac.uk/id/eprint/473469
ISSN: 2377-3766
PURE UUID: 825bb3c0-bf32-4d10-b755-a98b34c5cb93
ORCID for Shoaib Ehsan: orcid.org/0000-0001-9631-1898

Catalogue record

Date deposited: 19 Jan 2023 17:34
Last modified: 17 Mar 2024 04:16

Contributors

Author: Bruno Arcanjo
Author: Bruno Ferrarini
Author: Michael Milford
Author: Klaus D. McDonald-Maier
Author: Shoaib Ehsan
