University of Southampton Institutional Repository

Aggregating multiple bio-inspired image region classifiers for effective and lightweight visual place recognition

Arcanjo, Bruno, Ferrarini, Bruno, Fasli, Maria, Milford, Michael, Mcdonald-Maier, Klaus D. and Ehsan, Shoaib (2024) Aggregating multiple bio-inspired image region classifiers for effective and lightweight visual place recognition. IEEE Robotics and Automation Letters, 9 (4), 3315-3322. (doi:10.1109/LRA.2024.3367275).

Record type: Article

Abstract

Visual place recognition (VPR) enables autonomous systems to localize themselves within an environment using image information. While VPR techniques built upon a Convolutional Neural Network (CNN) backbone dominate state-of-the-art VPR performance, their high computational requirements make them unsuitable for platforms equipped with low-end hardware. Recently, a lightweight VPR system based on multiple bio-inspired classifiers, dubbed DrosoNets, has been proposed, achieving great computational efficiency at the cost of reduced absolute place retrieval performance. In this letter, we propose a novel multi-DrosoNet localization system, dubbed RegionDrosoNet, with significantly improved VPR performance while preserving a low computational profile. Our approach relies on specializing distinct groups of DrosoNets on differently sliced partitions of the original images, increasing model differentiation. Furthermore, we introduce a novel voting module that combines the outputs of all DrosoNets into the final place prediction by considering multiple top reference candidates from each DrosoNet. RegionDrosoNet outperforms other lightweight VPR techniques when dealing with both appearance changes and viewpoint variations. Moreover, it competes with computationally expensive methods on some benchmark datasets at a small fraction of their online inference time.
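
The abstract describes the core idea at a high level: groups of DrosoNet classifiers are specialized on different slices of the input image, and their outputs are fused by a voting module that considers several top reference candidates from each classifier. The snippet below is a minimal illustrative sketch of such a top-k vote aggregation step, assuming a simple rank-and-score weighting; the function name, the weighting scheme, and the value of k are assumptions made for illustration and do not reflect the authors' actual implementation.

# Hedged illustration only: a minimal sketch of top-k vote aggregation across
# several region-specialized classifiers, loosely following the idea in the
# abstract. The weighting scheme and k are assumptions, not the paper's method.
import numpy as np

def aggregate_votes(score_matrix: np.ndarray, k: int = 5) -> int:
    """Combine per-classifier scores into a single place prediction.

    score_matrix: (num_classifiers, num_reference_places) array, where each row
    holds one DrosoNet-style classifier's confidence for every reference place.
    Each classifier casts weighted votes for its k best-scoring places; the
    place with the largest accumulated vote is returned.
    """
    num_classifiers, num_places = score_matrix.shape
    votes = np.zeros(num_places)
    for scores in score_matrix:
        top_k = np.argsort(scores)[-k:]           # indices of the k best places, ascending
        for rank, place in enumerate(top_k, 1):   # lowest rank = weakest candidate
            votes[place] += rank * scores[place]  # assumed rank-and-score weighting
    return int(np.argmax(votes))

# Example: 8 classifiers (e.g. 4 image regions x 2 DrosoNets each) scoring 100 places.
rng = np.random.default_rng(0)
scores = rng.random((8, 100))
print("Predicted reference place:", aggregate_votes(scores))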

This record has no associated files available for download.

More information

Accepted/In Press date: 6 February 2024
e-pub ahead of print date: 19 February 2024
Published date: 1 April 2024
Keywords: bioinspired robot learning, localization, vision-based navigation

Identifiers

Local EPrints ID: 503028
URI: http://eprints.soton.ac.uk/id/eprint/503028
ISSN: 2377-3766
PURE UUID: 447667f0-50d4-4f21-9b9f-7d24282dd43b
ORCID for Bruno Arcanjo: orcid.org/0000-0003-0783-8394
ORCID for Shoaib Ehsan: orcid.org/0000-0001-9631-1898

Catalogue record

Date deposited: 16 Jul 2025 16:50
Last modified: 17 Jul 2025 02:27

Contributors

Author: Bruno Arcanjo
Author: Bruno Ferrarini
Author: Maria Fasli
Author: Michael Milford
Author: Klaus D. Mcdonald-Maier
Author: Shoaib Ehsan