University of Southampton Institutional Repository

Computationally efficient visual-inertial sensor fusion for Global Positioning System-denied navigation on a small quadrotor


Liu, Chang, Prior, Stephen D., Teacy, W.T. Luke and Warner, Martin (2016) Computationally efficient visual-inertial sensor fusion for Global Positioning System-denied navigation on a small quadrotor. Advances in Mechanical Engineering, 8 (3), 1-11. (doi:10.1177/1687814016640996).

Record type: Article

Abstract

Because of the complementary nature of visual and inertial sensors, the combination of both is able to provide fast and accurate 6 degree-of-freedom state estimation, which is the fundamental requirement for robotic (especially, unmanned aerial vehicle) navigation tasks in Global Positioning System–denied environments. This article presents a computationally efficient visual–inertial fusion algorithm, by separating orientation fusion from the position fusion process. The algorithm is designed to perform 6 degree-of-freedom state estimation, based on a gyroscope, an accelerometer and a monocular visual-based simultaneous localisation and mapping algorithm measurement. It also recovers the visual scale for the monocular visual-based simultaneous localisation and mapping. In particular, the fusion algorithm treats the orientation fusion and position fusion as two separate processes, where the orientation fusion is based on a very efficient gradient descent algorithm, whereas the position fusion is based on a 13-state linear Kalman filter. The elimination of the magnetometer sensor avoids the problem of magnetic distortion, which makes it a power-on-and-go system once the accelerometer is factory calibrated. The resulting algorithm shows a significant computational reduction over the conventional extended Kalman filter, with competitive accuracy. Moreover, the separation between orientation and position fusion processes enables the algorithm to be easily implemented onto two individual hardware elements and thus allows the two fusion processes to be executed concurrently.
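The abstract describes a split architecture: a gradient-descent filter handles orientation, while a 13-state linear Kalman filter fuses inertial predictions with monocular SLAM position measurements. As a rough illustration of the linear Kalman predict/update cycle only (not the paper's 13-state filter), here is a toy two-state position/velocity sketch in Python; the model matrices, noise values, update rate and measurement sequence are all assumptions chosen for demonstration.

```python
import numpy as np

# Toy 2-state (position, velocity) linear Kalman filter sketch.
# The paper's position filter uses 13 states; this only illustrates
# the predict/update structure of a linear Kalman filter.

def kf_predict(x, P, F, Q):
    """Propagate the state estimate x and covariance P through model F."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Correct the prediction with a measurement z (e.g. a SLAM position fix)."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.01                              # 100 Hz prediction rate (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
Q = 1e-4 * np.eye(2)                   # process noise (assumed)
H = np.array([[1.0, 0.0]])             # SLAM measures position only
R = np.array([[1e-2]])                 # measurement noise (assumed)

x, P = np.zeros(2), np.eye(2)
for z in [0.1, 0.2, 0.31, 0.39]:       # synthetic position measurements
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, np.array([z]), H, R)
```

Because every matrix here is constant and the model is linear, no Jacobians are re-evaluated per step, which hints at why the paper's linear formulation is cheaper than a conventional extended Kalman filter.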

Text: Advances in Mechanical Engineering-2016-Liu-.pdf - Version of Record (1 MB), available under License Other.

More information

Accepted/In Press date: 1 March 2016
Published date: 30 March 2016
Organisations: Computational Engineering & Design Group, Education Hub

Identifiers

Local EPrints ID: 393763
URI: http://eprints.soton.ac.uk/id/eprint/393763
ISSN: 1687-8132
PURE UUID: 6aea836e-520c-4cf8-948e-94cd40e165a8
ORCID for Stephen D. Prior: orcid.org/0000-0002-4993-4942
ORCID for Martin Warner: orcid.org/0000-0002-1483-0561

Catalogue record

Date deposited: 04 May 2016 10:45
Last modified: 18 Feb 2021 17:20

Contributors

Author: Chang Liu
Author: Stephen D. Prior
Author: W.T. Luke Teacy
Author: Martin Warner
