Vision-Based Localization Algorithm Based on Landmark Matching, Triangulation, Reconstruction, and Comparison
Many generic position-estimation algorithms are vulnerable to ambiguity introduced by nonunique landmarks. Also, the available high-dimensional image data is not fully used when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating the list of possible position estimates through triangulation. Reconstruction and comparison then rank the possible estimates. The LTRC algorithm has been implemented in an interpreted language on a robot equipped with a panoramic vision system. Empirical data shows a marked improvement in accuracy when compared with the established random sample consensus (RANSAC) method. LTRC is also robust against inaccurate map data.
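The triangulation stage named in the abstract can be illustrated with a minimal sketch: given the map positions of two matched landmarks and the absolute bearings to them (as a panoramic vision system could provide), the robot's position is the intersection of the two bearing rays. This is a hypothetical illustration of generic bearing-only triangulation, not the paper's implementation; the function and variable names are assumptions.

```python
import math

def triangulate(lm1, bearing1, lm2, bearing2):
    """Estimate (x, y) from two map landmarks and absolute bearings.

    Each landmark lmi = (Lx, Ly) is observed at world-frame bearing
    bearingi (radians).  The robot position p satisfies, for each i,
    (lmi - p) parallel to (cos bi, sin bi), i.e.
        x*sin(bi) - y*cos(bi) = Lx*sin(bi) - Ly*cos(bi),
    a 2x2 linear system solved here by Cramer's rule.
    """
    a11, a12 = math.sin(bearing1), -math.cos(bearing1)
    a21, a22 = math.sin(bearing2), -math.cos(bearing2)
    c1 = lm1[0] * a11 + lm1[1] * a12
    c2 = lm2[0] * a21 + lm2[1] * a22
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        return None  # bearings (anti)parallel: rays do not intersect uniquely
    x = (c1 * a22 - c2 * a12) / det
    y = (a11 * c2 - a21 * c1) / det
    return (x, y)

# Robot at (2, 1): landmark (5, 1) lies at bearing 0, landmark (2, 5) at pi/2.
print(triangulate((5.0, 1.0), 0.0, (2.0, 5.0), math.pi / 2))  # → (2.0, 1.0)
```

In the full LTRC pipeline, each hypothesized landmark pairing would yield one such candidate position; the reconstruction-and-comparison stage then ranks the candidates against the observed image data.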
Yuen, D.C.K. and MacDonald, B.A. (2005) Vision-Based Localization Algorithm Based on Landmark Matching, Triangulation, Reconstruction, and Comparison. IEEE Transactions on Robotics, 21 (2), 217-226.
Text: dybm-visionloc-2k5.pdf (Other)
More information
Published date: 2005
Organisations:
Electronics & Computer Science
Identifiers
Local EPrints ID: 262758
URI: http://eprints.soton.ac.uk/id/eprint/262758
PURE UUID: 390c2bf1-4cea-488d-830e-28dccdafa326
Catalogue record
Date deposited: 28 Jun 2006
Last modified: 14 Mar 2024 07:17
Contributors
Author: D.C.K. Yuen
Author: B.A. MacDonald