University of Southampton Institutional Repository

Force field feature extraction for ear biometrics


Hurley, David J., Nixon, Mark S. and Carter, John N. (2005) Force field feature extraction for ear biometrics. Computer Vision and Image Understanding, 98 (3), 491-512. (doi:10.1016/j.cviu.2004.11.001).

Record type: Article

Abstract

The overall objective in defining feature space is to reduce the dimensionality of the original pattern space whilst maintaining discriminatory power for classification. To meet this objective in the context of ear biometrics, a new force field transformation treats the image as an array of mutually attracting particles that act as the source of a Gaussian force field. Underlying the force field there is a scalar potential energy field, which in the case of an ear takes the form of a smooth surface that resembles a small mountain with a number of peaks joined by ridges. The peaks correspond to potential energy wells and, to extend the analogy, the ridges correspond to potential energy channels. Since the transform also turns out to be invertible, and since the surface is otherwise smooth, information theory suggests that much of the information is transferred to these features, thus confirming their efficacy. We previously described how field line feature extraction, using an algorithm similar to gradient descent, exploits the directional properties of the force field to locate these channels and wells automatically, which then form the basis of characteristic ear features. We now show how an analysis of the mechanism of this algorithmic approach leads to a closed analytical description based on the divergence of force direction, which reveals that channels and wells are really manifestations of the same phenomenon. We further show that this new operator, with its own distinct advantages, has a striking similarity to the Marr-Hildreth operator, but with the important difference that it is non-linear. As well as addressing faster implementation, invertibility, and brightness sensitivity, the technique is validated by performing recognition on a database of ears selected from the XM2VTS face database, and by comparing the results with the more established technique of Principal Components Analysis. This confirms not only that ears do indeed appear to have potential as a biometric, but also that the new approach is well suited to their description, being especially robust in the presence of noise, and having the advantage that the ear does not need to be explicitly extracted from the background.
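The abstract is dense, so a short sketch may help make its two central constructions concrete. The following is a minimal NumPy illustration, not the authors' code: it assumes an inverse-square (gravitational-style) attraction with the corresponding inverse-distance potential energy, and it approximates the convergence operator, read here as the negative divergence of the normalised force direction, by central differences. The function names and the small test image are illustrative only.

    # Minimal sketch (assumed formulation, not the authors' implementation) of a
    # force field transform and a convergence operator of the kind the abstract
    # describes: every pixel is treated as an attracting particle whose "mass"
    # is its intensity.
    import numpy as np

    def force_field(img):
        """Force components (fy, fx) and the scalar potential energy field,
        computed by direct summation over all pixel pairs."""
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(float)
        fy = np.zeros((h, w))
        fx = np.zeros((h, w))
        energy = np.zeros((h, w))
        for j in range(h):
            for i in range(w):
                dy = ys - j                  # displacement from pixel (j, i)
                dx = xs - i                  # to every source pixel
                r2 = dy * dy + dx * dx
                r2[j, i] = np.inf            # no self-force, no self-energy
                r = np.sqrt(r2)
                # inverse-square attraction, weighted by source intensity
                fy[j, i] = np.sum(img * dy / (r2 * r))
                fx[j, i] = np.sum(img * dx / (r2 * r))
                # inverse-distance potential energy underlying the force
                energy[j, i] = np.sum(img / r)
        return fy, fx, energy

    def convergence(fy, fx):
        """Negative divergence of the force *direction* (unit) field; channels
        and wells appear together as ridges of high convergence."""
        mag = np.hypot(fy, fx) + 1e-12       # avoid division by zero
        uy, ux = fy / mag, fx / mag
        # divergence by central differences; axis 0 is y (rows), axis 1 is x
        return -(np.gradient(uy, axis=0) + np.gradient(ux, axis=1))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.random((32, 32))           # stand-in for a cropped ear image
        fy, fx, energy = force_field(img)
        print(energy.shape, convergence(fy, fx).shape)

Since each force component is a convolution of the image with a fixed kernel, the quadratic-cost double loop above could be replaced by FFT-based convolution; that observation is in the spirit of the "faster implementation" the abstract mentions, though the paper itself should be consulted for the authors' actual formulation.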

Text: hurley_cviu.pdf (1MB)

More information

Published date: June 2005
Organisations: Southampton Wireless Group

Identifiers

Local EPrints ID: 260242
URI: http://eprints.soton.ac.uk/id/eprint/260242
PURE UUID: 133c5bcb-f135-4359-b6ae-01befd41b243
ORCID for Mark S. Nixon: orcid.org/0000-0002-9174-5934

Catalogue record

Date deposited: 24 Jun 2005
Last modified: 15 Mar 2024 02:35

Contributors

Author: David J. Hurley
Author: Mark S. Nixon
Author: John N. Carter

