On guided model-based analysis for ear biometrics


Arbab-Zavar, Banafshe (2009) On guided model-based analysis for ear biometrics. University of Southampton, School of Electronics and Computer Science, Doctoral Thesis, 131pp.

Download

PDF, 19MB

Description/Abstract

Ears are a relatively new biometric whose major advantage is that they appear to maintain their structure with increasing age. Current approaches have exploited both 2D and 3D images of the ear for human identification. Contending that the ear is mainly a planar shape, we use 2D images, which are consistent with deployment in surveillance and other planar-image scenarios. So far, ear biometric approaches have mostly used general properties and the overall appearance of ear images for recognition, while the structure of the ear has received little attention. In this thesis, we propose a new model-based approach to ear biometrics. Our model is a part-wise description of the ear's structure. Drawing on embryological evidence of ear development, we shall show that the ear is indeed a composite structure of individual components. Our model parts are derived by a stochastic clustering method applied to a set of scale-invariant features extracted from a training set. We shall review different accounts of ear formation and consider research into congenital ear anomalies which apportions various components to the ear's complex structure, and we demonstrate that our model description is in accordance with these accounts. We then extend our model description by proposing a new wavelet-based analysis with the specific aim of capturing information in the ear's outer structures. We shall show that this section of the ear is not sufficiently explored by the model, yet, given the large variation in shape it exhibits, it is intuitively significant to the recognition process. In this new analysis, log-Gabor filters exploit the frequency content of the ear's outer structures.
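The thesis's filter parameters are not given in this abstract, but the log-Gabor radial frequency response it builds on has a standard form: a Gaussian on a logarithmic frequency axis, with zero response at DC. The sketch below samples that response in pure Python; the bandwidth ratio `sigma_on_f = 0.55` is an assumed value chosen purely for illustration.

```python
import math

def log_gabor_radial(num_bins, f0, sigma_on_f=0.55):
    """Radial frequency response of a 1-D log-Gabor filter,
    sampled at num_bins frequencies in (0, 0.5] (cycles/sample).

    f0         : centre frequency, where the response peaks at 1
    sigma_on_f : bandwidth parameter (sigma/f0 ratio); 0.55 is an
                 illustrative choice, not taken from the thesis
    """
    response = []
    for k in range(1, num_bins + 1):
        f = 0.5 * k / num_bins  # start above f = 0, where log is undefined
        g = math.exp(-(math.log(f / f0) ** 2)
                     / (2 * math.log(sigma_on_f) ** 2))
        response.append(g)
    return response
```

Because the Gaussian lives on a log-frequency axis, the filter has no DC component and an extended tail towards high frequencies, which is what makes it suitable for capturing the broad range of spatial frequencies in the ear's outer structures.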

For recognition, ears are automatically enrolled via our new enrolment algorithm, which exploits the elliptical shape of the ear in head-profile images. The enrolled samples are then recognized via the parts selected by the model. Incorporating the wavelet-based analysis of the outer ear structures yields an extended, hybrid method. Performance is evaluated on test sets selected from the XM2VTS database. By these results, both in modelling and recognition, our new model-based approach does indeed appear to be a promising approach to ear biometrics; in particular, recognition performance improves notably with the incorporation of our new wavelet-based analysis. The main obstacle hindering the deployment of ear biometrics is potential occlusion by hair. A model-based approach has a further attraction here, since it offers an advantage in handling noise and occlusion; likewise, by its localization, a wavelet analysis can offer performance advantages on occluded data. A robust matching technique is also added to restrict the influence of corrupted wavelet projections.
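The robust matching technique is not specified in this abstract; one common way to restrict the influence of a few corrupted components is a trimmed distance, which discards the worst-matching fraction of the comparison before scoring. The sketch below illustrates that general idea only, and the `keep` fraction is an assumed parameter, not a value from the thesis.

```python
import math

def trimmed_distance(a, b, keep=0.75):
    """Illustrative robust match between two feature vectors.

    Element-wise absolute differences are sorted and only the
    best-matching fraction `keep` is averaged, so a few corrupted
    entries (e.g. wavelet projections over occluded regions)
    cannot dominate the score.
    """
    diffs = sorted(abs(x - y) for x, y in zip(a, b))
    n = max(1, int(len(diffs) * keep))
    return math.fsum(diffs[:n]) / n
```

For example, a single wildly corrupted component raises an ordinary mean absolute difference substantially, but with `keep=0.75` it is simply dropped from the score.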

Furthermore, our automatic enrolment is tolerant of occlusion in the ear samples. We shall present a thorough evaluation of performance under occlusion, using PCA and a robust PCA for comparison. Our hybrid method obtains promising results in recognizing occluded ears. These results confirm the validity of the approach both in modelling and recognition: by guiding a model-based analysis via anatomical knowledge, our new hybrid method does indeed appear to be a promising new approach to ear biometrics.
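As a rough illustration of the PCA baseline used for comparison (not the thesis's implementation), the leading principal component of a set of mean-centred samples can be found by power iteration on the covariance, without ever forming the covariance matrix explicitly:

```python
import math
import random

def first_pc(data, iters=200):
    """Power iteration for the leading principal component of
    mean-centred row vectors; a minimal PCA sketch for illustration.
    """
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    v = [random.random() for _ in range(d)]  # random start direction
    for _ in range(iters):
        # w = (X^T X) v, computed as X^T (X v)
        proj = [sum(c * vi for c, vi in zip(row, v)) for row in centred]
        w = [sum(p * row[j] for p, row in zip(proj, centred))
             for j in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]  # re-normalize each iteration
    return v
```

A full PCA recogniser would project gallery and probe images onto several such components and match by nearest neighbour; a robust PCA variant additionally down-weights pixels that fit the subspace poorly, which is what makes it a sensible baseline for occluded ears.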

Item Type: Thesis (Doctoral)
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Q Science > QH Natural history > QH301 Biology
Q Science > QM Human anatomy
Divisions: University Structure - Pre August 2011 > School of Electronics and Computer Science > Information - Signals, Images, Systems
ePrint ID: 72062
Date Deposited: 18 Jan 2010
Last Modified: 27 Mar 2014 18:51
URI: http://eprints.soton.ac.uk/id/eprint/72062
