University of Southampton Institutional Repository

On distinctiveness in ear biometrics


Meng, Di (2021) On distinctiveness in ear biometrics. University of Southampton, Doctoral Thesis, 133pp.

Record type: Thesis (Doctoral)

Abstract

Ear biometrics has developed rapidly in the last decade. Ears have distinct advantages over the face and fingerprints: their structure is invariant over time, and ear images can be captured without the subject's participation. There are also challenges when ears are used as a biometric, such as rotation variance, varying illumination and occlusion by hair, and these problems have hindered wider application. Previous work shows that human ears can be used for identification, gender classification and age classification. In this thesis, we propose a new model-based approach to ear biometrics, built on geometric features derived from ear anatomy. The keypoints of our model are determined by the scale-invariant feature transform (SIFT), and we handle rotation of ear images under an affine transformation by modelling the ear as a flat plane attached to the head. We then extend our model with an image pre-processing step that uses the force field transform to remove noise.
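The planar-ear assumption above can be sketched as follows (a minimal illustration, not the thesis's implementation; the point set, rotation angle and function names are hypothetical): points lying on a flat plane, rigidly rotated in 3D and projected orthographically, map to their original image positions via a 2D affine transform, which can be recovered from keypoint correspondences by least squares.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src keypoints onto dst.
    src, dst: (N, 2) arrays of corresponding keypoint coordinates."""
    n = src.shape[0]
    # Homogeneous design matrix [x y 1] for each keypoint
    A = np.hstack([src, np.ones((n, 1))])
    # Solve A @ M.T ~= dst for the affine matrix M
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T  # shape (2, 3)

# Toy "ear plane": keypoints on z = 0, rotated about the vertical (y) axis.
# Under orthographic projection this rotation scales x by cos(theta).
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.3]])
theta = np.deg2rad(20)
proj = np.array([[np.cos(theta), 0.0], [0.0, 1.0]])
rotated = pts @ proj.T

M = fit_affine(pts, rotated)
pred = np.hstack([pts, np.ones((len(pts), 1))]) @ M.T
print(np.allclose(pred, rotated, atol=1e-8))  # residual is ~0: an affine map fits exactly
```

Because the ear is treated as flat, the out-of-plane head rotation leaves no residual that an affine model cannot absorb, which is the premise behind this modelling choice.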
We apply the model and fine-tuned convolutional neural networks to ear recognition, gender classification and ear symmetry. For ear symmetry, we address the question of whether, given an image of one ear, a person can then be recognized from his or her other ear. Such a symmetry-based strategy could relax constraints on applications of ear biometrics. To investigate symmetry, we compare one ear with a mirrored version of the other.
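The mirror-comparison step can be sketched like this (an illustrative toy, not the thesis's matcher; the similarity measure and array sizes are assumptions): flip the right-ear image horizontally, then score its similarity to the left-ear image, here with zero-mean normalized cross-correlation.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two equal-size images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def mirror_similarity(left_ear, right_ear):
    """Compare the left ear with a horizontally mirrored right ear."""
    return ncc(left_ear, np.fliplr(right_ear))

# Toy example: an ideally symmetric pair scores 1.0
rng = np.random.default_rng(0)
left = rng.random((32, 24))
right = np.fliplr(left)  # the right ear as a perfect mirror of the left
print(round(mirror_similarity(left, right), 6))  # → 1.0
```

In practice the two ears are only approximately symmetric, so a real system would threshold this score (or a learned equivalent) rather than expect a perfect match.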
In addition, we consider which parts of the ear are important for ear recognition, gender classification and bilateral symmetry; in all three cases we aim to determine the ear parts from which recognition is derived. For the model-based analysis, we use the accuracies achieved by different ear regions to evaluate which parts are most significant for ear recognition, gender classification and ear symmetry. Moreover, we are the first to apply heatmaps to ear images to determine the contributions of different parts of the ear, and this is the first study to analyse the differences between male and female ears in this respect. We also compare the model-based method with deep learning, together with the contributions of different parts under each approach.
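One generic way to produce such contribution maps is occlusion sensitivity (a sketch under stated assumptions, not the thesis's heatmap method; `score_fn` and the patch size are illustrative): mask each region of the image in turn and record how much the model's score drops.

```python
import numpy as np

def occlusion_heatmap(image, score_fn, patch=8):
    """Heatmap of each region's contribution to a scorer: occlude one
    patch at a time and record the drop in score_fn (image -> float)."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - h % patch, patch):
        for j in range(0, w - w % patch, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0  # occlude one region
            heat[i // patch, j // patch] = base - score_fn(masked)
    return heat

# Toy scorer that relies only on the top-left quadrant of the image
img = np.ones((32, 32))
score = lambda x: x[:16, :16].sum()
heat = occlusion_heatmap(img, score)
print(heat)  # only the four top-left cells show a positive score drop
```

Regions whose occlusion barely changes the score contribute little to the decision; for ears, this kind of map localizes which anatomical parts drive recognition.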
Furthermore, we are the first to exploit the ear for kinship verification, and we collect the SOTEAR dataset for the kinship verification experiments. We compare the influence of the father with that of the mother through kinship verification accuracies.

Text
Di_Meng_PhD_Thesis - Version of Record
Available under License University of Southampton Thesis Licence.
Download (4MB)

More information

Published date: 2021

Identifiers

Local EPrints ID: 498219
URI: http://eprints.soton.ac.uk/id/eprint/498219
PURE UUID: edb1a99e-4b1b-4c01-82d5-12bb98e76bb0
ORCID for Mark Nixon: orcid.org/0000-0002-9174-5934

Catalogue record

Date deposited: 12 Feb 2025 17:47
Last modified: 22 Aug 2025 01:33


Contributors

Author: Di Meng
Thesis advisor: Sasan Mahmoodi
Thesis advisor: Mark Nixon
