University of Southampton Institutional Repository

Model-based feature extraction and classification for automatic face recognition

University of Southampton
Benn, David A
94de5b05-38c8-457b-8165-5f625e78d38d

Benn, David A (2000) Model-based feature extraction and classification for automatic face recognition. University of Southampton, Doctoral Thesis.

Record type: Thesis (Doctoral)

Abstract

Recognising faces through model-based feature extraction and description currently appears to be less popular than statistical or face-based recognition approaches. Certainly there is concern that model-based approaches might not prove reliable in practice. Accordingly, this thesis describes a programme of research for improving model-based recognition through robust feature extraction, selection and combination. First, we present a new two-stage process for finding eyes. A reformulated evidence-gathering process determines the rough location of the eyes by exploiting their natural concentricity. Their location is then refined by an improved deformable eye template which requires no internal energy terms and uses few parameters; these parameters were best optimised using a genetic algorithm. The technique produced successful location rates of 91% and 93% on face databases of 1000 and 88 faces, respectively. A feature vector composed of 29 geometric, 6 colour and 55 forehead-contour measures was extracted from 44 faces from the XM2VTS database. To achieve this, the skin boundary was extracted by region growing from a sample of skin below the eyes. Other features, such as the nose, mouth and eyebrows, were then located by noting that they are enclosed by skin but exhibit different statistical properties.
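The evidence-gathering stage described above exploits the fact that the pupil and iris are roughly concentric circles, so edge points from both cast Hough votes for the same centre. The following is a minimal illustrative sketch of that idea, not the thesis's reformulated algorithm: edge points vote for candidate centres over several radii in a single accumulator, and concentric structures reinforce one peak.

```python
import numpy as np

def concentric_circle_votes(edge_points, shape, radii):
    """Accumulate Hough votes for circle centres over several radii.

    Concentric structures (pupil within iris) cast votes for the same
    centre at different radii, so a single accumulator summed over radii
    reinforces the eye centre. Illustrative sketch only.
    """
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    for (y, x) in edge_points:
        for r in radii:
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
            np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Edge points sampled from two concentric circles around (50, 50):
rng = np.random.default_rng(0)
pts = []
for r in (8, 16):
    for t in rng.uniform(0, 2 * np.pi, 80):
        pts.append((50 + r * np.sin(t), 50 + r * np.cos(t)))
acc = concentric_circle_votes(pts, (100, 100), radii=(8, 16))
peak = np.unravel_index(np.argmax(acc), acc.shape)
print(peak)  # centre estimate, close to (50, 50)
```

The image size, radii and vote discretisation here are arbitrary choices for the demonstration; a practical detector would search a range of radii derived from the expected eye scale.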

A new method, based on intrinsic feature variance, is presented for combining and selecting features of potentially disparate magnitude and/or from independent sources. Our method increased the variance in the classification matrix and facilitated identification of the most discriminating features. Surprisingly, although the eyes were a good initialiser in the search for other face features, their template parameters offered low discriminatory power. Much higher discriminatory power was available from the normalised Fourier descriptors of the forehead contour. We simulated the effect of measurement noise on classification performance and found that errors of 6 pixels in the geometric features resulted in up to 43% classification error. Recognition rates of 77% and 72% were achieved using manual and automatic geometric measures, respectively. However, when we combined the geometric measures with perfectly extracted contour measures, taken from the contour's first eight Fourier descriptors, we achieved 100% classification.
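Combining features of disparate magnitude, as described above, generally requires scaling each feature by its intrinsic (within-class) variance so that large-magnitude features do not dominate. The sketch below illustrates the general idea with a Fisher-style between/within variance ratio; this is a generic textbook criterion for illustration, not the thesis's exact method.

```python
import numpy as np

def variance_scaled_features(X, y):
    """Scale features by pooled within-class variance and rank them by
    a between-class / within-class variance ratio.

    X: (n_samples, n_features) feature matrix; y: class labels.
    Generic Fisher-style criterion, sketched for illustration.
    """
    classes = np.unique(y)
    mu = X.mean(axis=0)
    within = np.zeros(X.shape[1])
    between = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
        between += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
    within /= len(X) - len(classes)          # pooled within-class variance
    scaled = (X - mu) / np.sqrt(within)      # comparable magnitudes
    ranking = np.argsort(between / within)[::-1]  # most discriminating first
    return scaled, ranking

# Feature 0 separates the two classes; feature 1 is large-magnitude noise:
rng = np.random.default_rng(1)
X = np.column_stack([
    np.concatenate([rng.normal(0, 1, 50), rng.normal(5, 1, 50)]),
    rng.normal(0, 100, 100),
])
y = np.array([0] * 50 + [1] * 50)
scaled, ranking = variance_scaled_features(X, y)
print(ranking[0])  # the discriminating feature ranks first
```

Note how the noise feature's raw magnitude (standard deviation 100) would swamp a naive distance-based classifier, while after variance scaling both features contribute comparably and the ranking exposes which one actually discriminates.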

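The forehead-contour measures above rest on normalised Fourier descriptors of a boundary. A common normalisation, sketched below, treats the contour as a complex sequence, drops the DC term for translation invariance, divides by the first harmonic's magnitude for scale invariance, and keeps magnitudes only for rotation and start-point invariance. This is the standard textbook recipe; the thesis's exact normalisation may differ.

```python
import numpy as np

def normalised_fourier_descriptors(contour, n_desc=8):
    """Translation-, scale- and rotation-invariant Fourier descriptors.

    contour: (N, 2) array of (x, y) boundary points.
    Standard normalisation sketch: zero the DC term (translation),
    divide by the first harmonic's magnitude (scale), keep magnitudes
    only (rotation / start point).
    """
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)
    F[0] = 0.0                       # remove translation component
    mags = np.abs(F) / np.abs(F[1])  # normalise scale by first harmonic
    return mags[1:1 + n_desc]        # first n_desc invariant descriptors

# Descriptors are unchanged when the contour is shifted and scaled:
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
moved = 3.0 * circle + np.array([10.0, -4.0])
d1 = normalised_fourier_descriptors(circle)
d2 = normalised_fourier_descriptors(moved)
print(np.allclose(d1, d2))  # True
```

Keeping only the first eight descriptors, as in the abstract's final experiment, retains the low-frequency shape of the contour while discarding fine detail and noise.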
Text
755024.pdf - Version of Record
Available under License University of Southampton Thesis Licence.
Download (24MB)

More information

Published date: 2000

Identifiers

Local EPrints ID: 464142
URI: http://eprints.soton.ac.uk/id/eprint/464142
PURE UUID: b7054979-fba3-40cf-8d94-87f482be505f

Catalogue record

Date deposited: 04 Jul 2022 21:20
Last modified: 16 Mar 2024 19:18

Contributors

Author: David A Benn

