University of Southampton Institutional Repository

An investigation into facial depth data for audio-visual speech recognition

Bleeck, Stefan and Ralph-Donaldson, Travis James Francis Paul (2022) An investigation into facial depth data for audio-visual speech recognition. Hearing, Audio and Audiology Sciences Meeting, Southampton, United Kingdom. 12-13 September 2022.

Record type: Conference or Workshop Item (Paper)

Abstract

Recent state-of-the-art (SOTA) audio-visual speech recognition (AVSR) systems, such as Meta's AV-HuBERT, have highlighted the superior efficacy of multi-modal speech recognition compared with audio-only implementations, especially in noisy conditions. However, planar feature extraction methods remain susceptible to variable lighting conditions and skin tones. Moreover, these AVSR systems are currently unable to map visemes (visual phonemes) to phonemes with a one-to-one correspondence. One potential avenue of research to address both of these shortcomings is the application of newer RGB-D cameras (such as Microsoft's Kinect sensor) to extract more comprehensive facial speech data that is invariant to both lighting and skin tone. Depth data also captures additional, more discriminative speech information for phonemes that involve lip protrusion, such as rounded vowels, which may allow visemes to be distinguished more accurately. The current RGB-D AVSR literature has yet to thoroughly explore the applicability of the depth modality to more challenging classification tasks, such as continuous and free speech, and has largely been limited to smaller, speaker-dependent datasets containing only isolated words or phrases. This study will investigate the depth modality's influence on speech classification using a bespoke, broadly generalisable, multi-modal, speaker-independent dataset containing both continuous and free speech, in a rigorous attempt to assess the depth modality's robustness on these more challenging classification tasks. This paper will then compare the proposed RGB-D facial dataset with current planar AVSR implementations and robustly evaluate the inherent benefits and potential shortcomings of each multi-modal AVSR system.
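For illustration only, the short Python sketch below shows one way a per-frame lip-protrusion feature could be derived from the depth frames an RGB-D sensor such as the Kinect provides, of the kind the abstract argues should help discriminate rounded vowels. The region-of-interest coordinates, the nose-bridge reference region, and the feature definition are assumptions made for this example and are not taken from the study itself.

# Hypothetical sketch: lip-protrusion feature from an aligned depth frame.
# ROI coordinates and the feature definition are illustrative assumptions.
import numpy as np

def lip_protrusion_feature(depth_frame: np.ndarray,
                           lip_roi: tuple,
                           ref_roi: tuple) -> float:
    """Mean lip-region depth relative to a stable facial reference region.

    depth_frame: HxW array of depth readings in millimetres (0 = missing).
    lip_roi:     (row_slice, col_slice) covering the mouth, e.g. from a face tracker.
    ref_roi:     (row_slice, col_slice) over a stable region such as the nose bridge.
    A more negative value means the lips sit closer to the camera than the
    reference, i.e. greater protrusion (as in rounded vowels).
    """
    lips = depth_frame[lip_roi].astype(float)
    ref = depth_frame[ref_roi].astype(float)
    lips = lips[lips > 0]   # discard missing depth readings
    ref = ref[ref > 0]
    if lips.size == 0 or ref.size == 0:
        return float("nan")
    return float(lips.mean() - ref.mean())

# Synthetic example: a flat face 500 mm from the camera with lips protruding 8 mm.
depth = np.full((480, 640), 500.0)
depth[300:340, 280:360] -= 8.0   # protruded lip region
feat = lip_protrusion_feature(depth,
                              lip_roi=(slice(300, 340), slice(280, 360)),
                              ref_roi=(slice(150, 190), slice(290, 350)))
print(f"lip protrusion feature: {feat:.1f} mm")   # approximately -8.0

Such a feature is independent of illumination and skin tone because it uses only geometry, which is the property of the depth modality the abstract highlights.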

This record has no associated files available for download.

More information

Published date: 12 September 2022
Venue - Dates: Hearing, Audio and Audiology Sciences Meeting, Southampton, United Kingdom, 2022-09-12 - 2022-09-13

Identifiers

Local EPrints ID: 477139
URI: http://eprints.soton.ac.uk/id/eprint/477139
PURE UUID: ad248cb6-fa8e-4a96-8632-0964681eb539
ORCID for Stefan Bleeck: orcid.org/0000-0003-4378-3394

Catalogue record

Date deposited: 30 May 2023 16:36
Last modified: 31 May 2023 01:38

Contributors

Author: Stefan Bleeck
Author: Travis James Francis Paul Ralph-Donaldson
