University of Southampton Institutional Repository

Domain adversarial neural networks for dysarthric speech recognition

Millard, David
4f19bca5-80dc-4533-a101-89a5a0e3b372
Woszczyk, Dominika
e77e4413-5aad-4f46-b396-530502ab6f86
Petridis, Stavros
922b0dcf-62b3-4bf6-be78-819f5535b92c

Millard, David, Woszczyk, Dominika and Petridis, Stavros (2020) Domain adversarial neural networks for dysarthric speech recognition. (In Press)

Record type: Conference or Workshop Item (Paper)

Abstract

Speech recognition systems have improved dramatically over the last few years; however, their performance degrades significantly for accented or impaired speech. This work explores domain adversarial neural networks (DANN) for speaker-independent speech recognition on the UAS dataset of dysarthric speech. The classification task on 10 spoken digits is performed using an end-to-end CNN that takes raw audio as input. The results are compared to a speaker-adaptive (SA) model as well as speaker-dependent (SD) and multi-task learning (MTL) models. The experiments conducted in this paper show that DANN achieves an absolute recognition rate of 74.91% and outperforms the baseline by 12.18%. Additionally, the DANN model achieves results comparable to the SA model's recognition rate of 77.65%. We also observe that when labelled dysarthric speech data is available, DANN and MTL perform similarly, but when it is not, DANN outperforms MTL.
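DANN-style domain adversarial training is typically implemented with a gradient reversal layer: the layer acts as the identity in the forward pass, but flips the sign of (and scales) the gradient flowing back from the domain classifier, pushing the shared encoder toward domain-invariant features. The following is a minimal illustrative sketch of that mechanism, not code from the paper; the class name and the scaling factor `lam` (the usual lambda hyperparameter) are assumptions for illustration.

```python
class GradientReversal:
    """Gradient reversal layer sketch (identity forward, sign-flipped backward).

    In a DANN, this sits between the shared feature encoder and the
    domain classifier, so minimising the domain loss *maximises* domain
    confusion in the encoder.
    """

    def __init__(self, lam=1.0):
        # lam scales how strongly the adversarial signal is applied
        self.lam = lam

    def forward(self, x):
        # Forward pass: pass features through unchanged
        return x

    def backward(self, grad_output):
        # Backward pass: reverse and scale the incoming gradient
        return [-self.lam * g for g in grad_output]


# Usage: features are untouched going forward; gradients are reversed.
layer = GradientReversal(lam=0.5)
features = [1.0, 2.0, 3.0]
print(layer.forward(features))            # features unchanged
print(layer.backward([0.2, -0.4, 0.6]))   # gradient flipped and scaled
```

In autodiff frameworks this is usually realised as a custom op with an overridden backward pass; the list-based version above only shows the forward/backward contract.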

Text
Domain adversarial neural networks for dysarthric speech recognition - Accepted Manuscript
Download (497kB)

More information

Accepted/In Press date: 7 October 2020

Identifiers

Local EPrints ID: 446332
URI: http://eprints.soton.ac.uk/id/eprint/446332
PURE UUID: 8e26723a-e3da-4ef0-ac86-dcb5e27254f0
ORCID for David Millard: orcid.org/0000-0002-7512-2710

Catalogue record

Date deposited: 04 Feb 2021 17:33
Last modified: 17 Mar 2024 02:46


Contributors

Author: David Millard
Author: Dominika Woszczyk
Author: Stavros Petridis


