University of Southampton Institutional Repository

End-to-end classification of reverberant rooms using DNNs

Papayiannis, Constantinos, Evers, Christine and Naylor, Patrick A. (2020) End-to-end classification of reverberant rooms using DNNs. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28, 3010-3017, [9239871]. (doi:10.1109/TASLP.2020.3033628).

Record type: Article

Abstract

Reverberation is present in our workplaces, our homes, concert halls and theatres. This article investigates how deep learning can use the effect of reverberation on speech to classify a recording in terms of the room in which it was recorded. Existing approaches in the literature rely on domain expertise to manually select acoustic parameters as inputs to classifiers, and estimating these parameters from reverberant speech introduces errors that degrade the classification accuracy. To overcome the limitations of previously proposed methods, this paper shows how DNNs can perform the classification by operating directly on reverberant speech spectra, and a CRNN with an attention mechanism is proposed for the task. The relationship between the reverberant speech representations learned by the DNNs and acoustic parameters is also investigated. For evaluation, acoustic impulse responses (AIRs) measured in 7 real rooms are taken from the ACE Challenge dataset. In the experiments, the CRNN classifier achieves a classification accuracy of 78% when using 5 hours of training data and 90% when using 10 hours.
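
To make the proposed architecture concrete, the sketch below shows a CRNN with frame-level attention that maps a reverberant speech spectrogram to one of 7 room classes, as the abstract describes. It is a minimal illustration only: the PyTorch framing, the layer sizes, the input dimensions and all names are assumptions for the sketch, not the configuration reported in the paper.

```python
# Illustrative sketch (not the authors' implementation) of a CRNN with
# attention that classifies reverberant speech spectra into 7 rooms.
import torch
import torch.nn as nn

class CRNNRoomClassifier(nn.Module):
    def __init__(self, n_freq_bins=161, n_rooms=7):
        super().__init__()
        # Convolutional front end: learns local time-frequency patterns
        # (e.g. reverberant energy decays) from the input spectrogram.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((2, 1)),          # pool along frequency only
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        feat_dim = 32 * (n_freq_bins // 4)
        # Recurrent layer: models how reverberation evolves across frames.
        self.gru = nn.GRU(feat_dim, 64, batch_first=True, bidirectional=True)
        # Additive attention: one weight per frame, used to pool the
        # frame-level features into a single utterance-level vector.
        self.attn = nn.Linear(128, 1)
        self.out = nn.Linear(128, n_rooms)

    def forward(self, spec):                 # spec: (batch, freq, time)
        x = self.conv(spec.unsqueeze(1))     # (batch, ch, freq', time)
        b, c, f, t = x.shape
        x = x.permute(0, 3, 1, 2).reshape(b, t, c * f)   # (batch, time, feat)
        h, _ = self.gru(x)                   # (batch, time, 128)
        w = torch.softmax(self.attn(h), dim=1)   # per-frame attention weights
        pooled = (w * h).sum(dim=1)          # attention-weighted summary
        return self.out(pooled)              # room-class logits

# Example: a batch of two utterances (161 frequency bins x 300 frames).
model = CRNNRoomClassifier()
logits = model(torch.randn(2, 161, 300))
print(logits.shape)  # torch.Size([2, 7])
```

In this sketch the attention layer lets the utterance-level embedding emphasise the frames that are most informative about the room, for example speech offsets where the reverberant tail is exposed.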

Text: jrnl - Accepted Manuscript (995 kB)

More information

Accepted/In Press date: 27 September 2020
e-pub ahead of print date: 26 October 2020
Published date: 2020
Additional Information: Funding Information: Manuscript received November 23, 2019; revised June 28, 2020 and September 26, 2020; accepted September 27, 2020. Date of publication October 26, 2020; date of current version November 21, 2020. This work received support from the UK EPSRC Fellowship Grant EP/P001017/1, awarded to C. Evers while at Imperial College London. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Stefan Bilbao (Corresponding author: Constantinos Papayiannis.) Constantinos Papayiannis was with the Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, United Kingdom of Great Britain and Northern Ireland. He is now with Amazon Alexa, Cambridge, MA 02138 USA (e-mail: papayiac@amazon.com). Publisher Copyright: © 2014 IEEE.
Keywords: Attention mechanisms, convolutional recurrent neural networks, deep neural networks, reverberant speech classification, reverberation, room acoustics, room classification

Identifiers

Local EPrints ID: 444923
URI: http://eprints.soton.ac.uk/id/eprint/444923
ISSN: 2329-9304
PURE UUID: 2475ba50-7f6a-4f17-96e1-1fcb86717a1c
ORCID for Christine Evers: orcid.org/0000-0003-0757-5504

Catalogue record

Date deposited: 12 Nov 2020 17:30
Last modified: 17 Mar 2024 04:01

Contributors

Author: Constantinos Papayiannis
Author: Christine Evers
Author: Patrick A. Naylor
