University of Southampton Institutional Repository

End-to-end classification of reverberant rooms using DNNs

Papayiannis, Constantinos, Evers, Christine and Naylor, Patrick A. (2020) End-to-end classification of reverberant rooms using DNNs. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 1-8. (doi:10.1109/TASLP.2020.3033628).

Record type: Article

Abstract

Reverberation is present in our workplaces, our homes, concert halls and theatres. This paper investigates how deep learning can use the effect of reverberation on speech to classify a recording in terms of the room in which it was recorded. Existing approaches in the literature rely on domain expertise to manually select acoustic parameters as inputs to classifiers. Estimation of these parameters from reverberant speech is adversely affected by estimation errors, impacting the classification accuracy. To overcome the limitations of previously proposed methods, this paper shows how deep neural networks (DNNs) can perform the classification by operating directly on reverberant speech spectra, and a convolutional recurrent neural network (CRNN) with an attention mechanism is proposed for the task. The relationship between the reverberant speech representations learned by the DNN and acoustic parameters is investigated. For evaluation, acoustic impulse responses (AIRs) from the ACE Challenge dataset, measured in 7 real rooms, are used. The classification accuracy of the CRNN classifier in the experiments is 78% when using 5 hours of training data and 90% when using 10 hours.
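For readers who want a concrete picture of the approach described above, the sketch below shows one way a CRNN with attention pooling over time frames could be implemented in PyTorch. All layer sizes, the pooling scheme, and the input shape (batched log-magnitude spectrograms) are illustrative assumptions, not the authors' exact architecture; only the overall structure (convolutional front end, recurrent layer, attention-weighted pooling, 7-room softmax output) follows the abstract's description.

```python
import torch
import torch.nn as nn

class CRNNRoomClassifier(nn.Module):
    """Illustrative CRNN with attention pooling for room classification.

    Layer sizes are assumptions for this sketch, not those of the paper.
    Input: log-magnitude spectrograms of shape (batch, 1, freq_bins, frames).
    """

    def __init__(self, n_rooms: int = 7, n_freq: int = 128):
        super().__init__()
        # Convolutional front end: local spectro-temporal feature extraction.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((2, 1)),  # pool over frequency only, keep time resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        rnn_in = 64 * (n_freq // 4)
        # Recurrent layer: models the temporal evolution of the reverberant signal.
        self.gru = nn.GRU(rnn_in, 128, batch_first=True, bidirectional=True)
        # Attention: a learned weighting over time frames before pooling.
        self.attn = nn.Linear(256, 1)
        self.out = nn.Linear(256, n_rooms)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv(x)                                 # (B, C, F', T)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)   # (B, T, C*F')
        h, _ = self.gru(h)                               # (B, T, 256)
        w = torch.softmax(self.attn(h), dim=1)           # (B, T, 1) frame weights
        z = (w * h).sum(dim=1)                           # attention-weighted pooling
        return self.out(z)                               # (B, n_rooms) logits

if __name__ == "__main__":
    model = CRNNRoomClassifier()
    spec = torch.randn(4, 1, 128, 200)  # 4 spectrograms, 128 bins, 200 frames
    print(model(spec).shape)            # torch.Size([4, 7])
```

The softmax attention gives the network a learned pooling over time, letting it emphasise frames that carry the strongest reverberation cues, which is the role the attention mechanism plays in the abstract's description of the proposed classifier.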

Full text: Accepted Manuscript (995kB)

More information

Accepted/In Press date: 27 September 2020
e-pub ahead of print date: 26 October 2020

Identifiers

Local EPrints ID: 444923
URI: http://eprints.soton.ac.uk/id/eprint/444923
ISSN: 2329-9304
PURE UUID: 2475ba50-7f6a-4f17-96e1-1fcb86717a1c
ORCID for Christine Evers: orcid.org/0000-0003-0757-5504

Catalogue record

Date deposited: 12 Nov 2020 17:30
Last modified: 18 Feb 2021 17:41

