University of Southampton Institutional Repository

Data Corpus for the IEEE-AASP Challenge on Acoustic Source Localization and Tracking (LOCATA)


Evers, Christine, Loellmann, Heinrich W., Mellmann, Heinrich, Schmidt, Alexander, Barfuss, Hendrik, Naylor, Patrick A. and Kellermann, Walter (2020) Data Corpus for the IEEE-AASP Challenge on Acoustic Source Localization and Tracking (LOCATA). Zenodo doi:10.5281/zenodo.3630471 [Dataset]

Record type: Dataset

Abstract

The Zenodo repository contains the final release of the development and evaluation datasets for the LOCATA Challenge. The challenge of sound source localization in realistic environments has attracted widespread attention in the Audio and Acoustic Signal Processing (AASP) community in recent years. Source localization approaches in the literature address the estimation of positional information about acoustic sources using a pair of microphones, microphone arrays, or networks of distributed acoustic sensors. The IEEE AASP Challenge on acoustic source LOCalization And TrAcking (LOCATA) aimed to provide researchers in source localization and tracking with a framework to objectively benchmark results against competing algorithms, using a common, publicly released data corpus that encompasses a range of realistic scenarios in an enclosed acoustic environment. Four different microphone arrays were used for the recordings:

- Planar array with 15 channels (DICIT array), containing uniform linear sub-arrays
- Spherical array with 32 channels (Eigenmike)
- Pseudo-spherical array with 12 channels (robot head)
- Hearing aid dummies on a dummy head (2 channels per hearing aid)

An optical tracking system (OptiTrack) was used to record the positions and orientations of the talkers, loudspeakers and microphone arrays. Moreover, the emitted source signals were recorded to determine voice activity periods in the recorded signals for each source separately. The ground-truth values are compared against the estimates submitted by the participants using several criteria to evaluate the accuracy of the estimated directions of arrival and of the track-to-source association.
The datasets encompass the following six, increasingly challenging, scenarios:

- Task 1: Localization of a single, static loudspeaker using static microphone arrays
- Task 2: Multi-source localization of static loudspeakers using static microphone arrays
- Task 3: Localization of a single, moving talker using static microphone arrays
- Task 4: Localization of multiple, moving talkers using static microphone arrays
- Task 5: Localization of a single, moving talker using moving microphone arrays
- Task 6: Multi-source localization of moving talkers using moving microphone arrays

The development and evaluation datasets in this repository contain the following data:

- Close-talking speech signals for the human talkers, recorded using DPA microphones
- Distant-talking recordings using four microphone arrays:
  - Spherical Eigenmike (32 channels)
  - Pseudo-spherical prototype NAO robot (12 channels)
  - Planar DICIT array (15 channels)
  - Hearing aids installed in a head-torso simulator (4 channels)
- Ground-truth annotations of all source and microphone positions, obtained using an OptiTrack system of infrared cameras. The ground-truth positions are provided at the frame rate of the optical tracking system.

The following software is provided with the data:

- Matlab code to read the datasets: github.com/cevers/sap_locata_io
- Matlab code for performance evaluation of localization and tracking algorithms: github.com/cevers/sap_locata_eval

For further information, see: C. Evers, H. W. Löllmann, H. Mellmann, A. Schmidt, H. Barfuss, P. A. Naylor, W. Kellermann, "The LOCATA Challenge: Acoustic Source Localization and Tracking", submitted to IEEE/ACM Transactions on Audio, Speech and Language Processing. arXiv: https://arxiv.org/abs/1909.01008

Documentation: locata.lms.tf.fau.de/wp-content/uploads/sites/10/2020/01/Documentation_LOCATA_final_release_V1.pdf
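The evaluation compares the participants' direction-of-arrival estimates against directions derived from the OptiTrack positional ground truth. As a minimal sketch of that geometry (assuming source and array positions are 3-D points in a common Cartesian world frame; the function name and axis convention below are illustrative and not part of the released evaluation code):

```python
import math

def direction_of_arrival(source_pos, array_pos):
    """Azimuth and elevation (radians) of a source as seen from an array origin.

    Both positions are 3-D points in the same world frame, as the optical
    tracking ground truth provides. The array's own orientation is ignored
    here for brevity; with the real data, the recorded array rotation would
    be applied to the difference vector first.
    """
    vx, vy, vz = (s - a for s, a in zip(source_pos, array_pos))
    r = math.sqrt(vx * vx + vy * vy + vz * vz)
    azimuth = math.atan2(vy, vx)    # angle in the horizontal (x-y) plane
    elevation = math.asin(vz / r)   # angle above the horizontal plane
    return azimuth, elevation

# A source 1 m ahead and 1 m to the side of the array, at the same height:
az, el = direction_of_arrival((1.0, 1.0, 0.0), (0.0, 0.0, 0.0))
# azimuth = pi/4 (45 degrees), elevation = 0
```

An angular error metric would then compare such ground-truth angles with the submitted estimates, frame by frame, after associating each estimated track with a source.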

This record has no associated files available for download.

More information

Published date: 31 January 2020

Identifiers

Local EPrints ID: 438554
URI: http://eprints.soton.ac.uk/id/eprint/438554
PURE UUID: 156e4897-825c-48fd-90f4-a8b82ccdc40a
ORCID for Christine Evers: orcid.org/0000-0003-0757-5504

Catalogue record

Date deposited: 16 Mar 2020 17:36
Last modified: 20 Jan 2024 03:09

Contributors

Creator: Christine Evers
Creator: Heinrich W. Loellmann
Creator: Heinrich Mellmann
Creator: Alexander Schmidt
Creator: Hendrik Barfuss
Creator: Patrick A. Naylor
Creator: Walter Kellermann

