The University of Southampton
University of Southampton Institutional Repository

Audiogram estimation performance using Auditory Evoked Potentials and Gaussian Processes

Chesnaye, Michael Alexander
5f337509-3255-4322-b1bf-d4d3836b36ec
Simpson, David Martin
53674880-f381-4cc9-8505-6a97eeac3c2a
Schlittenlacher, Josef
4aa19f82-bc26-4c29-b52d-0be449ad91d2
Laugesen, Søren
6d308686-7faa-44a5-8aab-7e144a2ba919
Bell, Steve
91de0801-d2b7-44ba-8e8e-523e672aed8a

Chesnaye, Michael Alexander, Simpson, David Martin, Schlittenlacher, Josef, Laugesen, Søren and Bell, Steve (2024) Audiogram estimation performance using Auditory Evoked Potentials and Gaussian Processes. Ear and Hearing. (doi:10.1097/AUD.0000000000001570).

Record type: Article

Abstract

Objectives: Auditory evoked potentials (AEPs) play an important role in evaluating hearing in infants and others who are unable to participate reliably in behavioral testing. Discriminating the AEP from the much larger background activity, however, can be challenging and time-consuming, especially when several AEP measurements are needed, as is the case for audiogram estimation. This task is usually entrusted to clinicians, who visually inspect the AEP waveforms to determine whether a response is present or absent. The drawback is that this introduces a subjective element to the test, compromising quality control of the examination. Various objective methods have therefore been developed to aid clinicians with response detection. In recent work, the authors introduced Gaussian processes (GPs) with active learning for hearing threshold estimation using auditory brainstem responses (ABRs). The GP is attractive for this task, as it can exploit the correlation structure underlying AEP waveforms across different stimulus levels and frequencies, which is often overlooked by conventional detection methods. GPs with active learning previously proved effective for ABR hearing threshold estimation in simulations, but have not yet been evaluated for audiogram estimation in subject data. The present work evaluates GPs with active learning for ABR audiogram estimation in a sample of normal-hearing and hearing-impaired adults. This involves introducing an additional dimension to the GP (i.e., stimulus frequency) along with real-time implementations and active learning rules for automated stimulus selection.
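The approach described above — a GP over the two-dimensional stimulus space (frequency, level), with an active learning rule selecting the next stimulus — can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the response-amplitude function, the flat 30 dB threshold, the kernel lengthscales, and the maximum-posterior-variance query rule are all simplifying assumptions made here for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X1, X2, ls=np.array([1.0, 15.0])):
    """Anisotropic squared-exponential kernel over (frequency in octaves, level in dB)."""
    d = (X1[:, None, :] - X2[None, :, :]) / ls
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1))

def gp_posterior(X_obs, y_obs, X_grid, noise=0.05):
    """Exact GP regression: posterior mean and variance at the grid points."""
    K = rbf_kernel(X_obs, X_obs) + noise ** 2 * np.eye(len(X_obs))
    Ks = rbf_kernel(X_obs, X_grid)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # prior variance (1.0) minus explained part
    return mu, np.maximum(var, 0.0)

def true_amplitude(level, thresh=30.0):
    """Hypothetical response strength: zero below threshold, ramping up above it."""
    return np.clip((level - thresh) / 40.0, 0.0, 1.0)

# Stimulus grid: 4 frequencies (octaves re a base frequency) x levels 0..60 dB in 5 dB steps
freqs = np.arange(4.0)
levels = np.arange(0.0, 65.0, 5.0)
X_grid = np.array([(f, l) for f in freqs for l in levels])

# Seed measurements at the level extremes, then let active learning choose the rest
X_obs = np.array([(f, l) for f in freqs for l in (0.0, 60.0)])
y_obs = true_amplitude(X_obs[:, 1]) + rng.normal(0.0, 0.05, len(X_obs))

for _ in range(20):
    mu, var = gp_posterior(X_obs, y_obs, X_grid)
    nxt = X_grid[np.argmax(var)]  # query the stimulus with the largest posterior variance
    y_new = true_amplitude(nxt[1]) + rng.normal(0.0, 0.05)
    X_obs = np.vstack([X_obs, nxt])
    y_obs = np.append(y_obs, y_new)

# Threshold per frequency: lowest level whose posterior mean exceeds a detection criterion
mu, _ = gp_posterior(X_obs, y_obs, X_grid)
mu = mu.reshape(len(freqs), len(levels))
thresholds = [levels[np.argmax(mu[i] >= 0.1)] for i in range(len(freqs))]
print(thresholds)
```

Because the kernel correlates neighboring frequencies and levels, each measurement informs the whole surface, which is what allows the active learning rule to concentrate measurements near the threshold contour rather than sampling the grid exhaustively.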

Methods: The GP’s accuracy was evaluated using the “hearing threshold estimation error,” defined as the difference between the GP-estimated hearing threshold and the behavioral hearing threshold to the same stimuli. Test time was evaluated using the number of preprocessed and artifact-free epochs (i.e., the sample size) required for locating the hearing threshold at each frequency. Comparisons were drawn with visual inspection by examiners who followed strict guidelines provided by the British Society of Audiology. Twenty-two normal-hearing and nine hearing-impaired adults were tested (one ear per subject). For each subject, the audiogram was estimated three times: once using the GP approach, once using visual inspection by examiners, and once using a standard behavioral hearing test.
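The estimation-error metric reduces to a per-frequency difference between the two thresholds, summarized by its median. A short illustration with hypothetical thresholds (not data from the study):

```python
import numpy as np

# Hypothetical per-frequency thresholds in dB HL, for illustration only
gp_threshold = {500: 25, 1000: 30, 2000: 35, 4000: 40}
behavioral_threshold = {500: 25, 1000: 35, 2000: 30, 4000: 40}

# Estimation error per frequency: GP-estimated minus behavioral threshold
errors = {f: gp_threshold[f] - behavioral_threshold[f] for f in gp_threshold}
median_error = float(np.median(list(errors.values())))
print(errors, median_error)  # a median near 0 dB HL indicates an unbiased test
```

A signed (rather than absolute) error is what makes the median interpretable as bias: over- and under-estimates at different frequencies can cancel, so a median of 0 dB HL means the GP neither systematically inflates nor deflates thresholds.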

Results: The GP’s median estimation error was approximately 0 dB hearing level (dB HL), demonstrating unbiased test performance relative to the behavioral hearing thresholds. The GP additionally reduced test time by approximately 50% relative to the examiners. The hearing thresholds estimated by the examiners were 5 to 15 dB HL higher than the behavioral thresholds, which was consistent with the literature. Further testing is still needed to determine the extent to which these results generalize to the clinic.

Conclusions: GPs with active learning enable automatic, real-time ABR audiogram estimation with relatively low test time and high accuracy. The GP could be used to automate ABR audiogram estimation or to guide clinicians with this task, who may choose to override the GP’s decisions if deemed necessary. Results suggest that GPs hold potential for next-generation ABR hearing threshold and audiogram-seeking devices.

Text
audiogram_estimation_performance_using_auditory.344 - Version of Record
Available under License Creative Commons Attribution.
Download (1MB)

More information

e-pub ahead of print date: 12 September 2024
Keywords: Active learning, Audiogram estimation, Auditory brainstem responses, Gaussian processes

Identifiers

Local EPrints ID: 495042
URI: http://eprints.soton.ac.uk/id/eprint/495042
ISSN: 0196-0202
PURE UUID: 2b2db2d3-3a51-4225-b194-f65bd43665cd
ORCID for David Martin Simpson: orcid.org/0000-0001-9072-5088

Catalogue record

Date deposited: 28 Oct 2024 17:47
Last modified: 12 Nov 2024 02:39

Contributors

Author: Michael Alexander Chesnaye
Author: David Martin Simpson
Author: Josef Schlittenlacher
Author: Søren Laugesen
Author: Steve Bell
