University of Southampton Institutional Repository

Factors affecting speech recognition in noise and hearing loss in adults with a wide variety of auditory capabilities

Athalye, S.A. (2010) Factors affecting speech recognition in noise and hearing loss in adults with a wide variety of auditory capabilities. University of Southampton, Institute of Sound and Vibration Research, Doctoral Thesis, 289pp.

Record type: Thesis (Doctoral)

Abstract

Studies of speech recognition in noise span a very broad spectrum of work, including the
cocktail party effect, the performance of individuals with different types of speech signal or
noise, and the benefit and improvement obtained with hearing aids.
Another important area that has received much attention is investigating the inter-relations
among various auditory and non-auditory capabilities affecting speech intelligibility. Those
studies have focussed on the relationship between auditory threshold (hearing sensitivity) and a
number of suprathreshold abilities such as speech recognition in quiet and noise, frequency
resolution and temporal resolution, as well as the non-auditory ability of cognition.
There is considerable disagreement in the literature regarding the relationship between speech
recognition in noise and hearing threshold level. Some studies conclude that speech recognition
performance in noise can be predicted solely from an individual’s hearing threshold level, while
others conclude that suprathreshold factors such as frequency and/or temporal resolution must
also play a role. Hearing loss involves more than deficits in recognising speech in noise, raising
the question whether hearing impairment is a uni- or multi-dimensional construct. Moreover,
different extents of hearing loss may display different relationships among measures of hearing
ability, or different dimensionality.
The present thesis attempts to address these three issues by examining a wide range of hearing
abilities in large samples of participants whose hearing ranged from normal to moderate-severe
impairment. The research extends previous work by including larger samples
of participants, a wider range of measures of hearing ability and by differentiating among levels
of hearing impairment.
Method: Two large multi-centre studies were conducted, involving 103 and 128 participants
respectively. A large battery of tests was devised and refined prior to the main studies and
implemented on a common PC-based platform. The test domains included measurement of
hearing sensitivity, speech recognition in quiet and noise, loudness perception, frequency
resolution, temporal resolution, binaural hearing and localization, cognition and subjective
measures like listening effort and self-report of hearing disability. Performance tests involved
presentation of sounds via circum-aural earphones to one or both ears, as required, at intensities
matched to individual hearing impairments to ensure audibility. Most tests involved
measurements centred on a low frequency (500 Hz), a high frequency (3000 Hz) and a broadband condition.
The second study included some refinements based on analysis of the first study. Analyses
included multiple regression for prediction of speech recognition in stationary or fluctuating
noise and factor analysis to explore the dimensionality of the data. Speech recognition
performance was also compared with that predicted using the Speech Intelligibility Index (SII).
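
As an illustration of the regression step just described, the following Python sketch shows how speech recognition in noise might be regressed on audiometric and psychoacoustic predictors. The file name and column names (study_data.csv, hta_3k, fres_500, srt_noise) are hypothetical placeholders, not the variables used in the thesis.

```python
# Illustrative sketch of a multiple-regression analysis of speech recognition
# in noise. File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("study_data.csv")

# Predictors: hearing threshold level at 3 kHz and frequency resolution at 500 Hz.
X = sm.add_constant(df[["hta_3k", "fres_500"]])
# Outcome: speech reception threshold in noise (dB SNR).
y = df["srt_noise"]

model = sm.OLS(y, X, missing="drop").fit()
print(model.summary())  # coefficients, R-squared and p-values
```

A fuller analysis would pass the complete set of candidate predictors from the test battery through the same machinery and compare the resulting models.
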
Findings: Findings from regression analysis pooled across the two studies showed that speech
recognition in noise can be predicted from a combination of hearing threshold at higher
frequencies (3000/4000 Hz) and frequency resolution at low frequency (500 Hz). This supports
previous studies that conclude that resolution is important in addition to hearing sensitivity. This
was also confirmed by the fact that the SII (representing sensitivity rather than resolution) under-predicted
difficulties observed in hearing-impaired ears for speech recognition in noise. Speech
recognition in stationary noise was predicted mainly by auditory threshold while speech
recognition in fluctuating noise was predicted by a combination having a larger contribution
from frequency resolution. When data were pooled across the two studies, speech recognition in
noise in mild hearing losses (below 40 dB) was predicted mainly by hearing threshold, whereas in
moderate hearing losses (above 40 dB) it was predicted mainly by frequency resolution. Thus it can be
observed that the importance of auditory resolution (in this case frequency resolution) increases
and the importance of the audiogram decreases as the degree of hearing loss increases, provided
speech is presented at audible levels. However, for all degrees of hearing impairment included
in the study, prediction based solely on hearing thresholds was not much worse than prediction
based on a combination of thresholds and frequency resolution. Lastly, hearing impairment was
shown to be multi-dimensional; main factors included hearing threshold, speech recognition in
stationary and fluctuating noise, frequency and temporal resolution, binaural processing,
loudness perception, cognition and self-reported hearing difficulties. A clinical test protocol for
defining an individual auditory profile is suggested based on these findings.
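
For readers who wish to reproduce the dimensionality analysis in outline, the sketch below shows an exploratory factor analysis of a test-battery data set. The file name, the number of factors and the varimax rotation are assumptions made for the illustration, not the settings used in the thesis.

```python
# Illustrative exploratory factor analysis of test-battery scores.
# File name, number of factors and rotation are assumptions for this sketch.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

df = pd.read_csv("test_battery.csv")          # one row per participant/ear
measures = df.select_dtypes("number").dropna()

# Standardise so that every measure contributes on a comparable scale.
Z = StandardScaler().fit_transform(measures)

fa = FactorAnalysis(n_components=8, rotation="varimax", random_state=0)
fa.fit(Z)

# Loadings: which measures cluster onto which factor.
loadings = pd.DataFrame(fa.components_.T,
                        index=measures.columns,
                        columns=[f"factor_{i + 1}" for i in range(8)])
print(loadings.round(2))
```

Measures that load strongly on the same factor would be interpreted as tapping the same underlying dimension of hearing, such as the threshold, resolution or binaural factors listed above.
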
Conclusions: Speech recognition in noise depends on a combination of audibility of the speech
components (hearing threshold) and frequency resolution. Models such as SII that do not
include resolution tend to over-predict speech recognition performance in noise to some extent,
especially for more severe hearing impairments. However, the over-prediction is not great. It
follows that for clinical purposes there is not much to be gained from more complex
psychoacoustic characterisation of sensorineural hearing impairment, when the purpose is to
predict or explain difficulty understanding speech in noise. A conventional audiogram, possibly
supplemented by measurement of frequency resolution at 500 Hz, is sufficient. However, if the purpose
is to acquire a detailed individual auditory profile, the multidimensional nature of hearing loss
should not be ignored. Findings from the present study show that, along with loss of sensitivity
and reduced frequency resolution ability, binaural processing, loudness perception, cognition
and self-report measures help to characterize this multi-dimensionality. Detailed studies should
hence focus on these multiple dimensions of hearing loss and incorporate measurement of a wide
variety of auditory capabilities, rather than just a few, in order to gain a
complete picture of auditory functioning.
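
For readers unfamiliar with the SII, the sketch below illustrates the general idea of an importance-weighted audibility index: predicted performance depends on how much of the speech spectrum exceeds the noise or the listener's threshold in each frequency band. It is a highly simplified illustration with made-up band levels and weights, not the ANSI S3.5 procedure or data from the thesis.

```python
# Highly simplified, illustrative audibility index in the spirit of the SII:
# importance-weighted audibility summed across frequency bands. All band
# centres, weights and levels below are made-up example values.
import numpy as np

bands_hz     = np.array([250, 500, 1000, 2000, 4000])     # example band centres
importance   = np.array([0.10, 0.15, 0.25, 0.30, 0.20])   # example weights, sum to 1
speech_db    = np.array([55, 55, 52, 48, 44])             # example speech band levels
noise_db     = np.array([45, 45, 45, 45, 45])             # example noise band levels
threshold_db = np.array([10, 10, 20, 45, 60])             # example hearing thresholds

# The effective masker in each band is whichever is higher: the external
# noise or the level corresponding to the listener's own threshold.
masker_db = np.maximum(noise_db, threshold_db)

# Band audibility: proportion of a nominal 30 dB speech dynamic range
# (peaks assumed 15 dB above the average speech level) exceeding the masker.
audibility = np.clip((speech_db - masker_db + 15) / 30, 0.0, 1.0)

sii_like = float(np.sum(importance * audibility))
print(f"Audibility-based index: {sii_like:.2f}")  # 0 = inaudible, 1 = fully audible
```

Because nothing in such a calculation degrades when auditory filters broaden, any additional difficulty caused by reduced frequency resolution is invisible to it, which is consistent with the modest over-prediction noted above.
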
Frequency resolution at low frequency (500 Hz) as a predictive factor for speech recognition in
noise is a new finding. Few previous studies have included low-frequency measures of hearing,
which may explain why it has not emerged previously. Yet this finding appears to be robust, as
it was consistent across both of the present studies. It may relate to differentiation of vowel
components of speech. The present work was unable to confirm the suggestion from previous
studies that measures of temporal resolution help to predict speech recognition in fluctuating
noise, possibly because few participants had extremely poor temporal resolution ability.

More information

Published date: October 2010
Organisations: University of Southampton, Human Sciences Group

Identifiers

Local EPrints ID: 191083
URI: http://eprints.soton.ac.uk/id/eprint/191083
PURE UUID: 4e520663-9bef-429a-bbe0-6f60afdd90e3

Catalogue record

Date deposited: 16 Jun 2011 14:20
Last modified: 14 Mar 2024 03:43

Contributors

Author: S.A. Athalye
Thesis advisor: M.E. Lutman
