Monash University
Department of Physiology

Neuropsychology and Cognition Poster Presentation

P0803 - Discriminating Spatialised Speech in Complex Environments in Multiple Sclerosis (ID 1538)

Presentation Number: P0803
Presentation Topic: Neuropsychology and Cognition

Abstract

Background

Multiple Sclerosis (MS) is a multi-component disease in which inflammatory and neurodegenerative processes disrupt wide-ranging cerebral systems, including auditory networks. Although cochlear hearing loss is uncommon, people with MS (pwMS) frequently report deficits in binaural hearing, the integration of sound inputs from the two ears that underpins acoustic spatial localization and the separation of important signals from competing sounds. Spatial processing deficits have been described in pwMS using localization tasks with simple tones presented in silence, but they have yet to be evaluated in realistic listening situations, such as speech emanating from various spatial locations within a noisy environment.

Objectives

To investigate how pwMS discriminate speech that appears to emanate from different spatial positions against a background of competing conversation.

Methods

Pre-recorded everyday sentences from a standard list (Bamford-Kowal-Bench sentences) were presented via headphones, with virtual acoustic techniques used to simulate origins at 0°, 20° and 50° on the azimuth plane around the listener. Simultaneous eight-talker babble was presented as if emanating from 0°. Controls (n = 20) and age-matched pwMS with mild (Expanded Disability Status Scale (EDSS) score < 2; n = 23), moderate (EDSS 2.5 – 4.5; n = 16) and advanced disability (EDSS 5 – 7; n = 8) were required to repeat the target sentence. Mild pwMS also completed the Paced Auditory Serial Addition Test (PASAT) and a basic three-alternative forced-choice spatial task of detecting interaural time differences (a binaural spatial cue) in noise bursts. All participants passed a standard hearing evaluation.
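
A minimal sketch of the binaural timing cue involved, assuming a spherical-head (Woodworth) approximation: the abstract does not describe the actual virtual acoustic rendering (which presumably used measured head-related transfer functions), so the head radius, the delay-only spatialisation helper and the sampling rate below are illustrative assumptions only, showing how an interaural time difference grows with azimuth for the 0°, 20° and 50° positions used.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, approximate speed of sound in air
HEAD_RADIUS = 0.0875     # m, typical adult head radius (assumed, not from the study)

def woodworth_itd(azimuth_deg: float) -> float:
    """Interaural time difference (seconds) predicted by the spherical-head
    (Woodworth) approximation for a source on the azimuth plane; 0 deg = straight ahead."""
    theta = np.deg2rad(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + np.sin(theta))

def spatialise_itd_only(mono: np.ndarray, fs: int, azimuth_deg: float) -> np.ndarray:
    """Crude headphone spatialisation that delays the far ear by the ITD.
    Real virtual-acoustic rendering would also apply level and spectral (HRTF) cues."""
    delay = int(round(woodworth_itd(azimuth_deg) * fs))
    near = np.concatenate([mono, np.zeros(delay)])
    far = np.concatenate([np.zeros(delay), mono])
    return np.column_stack([near, far])   # column 0 = near ear, column 1 = far ear

if __name__ == "__main__":
    fs = 44100
    for az in (0, 20, 50):                # target-sentence azimuths from the study
        print(f"{az:2d} deg -> ITD ~ {woodworth_itd(az) * 1e6:5.0f} microseconds")
    burst = np.random.randn(fs // 10)     # 100 ms noise burst, loosely mirroring the ITD task
    stereo = spatialise_itd_only(burst, fs, 20)
```

Under these assumed values the predicted ITDs are roughly 0, 176 and 418 microseconds for the three azimuths.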

Results

Sentence intelligibility increased for all listeners when speech was spatially separated from the noise at 20° and 50° azimuth compared with when the stimuli were colocalized with the noise at 0°. A mixed-effects model confirmed that a one-unit increase in spatial separation increased the odds of discriminating the correct sentence by 5% for controls, but by only 3% for moderate and advanced pwMS. Spatial processing in mild pwMS was comparable to controls in both the complex babble environment and the basic three-alternative noise-burst task. PASAT scores correlated moderately with discrimination scores in the colocalized condition (0°; r = 0.5, p < 0.01) and strongly in the most widely separated condition (50°; r = 0.7, p < 0.0001).
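
As a small illustration of what the reported odds figures imply, assuming "one unit" of separation means one degree of azimuth (our reading; the model specification is not given in the abstract), the per-unit percentages can be converted to log-odds slopes and compounded across the 50° condition:

```python
import math

# Reported per-unit increases in the odds of a correct sentence
# ("one unit" assumed here to be one degree of azimuthal separation).
odds_step = {"controls": 0.05, "moderate/advanced pwMS": 0.03}

for group, step in odds_step.items():
    beta = math.log(1 + step)          # implied log-odds slope per degree
    at_50 = (1 + step) ** 50           # odds multiplier compounded over 50 degrees
    print(f"{group:24s} beta ~ {beta:.3f}/deg, odds x{at_50:.1f} at 50 deg")
```

Under that reading, the odds of a correct response grow by roughly a factor of 11 for controls but only about 4 for moderate and advanced pwMS at the largest separation, one way of quantifying the smaller spatial release from masking in the more disabled groups.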

Conclusions

Knowing the spatial location of a sound is particularly critical in a complex noisy environment, as spatial cues help to group ambiguous sound elements into coherent streams. Although pwMS were able to use spatial cues, those with moderate and advanced disability did not receive the same spatial release from noise as controls. As spatial perception in MS has largely been studied in the visual domain, this is the first study to investigate how pwMS navigate their acoustic surroundings and communicate in noisy social environments.
