Does Signal Degradation Affect Top-Down Processing of Speech?

Anita Wagner, Carina Pals, Charlotte de Blecourt, Anastasios Sarampalis, Deniz Başkent

Speech perception is shaped by both the acoustic signal and listeners’ knowledge of the world and semantic context. Access to semantic information can facilitate the interpretation of degraded speech, such as speech in background noise or the speech signal transmitted via cochlear implants (CIs). This paper focuses on the latter and investigates the time course of word recognition, and how sentential context reduces listeners’ dependency on the acoustic signal, for natural speech and for speech degraded via an acoustic CI simulation.
In an eye-tracking experiment, we combined recordings of listeners’ gaze fixations with pupillometry to capture the effects of semantic information on both the time course and the effort of speech processing. Normal-hearing listeners were presented with sentences with or without a semantically constraining verb (e.g., crawl) preceding the target word (baby), and their ocular responses to four pictures were recorded: the target, a phonological competitor (bay), a semantic competitor (worm), and an unrelated distractor.
The results show that for natural speech, listeners’ gazes reflect their uptake of acoustic information and their integration of the preceding semantic context. Degradation of the signal leads to later disambiguation of phonologically similar words and to delayed integration of semantic information. Complementing this, the pupil dilation data show that early semantic integration reduces the effort of disambiguating phonologically similar words. Processing degraded speech comes with increased effort due to the impoverished nature of the signal, and the delayed integration of semantic information further constrains listeners’ ability to compensate for inaudible portions of the signal.