Document Type
Article
Publication Date
5-16-2023
Abstract
Aphasia is a language disorder that often involves speech comprehension impairments affecting communication. In face-to-face settings, speech is accompanied by mouth and facial movements, but little is known about the extent to which they benefit aphasic comprehension. This study investigated the benefit of visual information accompanying speech for word comprehension in people with aphasia (PWA) and the neuroanatomic substrates of any benefit. Thirty-six PWA and 13 matched neurotypical control participants performed a picture-word verification task in which they indicated whether a picture of an animate/inanimate object matched a subsequent word produced by an actress in a video. Stimuli were either audiovisual (with visible mouth and facial movements) or auditory-only (still picture of a silhouette), with audio being either clear (unedited) or degraded (6-band noise-vocoding). We found that visual speech information was more beneficial for neurotypical participants than for PWA, and more beneficial for both groups when speech was degraded. A multivariate lesion-symptom mapping analysis for the degraded speech condition showed that lesions to the superior temporal gyrus, underlying insula, primary and secondary somatosensory cortices, and inferior frontal gyrus were associated with reduced benefit of audiovisual compared to auditory-only speech, suggesting that the integrity of these fronto-temporo-parietal regions may facilitate cross-modal mapping. These findings provide initial insights into the impact of audiovisual information on comprehension in aphasia and the brain regions mediating any benefit.
Recommended Citation
Krason, Anna; Vigliocco, Gabriella; Mailend, Marja-Liisa; Stoll, Harrison; Varley, Rosemary; and Buxbaum, Laurel J., "Benefit of Visual Speech Information for Word Comprehension in Post-stroke Aphasia" (2023). Moss-Magee Rehabilitation Papers. Paper 6.
https://jdc.jefferson.edu/mossrehabfp/6
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Language
English
Comments
This article is the author's final published version in Cortex, Volume 165, August 2023, pp. 86-100.
The published version is available at https://doi.org/10.1016/j.cortex.2023.04.011. Copyright © 2023 The Author(s). Published by Elsevier Ltd.