1. Li S, Wang Y, Yu Q, Feng Y, Tang P. The Effect of Visual Articulatory Cues on the Identification of Mandarin Tones by Children With Cochlear Implants. J Speech Lang Hear Res 2024;67:2106-2114. [PMID: 38768072; DOI: 10.1044/2024_jslhr-23-00559]
Abstract
PURPOSE This study explored the facilitatory effect of visual articulatory cues on the identification of Mandarin lexical tones by children with cochlear implants (CIs) in both quiet and noisy environments. It also explored whether early implantation is associated with better use of visual cues in tonal identification. METHOD Participants included 106 children with CIs and 100 normal-hearing (NH) controls. A tonal identification task was employed using a two-alternative forced-choice picture-pointing paradigm. Participants' tonal identification accuracies were compared between audio-only (AO) and audiovisual (AV) modalities. Correlations between implantation ages and visual benefits (accuracy differences between the AO and AV modalities) were also examined. RESULTS Children with CIs demonstrated improved identification accuracy from the AO to the AV modality in the noisy environment. Additionally, earlier implantation was significantly correlated with a greater visual benefit in noise. CONCLUSIONS These findings indicate that children with CIs benefit from visual cues in tonal identification in noise and that early implantation enhances the visual benefit. These results thus have practical implications for tonal perception interventions for Mandarin-speaking children with CIs.
Affiliation(s)
- Shanpeng Li: MIIT Key Lab for Language Information Processing and Applications, School of Foreign Studies, Nanjing University of Science and Technology, China
- Yinuo Wang: Department of English, Linguistics and Theatre Studies, Faculty of Arts & Social Sciences, National University of Singapore
- Qianxi Yu: MIIT Key Lab for Language Information Processing and Applications, School of Foreign Studies, Nanjing University of Science and Technology, China
- Yan Feng: MIIT Key Lab for Language Information Processing and Applications, School of Foreign Studies, Nanjing University of Science and Technology, China
- Ping Tang: MIIT Key Lab for Language Information Processing and Applications, School of Foreign Studies, Nanjing University of Science and Technology, China
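
As a reading aid for the visual-benefit measure defined in this abstract (the AV-minus-AO accuracy difference, correlated with implantation age), the sketch below shows that computation on simulated scores. All values and variable names are illustrative placeholders, not study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 106                                    # size of the CI group in the study
age_at_implant = rng.uniform(1.0, 6.0, n)  # years; simulated, NOT study data
acc_ao = rng.uniform(0.55, 0.85, n)        # audio-only accuracy in noise (simulated)
# Simulate a larger audiovisual gain for earlier-implanted children.
acc_av = np.clip(acc_ao + 0.15 - 0.02 * age_at_implant + rng.normal(0, 0.02, n), 0, 1)

visual_benefit = acc_av - acc_ao           # AV-minus-AO accuracy difference
r, p = stats.pearsonr(age_at_implant, visual_benefit)
print(f"r = {r:.2f}, p = {p:.3g}")         # negative r: earlier implantation, larger benefit
```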

2. Liu Y, Wang Z, Wei T, Zhou S, Yin Y, Mi Y, Liu X, Tang Y. Alterations of Audiovisual Integration in Alzheimer's Disease. Neurosci Bull 2023;39:1859-1872. [PMID: 37812301; PMCID: PMC10661680; DOI: 10.1007/s12264-023-01125-7]
Abstract
Audiovisual integration is a vital information process involved in cognition and is closely correlated with aging and Alzheimer's disease (AD). In this review, we evaluated the altered audiovisual integrative behavioral symptoms in AD. We further analyzed the bidirectional relationships between AD pathologies and audiovisual integration alterations and suggested possible mechanisms underlying these alterations in AD, including the imbalance between energy demand and supply, activity-dependent degeneration, disrupted brain networks, and cognitive resource overloading. Then, based on clinical characteristics including electrophysiological and imaging data related to audiovisual integration, we emphasized the value of audiovisual integration alterations as potential biomarkers for the early diagnosis and progression of AD. We also highlighted that treatments targeting audiovisual integration contributed to widespread pathological improvements in AD animal models and to cognitive improvements in AD patients. Moreover, investigation into audiovisual integration alterations in AD provides new insights into sensory information processing.
Affiliation(s)
- Yufei Liu, Zhibin Wang, Tao Wei, Shaojiong Zhou, Yunsi Yin, Yingxin Mi, Xiaoduo Liu, Yi Tang: Department of Neurology and Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing 100053, China

3. Seol HY, Jo M, Yun H, Park JG, Byun HM, Moon IJ. Comparison of speech recognition performance with and without a face mask between a basic and a premium hearing aid in hearing-impaired listeners. Am J Otolaryngol 2023;44:103929. [PMID: 37245326; PMCID: PMC10200274; DOI: 10.1016/j.amjoto.2023.103929]
Abstract
BACKGROUND The mask mandate during the COVID-19 pandemic led to communication challenges, as face masks reduce sound energy and remove visual cues. This study examines the impact of a face mask on sound energy and compares speech recognition performance between a basic and a premium hearing aid. METHODS Participants watched four video clips (a female and a male speaker, each with and without a face mask) and repeated the target sentences in various test conditions. Real-ear measurement was performed to investigate the changes in sound energy in the no-mask, surgical-mask, and N95-mask conditions. RESULTS Sound energy decreased significantly for all mask types. For speech recognition, the premium hearing aid showed significant improvement in the mask condition. CONCLUSION The findings encourage health care professionals to actively use communication strategies, such as speaking slowly and reducing background noise, when interacting with individuals with hearing loss.
Affiliation(s)
- Hye Yoon Seol: Department of Communication Disorders, Ewha Womans University, Seoul, Republic of Korea; Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea
- Mini Jo: Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea
- Heejung Yun: Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea
- Hye Min Byun: Demant Korea Co., Ltd., Seoul, Republic of Korea
- Il Joon Moon: Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea; Department of Otolaryngology-Head & Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea

4. Baron A, Harwood V, Kleinman D, Campanelli L, Molski J, Landi N, Irwin J. Where on the face do we look during phonemic restoration: An eye-tracking study. Front Psychol 2023;14:1005186. [PMID: 37303890; PMCID: PMC10249372; DOI: 10.3389/fpsyg.2023.1005186]
Abstract
Face-to-face communication typically involves auditory and visual components of the speech signal. To examine the effect of task demands on gaze patterns in response to a speaking face, adults participated in two eye-tracking experiments with an audiovisual condition (articulatory information from the mouth was visible) and a pixelated condition (articulatory information was not visible). Further, task demands were manipulated by having listeners respond in a passive (no response) or an active (button press response) context. The active experiment required participants to discriminate between speech stimuli and was designed to mimic environmental situations that require one to use visual information to disambiguate the speaker's message, simulating different listening conditions in real-world settings. Stimuli included a clear exemplar of the syllable /ba/ and a second exemplar in which the formants of the initial consonant were reduced, creating an /a/-like consonant. Consistent with our hypothesis, results revealed that fixations to the mouth were greatest in the audiovisual active experiment and that visual articulatory information led to a phonemic restoration effect for the /a/ speech token. In the pixelated condition, participants fixated on the eyes, and discrimination of the deviant token within the active experiment was significantly greater than in the audiovisual condition. These results suggest that when required to disambiguate changes in speech, adults may look to the mouth for additional cues to support processing when such information is available.
Affiliation(s)
- Alisa Baron: Department of Communicative Disorders, University of Rhode Island, Kingston, RI, United States
- Vanessa Harwood: Department of Communicative Disorders, University of Rhode Island, Kingston, RI, United States
- Luca Campanelli: Department of Communicative Disorders, The University of Alabama, Tuscaloosa, AL, United States
- Joseph Molski: Department of Communicative Disorders, University of Rhode Island, Kingston, RI, United States
- Nicole Landi: Haskins Laboratories, New Haven, CT, United States; Department of Psychological Sciences, University of Connecticut, Storrs, CT, United States
- Julia Irwin: Haskins Laboratories, New Haven, CT, United States; Department of Psychology, Southern Connecticut State University, New Haven, CT, United States

5. Beadle J, Kim J, Davis C. Visual Speech Improves Older and Younger Adults' Response Time and Accuracy for Speech Comprehension in Noise. Trends Hear 2022;26:23312165221145006. [PMID: 36524310; PMCID: PMC9761220; DOI: 10.1177/23312165221145006]
Abstract
Past research suggests that older adults expend more cognitive resources when processing visual speech than younger adults. If so, given resource limitations, older adults may not get as large a visual speech benefit as younger ones on a resource-demanding speech processing task. We tested this using a speech comprehension task that required attention across two talkers and a simple response (i.e., the question-and-answer task) and measured response time and accuracy. Specifically, we compared the size of the visual speech benefit for older and younger adults. We also examined whether the presence of a visual distractor would reduce the visual speech benefit more for older than younger adults. Twenty-five older adults (12 females, mean age = 72 years) and 25 younger adults (17 females, mean age = 22 years) completed the question-and-answer task under time pressure. The task included the following conditions: auditory and visual (AV) speech; AV speech plus visual distractor; and auditory speech with static face images. Both age groups showed a visual speech benefit regardless of whether a visual distractor was also presented. Likewise, the size of the visual speech benefit did not significantly interact with age group for accuracy or the potentially more sensitive response time measure.
Affiliation(s)
- Julie Beadle: The MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Sydney, Australia; The HEARing CRC, Australia
- Jeesun Kim: The MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Sydney, Australia
- Chris Davis: The MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Sydney, Australia; The HEARing CRC, Australia. Correspondence: Chris Davis, Western Sydney University, The MARCS Institute for Brain, Behaviour and Development, Westmead Innovation Quarter, Building U, Level 4, 160 Hawkesbury Road, Westmead NSW 2145, Australia

6. Van Engen KJ, Dey A, Sommers MS, Peelle JE. Audiovisual speech perception: Moving beyond McGurk. J Acoust Soc Am 2022;152:3216. [PMID: 36586857; PMCID: PMC9894660; DOI: 10.1121/10.0015262]
Abstract
Although it is clear that sighted listeners use both auditory and visual cues during speech perception, the manner in which multisensory information is combined is a matter of debate. One approach to measuring multisensory integration is to use variants of the McGurk illusion, in which discrepant auditory and visual cues produce auditory percepts that differ from those based on unimodal input. Not all listeners show the same degree of susceptibility to the McGurk illusion, and these individual differences are frequently used as a measure of audiovisual integration ability. However, despite their popularity, we join the voices of others in the field to argue that McGurk tasks are ill-suited for studying real-life multisensory speech perception: McGurk stimuli are often based on isolated syllables (which are rare in conversations) and necessarily rely on audiovisual incongruence that does not occur naturally. Furthermore, recent data show that susceptibility to McGurk tasks does not correlate with performance during natural audiovisual speech perception. Although the McGurk effect is a fascinating illusion, truly understanding the combined use of auditory and visual information during speech perception requires tasks that more closely resemble everyday communication: namely, words, sentences, and narratives with congruent auditory and visual speech cues.
Affiliation(s)
- Kristin J Van Engen: Department of Psychological and Brain Sciences, Washington University, St. Louis, Missouri 63130, USA
- Avanti Dey: PLOS ONE, 1265 Battery Street, San Francisco, California 94111, USA
- Mitchell S Sommers: Department of Psychological and Brain Sciences, Washington University, St. Louis, Missouri 63130, USA
- Jonathan E Peelle: Department of Otolaryngology, Washington University, St. Louis, Missouri 63130, USA

7. Begau A, Arnau S, Klatt LI, Wascher E, Getzmann S. Using visual speech at the cocktail-party: CNV evidence for early speech extraction in younger and older adults. Hear Res 2022;426:108636. [DOI: 10.1016/j.heares.2022.108636]

8. Begau A, Klatt LI, Schneider D, Wascher E, Getzmann S. The role of informational content of visual speech in an audiovisual cocktail party: Evidence from cortical oscillations in young and old participants. Eur J Neurosci 2022;56:5215-5234. [PMID: 36017762; DOI: 10.1111/ejn.15811]
Abstract
Age-related differences in the processing of audiovisual speech in a multi-talker environment were investigated by analysing event-related spectral perturbations (ERSPs), focusing on theta, alpha and beta oscillations, which are assumed to reflect conflict processing, multisensory integration and attentional mechanisms, respectively. Eighteen older and 21 younger healthy adults completed a two-alternative forced-choice word discrimination task, responding to audiovisual speech stimuli. In a cocktail-party scenario with two competing talkers (located at -15° and 15° azimuth), target words (/yes/ or /no/) appeared at a pre-defined (attended) position, distractor words at the other position. In two audiovisual conditions, acoustic speech was combined either with informative or uninformative visual speech. While a behavioural benefit for informative visual speech occurred for both age groups, differences between audiovisual conditions in the theta and beta bands were only present for older adults. A stronger increase in theta perturbations for stimuli containing uninformative visual speech could be associated with early conflict processing, while a stronger suppression in beta perturbations for informative visual speech could be associated with audiovisual integration. Compared to the younger group, the older group showed generally stronger beta perturbations. No condition differences in the alpha band were found. Overall, the findings suggest age-related differences in audiovisual speech integration in a multi-talker environment. While the behavioural benefit of informative visual speech was unaffected by age, older adults had a stronger need for cognitive control when processing conflicting audiovisual speech input. Furthermore, mechanisms of audiovisual integration are differently activated depending on the informational content of the visual information.
Affiliation(s)
- Alexandra Begau, Laura-Isabelle Klatt, Daniel Schneider, Edmund Wascher, Stephan Getzmann: Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany

9. Moon IJ, Jo M, Kim GY, Kim N, Cho YS, Hong SH, Seol HY. How Does a Face Mask Impact Speech Perception? Healthcare (Basel) 2022;10:1709. [PMID: 36141322; PMCID: PMC9498704; DOI: 10.3390/healthcare10091709]
Abstract
Face masks were mandatory during the COVID-19 pandemic, leading to attenuation of sound energy and loss of visual cues that are important for communication. This study explores how a face mask affects speech performance for individuals with and without hearing loss. Four video recordings (a female speaker with and without a face mask and a male speaker with and without a face mask) were used to examine individuals' speech performance. The participants completed a listen-and-repeat task while watching the four types of video recordings. Acoustic characteristics of speech signals based on mask type (no mask, surgical, and N95) were also examined. The availability of visual cues was beneficial for speech understanding: both groups showed significant improvements in speech perception when they were able to see the speaker without the mask. However, when the speakers were wearing a mask, no statistically significant difference was observed between the no-visual-cues and visual-cues conditions. These findings demonstrate that the provision of visual cues is beneficial for speech perception for individuals with normal hearing and individuals with hearing impairment. The study underscores the importance of communication strategies during the pandemic, when visual information is lost due to the face mask.
Affiliation(s)
- Il-Joon Moon: Department of Otolaryngology-Head & Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Korea; Hearing Research Laboratory, Samsung Medical Center, Seoul 06351, Korea
- Mini Jo: Hearing Research Laboratory, Samsung Medical Center, Seoul 06351, Korea
- Ga-Young Kim: Hearing Research Laboratory, Samsung Medical Center, Seoul 06351, Korea
- Nicolas Kim: Hearing Research Laboratory, Samsung Medical Center, Seoul 06351, Korea; Department of Molecular Biology, Cell Biology, and Biochemistry, Brown University, Providence, RI 02912, USA
- Young-Sang Cho: Department of Otolaryngology-Head & Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Korea; Hearing Research Laboratory, Samsung Medical Center, Seoul 06351, Korea
- Sung-Hwa Hong: Hearing Research Laboratory, Samsung Medical Center, Seoul 06351, Korea; Department of Otolaryngology-Head & Neck Surgery, Samsung Changwon Hospital, Sungkyunkwan University School of Medicine, Changwon 51353, Korea
- Hye-Yoon Seol: Hearing Research Laboratory, Samsung Medical Center, Seoul 06351, Korea; Medical Research Institute, Sungkyunkwan University School of Medicine, Suwon 16419, Korea (correspondence; Tel.: +82-2-3410-1630)

10. Gabriel GA, Harris LR, Henriques DYP, Pandi M, Campos JL. Multisensory visual-vestibular training improves visual heading estimation in younger and older adults. Front Aging Neurosci 2022;14:816512. [PMID: 36092809; PMCID: PMC9452741; DOI: 10.3389/fnagi.2022.816512]
Abstract
Self-motion perception (e.g., when walking or driving) relies on the integration of multiple sensory cues, including visual, vestibular, and proprioceptive signals. Changes in the efficacy of multisensory integration have been observed in older adults (OAs), which can sometimes lead to errors in perceptual judgments and have been associated with functional declines such as increased falls risk. The objectives of this study were to determine whether passive, visual-vestibular self-motion heading perception could be improved by providing feedback during multisensory training, and whether training-related effects might be more apparent in OAs than in younger adults (YAs). We also investigated the extent to which training might transfer to improved standing balance. OAs and YAs were passively translated and asked to judge their direction of heading relative to straight ahead (left/right). Each participant completed three conditions: (1) vestibular-only (passive physical motion in the dark), (2) visual-only (cloud-of-dots display), and (3) bimodal (congruent vestibular and visual stimulation). Measures of heading precision and bias were obtained for each condition. Over the course of 3 days, participants were asked to make bimodal heading judgments and were provided with feedback ("correct"/"incorrect") on 900 training trials. Post-training, participants' biases and precision in all three sensory conditions (vestibular, visual, bimodal), and their standing-balance performance, were assessed. Results demonstrated improved overall precision, i.e., reduced just-noticeable differences (JNDs), in heading perception after training. Pre- versus post-training difference scores showed that improvements in JNDs were only found in the visual-only condition. Particularly notable is that 27% of OAs initially could not discriminate their heading at all in the visual-only condition pre-training, but subsequently obtained post-training thresholds in the visual-only condition that were similar to those of the other participants. While OAs seemed to show optimal integration pre- and post-training (i.e., did not show significant differences between predicted and observed JNDs), YAs only showed optimal integration post-training. There were no significant effects of training for bimodal or vestibular-only heading estimates, nor for standing-balance performance. These results indicate that it may be possible to improve unimodal (visual) heading perception using a multisensory (visual-vestibular) training paradigm. The results may also help to inform interventions targeting tasks for which effective self-motion perception is important.
Affiliation(s)
- Grace A. Gabriel: KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Laurence R. Harris: Department of Psychology, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada
- Denise Y. P. Henriques: Centre for Vision Research, York University, Toronto, ON, Canada; Department of Kinesiology, York University, Toronto, ON, Canada
- Maryam Pandi: KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Jennifer L. Campos: KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada

11. Butera IM, Larson ED, DeFreese AJ, Lee AK, Gifford RH, Wallace MT. Functional localization of audiovisual speech using near infrared spectroscopy. Brain Topogr 2022;35:416-430. [PMID: 35821542; PMCID: PMC9334437; DOI: 10.1007/s10548-022-00904-1]
Abstract
Visual cues are especially vital for hearing-impaired individuals, such as cochlear implant (CI) users, to understand speech in noise. Functional near-infrared spectroscopy (fNIRS) is a light-based imaging technology that is ideally suited for measuring the brain activity of CI users due to its compatibility with both the ferromagnetic and electrical components of these implants. In a preliminary step toward better elucidating the behavioral and neural correlates of audiovisual (AV) speech integration in CI users, we designed a speech-in-noise task and measured the extent to which 24 normal-hearing individuals could integrate the audio of spoken monosyllabic words with the corresponding visual signals of a female speaker. In our behavioral task, we found that audiovisual pairings provided average improvements of 103% and 197% over auditory-alone listening conditions at -6 and -9 dB signal-to-noise ratios in multi-talker background noise. In an fNIRS task using similar stimuli, we measured activity during auditory-only listening, visual-only lipreading, and AV listening conditions. We identified cortical activity in all three conditions over regions of middle and superior temporal cortex typically associated with speech processing and audiovisual integration. In addition, three channels active during the lipreading condition showed uncorrected correlations with behavioral measures of audiovisual gain as well as with the McGurk effect. Further work focusing on the regions of interest identified in this study could test how AV speech integration may differ for CI users who rely on this mechanism for daily communication.
Affiliation(s)
- Iliza M Butera: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Eric D Larson: Institute for Learning & Brain Sciences, University of Washington, Seattle, Washington, USA
- Andrea J DeFreese: Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Adrian KC Lee: Institute for Learning & Brain Sciences, University of Washington, Seattle, Washington, USA; Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, USA
- René H Gifford: Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
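
A note on the 103% and 197% improvements reported above: relative audiovisual gain over auditory-alone performance is commonly computed as (AV - A)/A. The sketch below assumes that definition (the paper may define its gain differently), with purely illustrative accuracies:

```python
def relative_av_gain(acc_a, acc_av):
    """Relative audiovisual gain over auditory-alone accuracy, in percent."""
    return 100.0 * (acc_av - acc_a) / acc_a

# Illustrative values only: doubling accuracy corresponds to a 100% gain.
print(relative_av_gain(acc_a=0.30, acc_av=0.61))  # ~103%
print(relative_av_gain(acc_a=0.15, acc_av=0.45))  # 200%
```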

12. Xiong YZ, Addleman DA, Nguyen NA, Nelson PB, Legge GE. Visual and Auditory Spatial Localization in Younger and Older Adults. Front Aging Neurosci 2022;14:838194. [PMID: 35493928; PMCID: PMC9043801; DOI: 10.3389/fnagi.2022.838194]
Abstract
Visual and auditory localization abilities are crucial in real-life tasks such as navigation and social interaction. Aging is frequently accompanied by vision and hearing loss, affecting spatial localization. The purpose of the current study is to elucidate the effect of typical aging on spatial localization and to establish a baseline for older individuals with pathological sensory impairment. Using a verbal report paradigm, we investigated how typical aging affects visual and auditory localization performance, the reliance on vision during sound localization, and sensory integration strategies when localizing audiovisual targets. Fifteen younger adults (N = 15, mean age = 26 years) and thirteen older adults (N = 13, mean age = 68 years) participated in this study, all with age-adjusted normal vision and hearing based on clinical standards. There were significant localization differences between younger and older adults, with the older group missing peripheral visual stimuli at significantly higher rates, localizing central stimuli as more peripheral, and being less precise in localizing sounds from central locations when compared to younger subjects. Both groups localized auditory targets better when the test space was visible compared to auditory localization when blindfolded. The two groups also exhibited similar patterns of audiovisual integration, showing optimal integration in central locations that was consistent with a Maximum-Likelihood Estimation model, but non-optimal integration in peripheral locations. These findings suggest that, despite the age-related changes in auditory and visual localization, the interactions between vision and hearing are largely preserved in older individuals without pathological sensory impairments.
Affiliation(s)
- Ying-Zi Xiong (correspondence): Department of Psychology, University of Minnesota, Minneapolis, MN, United States; Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, MN, United States
- Douglas A. Addleman (correspondence): Department of Psychology, University of Minnesota, Minneapolis, MN, United States; Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, MN, United States; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, United States
- Nam Anh Nguyen: Department of Psychology, University of Minnesota, Minneapolis, MN, United States
- Peggy B. Nelson: Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, MN, United States; Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, United States
- Gordon E. Legge: Department of Psychology, University of Minnesota, Minneapolis, MN, United States; Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, MN, United States
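
The optimal-integration test mentioned in this abstract, like the predicted-versus-observed JND comparison in entry 10, rests on the standard maximum-likelihood estimation (MLE) cue-combination model. A minimal sketch of the MLE prediction follows; the function name and example standard deviations are illustrative, not taken from the study:

```python
import numpy as np

def mle_integration(sigma_a, sigma_v):
    """Standard maximum-likelihood cue combination.

    Given unimodal noise (standard deviations) for the auditory and visual
    estimates, returns the optimal cue weights and the predicted bimodal
    (audiovisual) standard deviation.
    """
    r_a, r_v = 1 / sigma_a**2, 1 / sigma_v**2  # reliabilities (inverse variances)
    w_a = r_a / (r_a + r_v)                    # weight on the auditory cue
    w_v = r_v / (r_a + r_v)                    # weight on the visual cue
    sigma_av = np.sqrt(1 / (r_a + r_v))        # predicted bimodal SD
    return w_a, w_v, sigma_av

# Illustrative unimodal SDs in degrees; not values from the study.
w_a, w_v, sigma_av = mle_integration(sigma_a=6.0, sigma_v=2.0)
print(f"weights: A={w_a:.2f}, V={w_v:.2f}; predicted AV SD={sigma_av:.2f} deg")
# Optimal integration predicts sigma_av <= min(sigma_a, sigma_v); observed
# bimodal SDs larger than the prediction indicate non-optimal integration.
```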

13. Basharat A, Thayanithy A, Barnett-Cowan M. A Scoping Review of Audiovisual Integration Methodology: Screening for Auditory and Visual Impairment in Younger and Older Adults. Front Aging Neurosci 2022;13:772112. [PMID: 35153716; PMCID: PMC8829696; DOI: 10.3389/fnagi.2021.772112]
Abstract
With the rise of the aging population, many scientists studying multisensory integration have turned toward understanding how this process may change with age. This scoping review was conducted to understand and describe the scope and rigor with which researchers studying audiovisual sensory integration screen for hearing and vision impairment. A structured search in three licensed databases (Scopus, PubMed, and PsychInfo) using the key concepts of multisensory integration, audiovisual modality, and aging revealed 2,462 articles, which were screened for inclusion by two reviewers. Articles were included if they (1) tested healthy older adults (minimum mean or median age of 60) with younger adults as a comparison (mean or median age between 18 and 35), (2) measured auditory and visual integration, (3) were written in English, and (4) reported behavioral outcomes. Articles were excluded if they (1) tested taste exclusively, (2) tested olfaction exclusively, (3) tested somatosensation exclusively, (4) tested emotion perception, (5) were not written in English, (6) were clinical commentaries, editorials, interviews, letters, newspaper articles, abstracts only, or non-peer-reviewed literature (e.g., theses), or (7) focused on neuroimaging without a behavioral component. Data pertaining to the details of the study (e.g., country of publication, year of publication, etc.) were extracted; of greater importance to our research question, data pertaining to screening measures used for hearing and vision impairment (e.g., type of test used, whether hearing and visual aids were worn, thresholds used, etc.) were extracted, collated, and summarized. Our search revealed that only 64% of studies screened for age-abnormal hearing impairment, that only 51% screened for age-abnormal vision impairment, and that consistent definitions of normal or abnormal vision and hearing were not used among the studies that screened for sensory abilities. A total of 1,624 younger adults and 4,778 older participants were included in the scoping review, with males composing approximately 44% and females 56% of the total sample, and most of the data were obtained from only four countries. We recommend that studies investigating the effects of aging on multisensory integration screen for normal vision and hearing by using the World Health Organization's (WHO) hearing loss and visual impairment cut-off scores in order to maintain consistency with other aging researchers. As mild cognitive impairment (MCI) has been defined as a "transitional" or "transitory" stage between normal aging and dementia, and because approximately 3-5% of the aging population will develop MCI each year, it is important that researchers who aim to study a healthy aging population appropriately screen for MCI. One of our secondary aims was to determine how often researchers were screening for cognitive impairment and the types of tests that were used to do so. Our results revealed that only 55 out of 72 studies tested for neurological and cognitive function, and only a subset used standardized tests. Additionally, among the studies that used standardized tests, the cut-off scores used were not always adequate for screening out mild cognitive impairment. An additional secondary aim of this scoping review was to determine the feasibility of a future meta-analysis to further quantitatively evaluate the results (i.e., whether findings obtained from studies using self-reported vision and hearing screening methods differ significantly from those measuring vision and hearing impairment in the lab) and to assess the scope of this problem. We found that it may not be feasible to conduct a meta-analysis with the entire dataset of this scoping review; however, a meta-analysis could be conducted if stricter parameters were used (e.g., focusing on accuracy or response time data only). Systematic Review Registration: https://doi.org/10.17605/OSF.IO/GTUHD

14. Lasfargues-Delannoy A, Strelnikov K, Deguine O, Marx M, Barone P. Supra-normal skills in processing of visuo-auditory prosodic information by cochlear-implanted deaf patients. Hear Res 2021;410:108330. [PMID: 34492444; DOI: 10.1016/j.heares.2021.108330]
Abstract
Cochlear-implanted (CI) adults with acquired deafness are known to depend on multisensory integration (MSI) skills for speech comprehension, fusing speech-reading skills with their deficient auditory perception. However, little is known about how CI patients perceive prosodic information relating to speech content. Our study aimed to identify how CI patients use MSI between visual and auditory information to process paralinguistic prosodic information in multimodal speech, and which visual strategies they employ. A psychophysical assessment was developed in which CI patients and normal-hearing (NH) controls had to distinguish between a question and a statement. The controls were separated into two age groups (young and age-matched) to dissociate any effect of aging. In addition, the oculomotor strategies used when facing a speaker in this prosodic decision task were recorded using an eye-tracking device and compared to controls. This study confirmed that prosodic processing is multisensory, but it also revealed that CI patients showed significant supra-normal audiovisual integration for prosodic information compared to hearing controls, irrespective of age: CI patients had a visuo-auditory gain more than three times larger than that observed in hearing controls. Furthermore, CI participants performed better in the visuo-auditory situation through a specific oculomotor exploration of the face, fixating the mouth region significantly more than young NH participants, who fixated the eyes, whereas the age-matched controls presented an intermediate exploration pattern divided equally between the eyes and mouth. To conclude, our study demonstrated that CI patients have supra-normal MSI skills when integrating visual and auditory linguistic prosodic information, and that a specific adaptive strategy developed because it participates directly in speech content comprehension.
Affiliation(s)
- Anne Lasfargues-Delannoy: Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France; CHU Toulouse, Service d'Oto-Rhino-Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, France
- Kuzma Strelnikov: Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France; CHU Toulouse, France
- Olivier Deguine: Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France; CHU Toulouse, Service d'Oto-Rhino-Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, France
- Mathieu Marx: Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France; CHU Toulouse, Service d'Oto-Rhino-Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, France
- Pascal Barone: Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France

15. Seol HY, Kang S, Lim J, Hong SH, Moon IJ. Feasibility of Virtual Reality Audiological Testing: Prospective Study. JMIR Serious Games 2021;9:e26976. [PMID: 34463624; PMCID: PMC8441603; DOI: 10.2196/26976]
Abstract
Background It has been noted in the literature that there is a gap between clinical assessment and real-world performance. Real-world conversations entail visual and audio information, yet no audiological assessment tools include visual information. Virtual reality (VR) technology has been applied to various areas, including audiology. However, the use of VR in speech-in-noise perception has not yet been investigated. Objective The purpose of this study was to investigate the impact of virtual space (VS) on speech performance and its feasibility as a speech test instrument. We hypothesized that individuals' ability to recognize speech would improve when visual cues were provided. Methods A total of 30 individuals with normal hearing and 25 individuals with hearing loss completed pure-tone audiometry and the Korean version of the Hearing in Noise Test (K-HINT) under three conditions (conventional K-HINT [cK-HINT], VS on PC [VSPC], and VS head-mounted display [VSHMD]) at -10, -5, 0, and +5 dB signal-to-noise ratios (SNRs). Participants listened to target speech and repeated it back to the tester in all conditions. Hearing aid users in the hearing loss group completed testing under unaided and aided conditions. A questionnaire was administered after testing to gather subjective opinions on the headset, the VSHMD condition, and test preference. Results Provision of visual information had a significant impact on speech performance: the Mann-Whitney U test showed significant differences (P<.05) between the normal-hearing and hearing-impaired groups under all test conditions. Hearing aid use led to better integration of audio and visual cues, with significant differences between hearing aid and non-hearing aid users at -5 dB (P=.04) and 0 dB (P=.02) SNRs under the cK-HINT condition, and at -10 dB (P=.007) and 0 dB (P=.04) SNRs under the VSPC condition. Participants reported positive responses across almost all questionnaire items, preferring a test method with visual imagery but finding the headset heavy. Conclusions Findings are in line with previous literature showing that visual cues are beneficial for communication. This is the first study to include hearing aid users with a more naturalistic stimulus and a relatively simple test environment, suggesting the feasibility of VR audiological testing in clinical practice.
Affiliation(s)
- Hye Yoon Seol: Medical Research Institute, Sungkyunkwan University School of Medicine, Suwon, Republic of Korea; Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea
- Soojin Kang: Medical Research Institute, Sungkyunkwan University School of Medicine, Suwon, Republic of Korea; Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea
- Jihyun Lim: Center for Clinical Epidemiology, Samsung Medical Center, Seoul, Republic of Korea
- Sung Hwa Hong: Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea; Department of Otolaryngology-Head & Neck Surgery, Samsung Changwon Hospital, Changwon, Republic of Korea
- Il Joon Moon: Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea; Department of Otolaryngology-Head & Neck Surgery, Samsung Medical Center, Seoul, Republic of Korea
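
The SNR levels used in this and the other speech-in-noise studies listed here follow the standard power-ratio definition, SNR_dB = 10 * log10(P_signal / P_noise). As a hedged illustration (not the authors' stimulus-preparation code), one common way to mix speech and noise at a target SNR:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that speech + noise has the requested SNR in dB."""
    p_speech = np.mean(speech ** 2)               # mean-square signal power
    p_noise = np.mean(noise ** 2)                 # current noise power
    target_noise_power = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_noise_power / p_noise)

# Illustrative synthetic signals (not the study's test materials).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)
speech = np.sin(2 * np.pi * 440 * t)              # stand-in for a speech signal
noise = rng.normal(0, 1, t.size)
mixture = mix_at_snr(speech, noise, snr_db=-5)    # -5 dB SNR mixture
```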

16. Ceuleers D, Dhooge I, Degeest S, Van Steen H, Keppler H, Baudonck N. The Effects of Age, Gender and Test Stimuli on Visual Speech Perception: A Preliminary Study. Folia Phoniatr Logop 2021;74:131-140. [PMID: 34348290; DOI: 10.1159/000518205]
Abstract
INTRODUCTION To the best of our knowledge, there is a lack of reliable, validated, and standardized (Dutch) measuring instruments to document visual speech perception in a structured way. This study aimed to: (1) evaluate the effects of age, gender, and the word list used on visual speech perception, examined by a first version of the Dutch Test for (Audio-)Visual Speech Perception on word level (TAUVIS-words), and (2) assess the internal reliability of the TAUVIS-words. METHODS Thirty-nine normal-hearing adults divided into the following 3 age categories were included: (1) younger adults, age 18-39 years; (2) middle-aged adults, age 40-59 years; and (3) older adults, age >60 years. The TAUVIS-words consist of 4 word lists, i.e., 2 monosyllabic word lists (MS 1 and MS 2) and 2 polysyllabic word lists (PS 1 and PS 2). A first exploration of the effects of age, gender, and test stimuli (i.e., the word list used) on visual speech perception was conducted using the TAUVIS-words. A mixed-design analysis of variance (ANOVA) was conducted to analyze the results statistically. Lastly, the internal reliability of the TAUVIS-words was assessed by calculating Cronbach's α. RESULTS The results revealed a significant effect of the list used. More specifically, the score for MS 1 was significantly better compared to that for PS 2, and the score for PS 1 was significantly better compared to that for PS 2. Furthermore, a significant main effect of gender was found: women scored significantly better than men. The effect of age was not significant. The TAUVIS word lists were found to have good internal reliability. CONCLUSION This study was a first exploration of the effects of age, gender, and test stimuli on visual speech perception using the TAUVIS-words. Further research is necessary to optimize and validate the TAUVIS-words, making use of a larger study sample.
Affiliation(s)
- Dorien Ceuleers: Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Ingeborg Dhooge: Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium; Department of Ear, Nose, and Throat, Ghent University, Ghent, Belgium
- Sofie Degeest: Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Hannah Keppler: Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium; Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
- Nele Baudonck: Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
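
Because the internal-reliability result above rests on Cronbach's α, a minimal sketch of the standard formula may help; the demo score matrix is hypothetical, not TAUVIS data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects x n_items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                            # number of items
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical 0/1 word-identification scores: 5 listeners x 4 items.
demo = np.array([[1, 1, 1, 0],
                 [1, 0, 1, 1],
                 [0, 0, 1, 0],
                 [1, 1, 1, 1],
                 [0, 1, 0, 0]])
print(round(cronbach_alpha(demo), 2))
```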

17.
Abstract
OBJECTIVES When auditory and visual speech information are presented together, listeners obtain an audiovisual (AV) benefit or a speech understanding improvement compared with auditory-only (AO) or visual-only (VO) presentations. Cochlear-implant (CI) listeners, who receive degraded speech input and therefore understand speech using primarily temporal information, seem to readily use visual cues and can achieve a larger AV benefit than normal-hearing (NH) listeners. It is unclear, however, if the AV benefit remains relatively large for CI listeners when trying to understand foreign-accented speech when compared with unaccented speech. Accented speech can introduce changes to temporal auditory cues and visual cues, which could decrease the usefulness of AV information. Furthermore, we sought to determine if the AV benefit was relatively larger in CI compared with NH listeners for both unaccented and accented speech. DESIGN AV benefit was investigated for unaccented and Spanish-accented speech by presenting English sentences in AO, VO, and AV conditions to 15 CI and 15 age- and performance-matched NH listeners. Performance matching between NH and CI listeners was achieved by varying the number of channels of a noise vocoder for the NH listeners. Because of the differences in age and hearing history of the CI listeners, the effects of listener-related variables on speech understanding performance and AV benefit were also examined. RESULTS AV benefit was observed for both unaccented and accented conditions and for both CI and NH listeners. The two groups showed similar performance for the AO and AV conditions, and the normalized AV benefit was relatively smaller for the accented than the unaccented conditions. In the CI listeners, older age was associated with significantly poorer performance with the accented speaker compared with the unaccented speaker. The negative impact of age was somewhat reduced by a significant improvement in performance with access to AV information. CONCLUSIONS When auditory speech information is degraded by CI sound processing, visual cues can be used to improve speech understanding, even in the presence of a Spanish accent. The AV benefit of the CI listeners closely matched that of the NH listeners presented with vocoded speech, which was unexpected given that CI listeners appear to rely more on visual information to communicate. This result is perhaps due to the one-to-one age and performance matching of the listeners. While aging decreased CI listener performance with the accented speaker, access to visual cues boosted performance and could partially overcome the age-related speech understanding deficits for the older CI listeners.

18. Tremblay P, Basirat A, Pinto S, Sato M. Visual prediction cues can facilitate behavioural and neural speech processing in young and older adults. Neuropsychologia 2021;159:107949. [PMID: 34228997; DOI: 10.1016/j.neuropsychologia.2021.107949]
Abstract
The ability to process speech evolves over the course of the lifespan. Understanding speech at low acoustic intensity and in the presence of background noise becomes harder, and the ability of older adults to benefit from audiovisual speech also appears to decline. These difficulties can have important consequences for quality of life. Yet, a consensus on the cause of these difficulties is still lacking. The objective of this study was to examine the processing of speech in young and older adults under different modalities (i.e., auditory [A], visual [V], audiovisual [AV]) and in the presence of different visual prediction cues (i.e., no predictive cue [control], temporal predictive cue, phonetic predictive cue, and combined temporal and phonetic predictive cues). We focused on recognition accuracy and four auditory evoked potential (AEP) components: P1-N1-P2 and N2. Thirty-four right-handed French-speaking adults were recruited, including 17 younger adults (28 ± 2 years; 20-42 years) and 17 older adults (67 ± 3.77 years; 60-73 years). Participants completed a forced-choice speech identification task. The main findings of the study are: (1) the facilitatory effect of visual information was reduced, but present, in older compared to younger adults; (2) visual predictive cues facilitated speech recognition in younger and older adults alike; (3) age differences in AEPs were localized to later components (P2 and N2), suggesting that aging predominantly affects higher-order cortical processes related to speech processing rather than lower-level auditory processes; (4) specifically, AV facilitation on P2 amplitude was lower in older adults, the effect of the temporal predictive cue on N2 amplitude was reduced for older compared to younger adults, and P2 and N2 latencies were longer for older adults; and (5) behavioural performance was associated with P2 amplitude in older adults. Our results indicate that aging affects speech processing at multiple levels, including audiovisual integration (P2) and auditory attentional processes (N2). These findings have important implications for understanding barriers to communication in older age, as well as for the development of compensation strategies for those with speech processing difficulties.
Affiliation(s)
- Pascale Tremblay: Département de Réadaptation, Faculté de Médecine, Université Laval, Quebec City, Canada; Cervo Brain Research Centre, Quebec City, Canada
- Anahita Basirat: Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, Lille, France
- Serge Pinto: Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France
- Marc Sato: Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France

19. Begau A, Klatt LI, Wascher E, Schneider D, Getzmann S. Do congruent lip movements facilitate speech processing in a dynamic audiovisual multi-talker scenario? An ERP study with older and younger adults. Behav Brain Res 2021;412:113436. [PMID: 34175355; DOI: 10.1016/j.bbr.2021.113436]
Abstract
In natural conversations, visible mouth and lip movements play an important role in speech comprehension. There is evidence that visual speech information improves speech comprehension, especially for older adults and under difficult listening conditions. However, the neurocognitive basis is still poorly understood. The present EEG experiment investigated the benefits of audiovisual speech in a dynamic cocktail-party scenario with 22 younger (aged 20-34 years) and 20 older (aged 55-74 years) participants. We presented three simultaneously talking faces with a varying amount of visual speech input (still faces, visually unspecific, and audiovisually congruent). In a two-alternative forced-choice task, participants had to discriminate target words ("yes" or "no") among two distractors (one-digit number words). In half of the experimental blocks, the target was always presented from a central position; in the other half, occasional switches to a lateral position could occur. We investigated behavioral and electrophysiological modulations due to age, location switches, and the content of visual information, analyzing response times and accuracy as well as the P1, N1, P2, and N2 event-related potentials (ERPs) and the contingent negative variation (CNV) in the EEG. We found that audiovisually congruent speech information improved performance and modulated ERP amplitudes in both age groups, suggesting enhanced preparation and integration of the subsequent auditory input. In the older group, larger amplitude measures were found in early phases of processing (P1-N1), and amplitudes there were reduced in response to audiovisually congruent stimuli. In later processing phases (P2-N2), we found decreased amplitude measures in the older group, while an amplitude reduction for audiovisually congruent compared to visually unspecific stimuli was still observable. However, these benefits were only observed as long as no location switches occurred; switches led to enhanced amplitude measures in later processing phases (P2-N2). To conclude, meaningful visual information in a multi-talker setting, when presented from the expected location, is beneficial for both younger and older adults.
Collapse
Affiliation(s)
- Alexandra Begau
- Leibniz Research Centre for Working Environment and Human Factors, TU Dortmund, Germany.
| | - Laura-Isabelle Klatt
- Leibniz Research Centre for Working Environment and Human Factors, TU Dortmund, Germany
| | - Edmund Wascher
- Leibniz Research Centre for Working Environment and Human Factors, TU Dortmund, Germany
| | - Daniel Schneider
- Leibniz Research Centre for Working Environment and Human Factors, TU Dortmund, Germany
| | - Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, TU Dortmund, Germany
| |
Collapse
|
20
|
Dias JW, McClaskey CM, Harris KC. Audiovisual speech is more than the sum of its parts: Auditory-visual superadditivity compensates for age-related declines in audible and lipread speech intelligibility. Psychol Aging 2021; 36:520-530. [PMID: 34124922 PMCID: PMC8427734 DOI: 10.1037/pag0000613] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
Multisensory input can improve perception of ambiguous unisensory information. For example, speech heard in noise can be more accurately identified when listeners see a speaker's articulating face. Importantly, these multisensory effects can be superadditive to listeners' ability to process unisensory speech, such that audiovisual speech identification is better than the sum of auditory-only and visual-only speech identification. Age-related declines in auditory and visual speech perception have been hypothesized to be concomitant with stronger cross-sensory influences on audiovisual speech identification, but little evidence exists to support this. Currently, studies do not account for the multisensory superadditive benefit of auditory-visual input in their metrics of the auditory or visual influence on audiovisual speech perception. Here we treat multisensory superadditivity as independent from unisensory auditory and visual processing. In the current investigation, older and younger adults identified auditory, visual, and audiovisual speech in noisy listening conditions. Performance across these conditions was used to compute conventional metrics of the auditory and visual influence on audiovisual speech identification and a metric of auditory-visual superadditivity. Consistent with past work, auditory and visual speech identification declined with age, audiovisual speech identification was preserved, and no age-related differences in the auditory or visual influence on audiovisual speech identification were observed. However, we found that auditory-visual superadditivity improved with age. The novel findings suggest that multisensory superadditivity is independent of unisensory processing. As auditory and visual speech identification decline with age, compensatory changes in multisensory superadditivity may preserve audiovisual speech identification in older adults.
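The superadditivity metric described above has a simple arithmetic core: audiovisual identification compared against the sum of the unisensory scores. A minimal sketch, assuming made-up proportion-correct values rather than the study's data:
```python
# Sketch of the superadditivity logic: audiovisual (AV) identification is
# compared against the sum of auditory-only (A) and visual-only (V)
# identification. Variable names and the simple difference score are
# illustrative assumptions, not the authors' exact metric.
import numpy as np

acc_a = np.array([0.30, 0.25, 0.20])   # auditory-only proportion correct
acc_v = np.array([0.10, 0.12, 0.08])   # visual-only (lipreading)
acc_av = np.array([0.55, 0.50, 0.40])  # audiovisual

superadditivity = acc_av - (acc_a + acc_v)  # > 0 means more than the sum of parts
print(superadditivity)                      # e.g., [0.15 0.13 0.12]
```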
Collapse
Affiliation(s)
- James W Dias
- Department of Otolaryngology-Head and Neck Surgery
| | | | | |
Collapse
|
21
|
Pinto JO, Vieira De Melo BB, Dores AR, Peixoto B, Geraldo A, Barbosa F. Narrative review of the multisensory integration tasks used with older adults: inclusion of multisensory integration tasks into neuropsychological assessment. Expert Rev Neurother 2021; 21:657-674. [PMID: 33890537 DOI: 10.1080/14737175.2021.1914592] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
Abstract
Introduction: Age-related changes in sensory functioning impact the activities of daily living and interact with cognitive decline. Given the interactions between sensory and cognitive functioning, combining multisensory integration (MI) assessment with the neuropsychological assessment of older adults seems promising. This review aims to examine the characteristics and utility of MI tasks in the functional and cognitive assessment of older adults, with or without neurocognitive impairment. Areas covered: A literature search was conducted following the quality assessment of narrative review criteria. Results focused on tasks of detection, discrimination, sensory illusion, temporal judgment, and sensory conflict. Studies were not consistent regarding the enhancement of MI with age, but most showed that older adults have an expanded temporal window of integration. In older adults with mild cognitive impairment or major neurocognitive disorder, the magnitude of visual-somatosensory integration mediated the relationship between neurocognitive impairment and spatial aspects of gait. Expert opinion: Recently, some concerns have been raised about how to maximize the ecological validity of neuropsychological assessment. Since most activities of daily living are multisensory and older adults benefit from multisensory information, MI assessment has the potential to improve the ecological validity of the neuropsychological assessment.
Collapse
Affiliation(s)
- Joana O Pinto
- Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal.,Human and Social Sciences Department, School of Health, Polytechnic Institute of Porto, Porto, Portugal.,CESPU, University Institute of Health Sciences, Gandra, Portugal
| | - Bruno B Vieira De Melo
- Psychosocial Rehabilitation Laboratory, Center for Rehabilitation Research, School of Health of the Polytechnic of Porto, Porto, Portugal
| | - Artemisa R Dores
- Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal.,Human and Social Sciences Department, School of Health, Polytechnic Institute of Porto, Porto, Portugal.,Psychosocial Rehabilitation Laboratory, Center for Rehabilitation Research, School of Health of the Polytechnic of Porto, Porto, Portugal
| | - Bruno Peixoto
- CESPU, University Institute of Health Sciences, Gandra, Portugal.,NeuroGen - Center for Health Technology and Services Research (CINTESIS), Porto, Portugal
| | - Andreia Geraldo
- Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal
| | - Fernando Barbosa
- Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal
| |
Collapse
|
22
|
Abstract
Visual speech cues play an important role in speech recognition, and the McGurk effect is a classic demonstration of this. In the original McGurk & Macdonald (Nature 264, 746-748, 1976) experiment, 98% of participants reported an illusory "fusion" percept of /d/ when listening to the spoken syllable /b/ and watching the visual speech movements for /g/. However, more recent work shows that subject and task differences influence the proportion of fusion responses. In the current study, we varied task (forced-choice vs. open-ended), stimulus set (including /d/ exemplars vs. not), and data collection environment (lab vs. Mechanical Turk) to investigate the robustness of the McGurk effect. Across experiments using the same stimuli to elicit the McGurk effect, we found fusion responses ranging from 10% to 60%, showing large variability in the likelihood of experiencing the McGurk effect across factors that are unrelated to the perceptual information provided by the stimuli. Rather than a robust perceptual illusion, we therefore argue that the McGurk effect exists only for some individuals under specific task conditions. Significance: This series of studies re-evaluates the classic McGurk effect, which demonstrates the influence of visual cues on speech perception. We highlight the importance of taking into account subject variables and task differences, and challenge future researchers to think carefully about the perceptual basis of the McGurk effect, how it is defined, and what it can tell us about audiovisual integration in speech.
Collapse
|
23
|
Jones SA, Noppeney U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex 2021; 138:1-23. [PMID: 33676086 DOI: 10.1016/j.cortex.2021.02.001] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Revised: 01/23/2021] [Accepted: 02/02/2021] [Indexed: 11/29/2022]
Abstract
The processing of multisensory signals is crucial for effective interaction with the environment, but our ability to perform this vital function changes as we age. In the first part of this review, we summarise existing research into the effects of healthy ageing on multisensory integration. We note that age differences vary substantially with the paradigms and stimuli used: older adults often receive at least as much benefit (to both accuracy and response times) as younger controls from congruent multisensory stimuli, but are also consistently more negatively impacted by the presence of intersensory conflict. In the second part, we outline a normative Bayesian framework that provides a principled and computationally informed perspective on the key ingredients involved in multisensory perception, and how these are affected by ageing. Applying this framework to the existing literature, we conclude that changes to sensory reliability, prior expectations (together with attentional control), and decisional strategies all contribute to the age differences observed. However, we find no compelling evidence of any age-related changes to the basic inference mechanisms involved in multisensory perception.
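The normative Bayesian account the review outlines reduces, in its simplest maximum-likelihood form, to reliability-weighted averaging. A minimal sketch of that standard formula, with invented sigma values (not estimates from the review):
```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue combination:
# each cue is weighted by its inverse variance, and the fused estimate is
# less variable than either cue alone. Sigma values are made-up examples.
import numpy as np

def fuse(mu_a, sigma_a, mu_v, sigma_v):
    """Reliability-weighted combination of an auditory and a visual estimate."""
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    mu_av = w_a * mu_a + (1 - w_a) * mu_v
    sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return mu_av, sigma_av

# Example: ageing modelled as reduced auditory reliability (larger sigma_a).
print(fuse(mu_a=10.0, sigma_a=2.0, mu_v=12.0, sigma_v=1.0))  # (11.6, ~0.89)
```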
Collapse
Affiliation(s)
- Samuel A Jones
- The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK.
| | - Uta Noppeney
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands.
| |
Collapse
|
24
|
Effects of stimulus intensity on audiovisual integration in aging across the temporal dynamics of processing. Int J Psychophysiol 2021; 162:95-103. [PMID: 33529642 DOI: 10.1016/j.ijpsycho.2021.01.017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2020] [Revised: 10/26/2020] [Accepted: 01/24/2021] [Indexed: 11/24/2022]
Abstract
Previous studies have drawn different conclusions about whether older adults benefit more from audiovisual integration, and such conflicts may be due to the stimulus features investigated in those studies, such as stimulus intensity. In the current study, using ERPs, we compared the effects of stimulus intensity on audiovisual integration between young and older adults. The results showed that inverse effectiveness, the phenomenon whereby lowering the effectiveness of sensory stimuli increases the benefit of multisensory integration, was observed in young adults at earlier processing stages but was absent in older adults. Moreover, at the earlier processing stages (60-90 ms and 110-140 ms), older adults exhibited significantly greater audiovisual integration than young adults (all ps < 0.05). However, at the later processing stages (220-250 ms and 340-370 ms), young adults exhibited significantly greater audiovisual integration than older adults (all ps < 0.001). The results suggest an age-related dissociation between early and late integration, indicating that different audiovisual processing mechanisms are in play in older and young adults.
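Inverse effectiveness, as defined above, can be checked by computing multisensory gain relative to the best unisensory response at each intensity level. A hedged sketch with illustrative numbers only:
```python
# Sketch of an inverse-effectiveness check: multisensory gain, relative to the
# best unisensory response, should grow as stimulus intensity falls.
# Intensity levels and accuracies are illustrative assumptions only.
import numpy as np

intensity = np.array(["low", "mid", "high"])
acc_a = np.array([0.40, 0.60, 0.80])   # auditory-only accuracy
acc_v = np.array([0.35, 0.55, 0.78])   # visual-only accuracy
acc_av = np.array([0.60, 0.75, 0.88])  # audiovisual accuracy

best_uni = np.maximum(acc_a, acc_v)
gain = (acc_av - best_uni) / best_uni  # relative multisensory enhancement
for lvl, g in zip(intensity, gain):
    print(f"{lvl}: gain = {g:.2f}")    # gain shrinks as intensity rises
```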
Collapse
|
25
|
Block HJ, Sexton BM. Visuo-Proprioceptive Control of the Hand in Older Adults. Multisens Res 2020; 34:93-111. [PMID: 33706277 DOI: 10.1163/22134808-bja10032] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Accepted: 06/25/2020] [Indexed: 11/19/2022]
Abstract
To control hand movement, we have both vision and proprioception, or position sense. The brain is known to integrate these to reduce variance. Here we ask whether older adults integrate vision and proprioception in a way that minimizes variance as young adults do, and whether older subjects compensate for an imposed visuo-proprioceptive mismatch as young adults do. Ten healthy older adults (mean age 69) and 10 healthy younger adults (mean age 19) participated. Subjects were asked to estimate the position of visual, proprioceptive, and combined targets, with no direct vision of either hand. After a veridical baseline block, a spatial visuo-proprioceptive misalignment was gradually imposed by shifting the visual component forward from the proprioceptive component without the subject's awareness. Older subjects were more variable than young subjects at estimating both visual and proprioceptive target positions. Older subjects tended to rely more heavily on vision than proprioception compared to younger subjects. However, the weighting of vision vs. proprioception was correlated with minimum variance predictions for both older and younger adults, suggesting that variance-minimizing mechanisms are present to some degree in older adults. Visual and proprioceptive realignment were similar for young and older subjects in the misalignment block, suggesting older subjects are able to realign as much as young subjects. These results suggest that intact multisensory processing in older adults should be explored as a potential means of mitigating degradation in individual sensory systems.
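The minimum-variance prediction tested here assigns vision a weight equal to the proprioceptive variance divided by the summed variances. A small sketch, with hypothetical variances and a hypothetical observed weight:
```python
# Sketch of the minimum-variance weighting prediction: the weight on vision
# equals proprioceptive variance over the summed variances. All numbers are
# hypothetical; 'observed_w_v' stands for a weight estimated from a subject's
# combined-target judgements, not a value from the study.

def predicted_visual_weight(var_vision, var_prop):
    return var_prop / (var_vision + var_prop)

# Older adults: more variable unisensory estimates, vision relatively favoured.
w_pred = predicted_visual_weight(var_vision=4.0, var_prop=9.0)
observed_w_v = 0.72  # hypothetical empirical weight
print(f"predicted w_v = {w_pred:.2f}, observed w_v = {observed_w_v:.2f}")
```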
Collapse
Affiliation(s)
- Hannah J Block
- Program in Neuroscience and Department of Kinesiology, Indiana University, Bloomington, IN, USA
| | - Brandon M Sexton
- Program in Neuroscience and Department of Kinesiology, Indiana University, Bloomington, IN, USA
| |
Collapse
|
26
|
Michaelis K, Erickson LC, Fama ME, Skipper-Kallal LM, Xing S, Lacey EH, Anbari Z, Norato G, Rauschecker JP, Turkeltaub PE. Effects of age and left hemisphere lesions on audiovisual integration of speech. BRAIN AND LANGUAGE 2020; 206:104812. [PMID: 32447050 PMCID: PMC7379161 DOI: 10.1016/j.bandl.2020.104812] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/21/2019] [Revised: 04/02/2020] [Accepted: 05/04/2020] [Indexed: 06/11/2023]
Abstract
Neuroimaging studies have implicated left temporal lobe regions in audiovisual integration of speech and inferior parietal regions in temporal binding of incoming signals. However, it remains unclear which regions are necessary for audiovisual integration, especially when the auditory and visual signals are offset in time. Aging also influences integration, but the nature of this influence is unresolved. We used a McGurk task to test audiovisual integration and sensitivity to the timing of audiovisual signals in two older adult groups: left hemisphere stroke survivors and controls. We observed a positive relationship between age and audiovisual speech integration in both groups, and an interaction indicating that lesions reduce sensitivity to timing offsets between signals. Lesion-symptom mapping demonstrated that damage to the left supramarginal gyrus and planum temporale reduces temporal acuity in audiovisual speech perception. This suggests that a process mediated by these structures identifies asynchronous audiovisual signals that should not be integrated.
Collapse
Affiliation(s)
- Kelly Michaelis
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
| | - Laura C Erickson
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Neuroscience Department, Georgetown University Medical Center, Washington DC, USA
| | - Mackenzie E Fama
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Department of Speech-Language Pathology & Audiology, Towson University, Towson, MD, USA
| | - Laura M Skipper-Kallal
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
| | - Shihui Xing
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Department of Neurology, First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
| | - Elizabeth H Lacey
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington DC, USA
| | - Zainab Anbari
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
| | - Gina Norato
- Clinical Trials Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
| | - Josef P Rauschecker
- Neuroscience Department, Georgetown University Medical Center, Washington DC, USA
| | - Peter E Turkeltaub
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington DC, USA.
| |
Collapse
|
27
|
Randazzo M, Priefer R, Smith PJ, Nagler A, Avery T, Froud K. Neural Correlates of Modality-Sensitive Deviance Detection in the Audiovisual Oddball Paradigm. Brain Sci 2020; 10:brainsci10060328. [PMID: 32481538 PMCID: PMC7348766 DOI: 10.3390/brainsci10060328] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2020] [Revised: 05/15/2020] [Accepted: 05/25/2020] [Indexed: 11/16/2022] Open
Abstract
The McGurk effect, in which an incongruent pairing of visual /ga/ and acoustic /ba/ creates the fusion illusion /da/, is the cornerstone of research in audiovisual speech perception. Combination illusions occur when the input modalities are reversed: auditory /ga/ and visual /ba/ yield the percept /bga/. A robust literature shows that fusion illusions in an oddball paradigm evoke a mismatch negativity (MMN) in the auditory cortex in the absence of changes to the acoustic stimuli. We compared fusion and combination illusions in a passive oddball paradigm to further examine the influence of visual and auditory aspects of incongruent speech stimuli on the audiovisual MMN. Participants viewed videos under two audiovisual illusion conditions, fusion with the visual aspect of the stimulus changing and combination with the auditory aspect of the stimulus changing, as well as two unimodal auditory-only and visual-only conditions. Fusion and combination deviants exerted similar influence in generating congruency predictions, with significant differences between standards and deviants in the N100 time window. The presence of the MMN in early and late time windows differentiated fusion from combination deviants. When the visual signal changes, a new percept is created, but when the visual signal is held constant and the auditory signal changes, the response is suppressed, evoking a later MMN. In line with models of predictive processing in audiovisual speech perception, we interpreted our results to indicate that visual information can both predict and suppress auditory speech perception.
Collapse
Affiliation(s)
- Melissa Randazzo
- Department of Communication Sciences and Disorders, Adelphi University, Garden City, NY 11530, USA; (R.P.); (A.N.)
- Correspondence: Tel.: +1-516-877-4769
| | - Ryan Priefer
- Department of Communication Sciences and Disorders, Adelphi University, Garden City, NY 11530, USA; (R.P.); (A.N.)
| | - Paul J. Smith
- Neuroscience and Education, Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, NY 10027, USA; (P.J.S.); (T.A.); (K.F.)
| | - Amanda Nagler
- Department of Communication Sciences and Disorders, Adelphi University, Garden City, NY 11530, USA; (R.P.); (A.N.)
| | - Trey Avery
- Neuroscience and Education, Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, NY 10027, USA; (P.J.S.); (T.A.); (K.F.)
| | - Karen Froud
- Neuroscience and Education, Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, NY 10027, USA; (P.J.S.); (T.A.); (K.F.)
| |
Collapse
|
28
|
Higgen FL, Heine C, Krawinkel L, Göschl F, Engel AK, Hummel FC, Xue G, Gerloff C. Crossmodal Congruency Enhances Performance of Healthy Older Adults in Visual-Tactile Pattern Matching. Front Aging Neurosci 2020; 12:74. [PMID: 32256341 PMCID: PMC7090137 DOI: 10.3389/fnagi.2020.00074] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Accepted: 03/02/2020] [Indexed: 11/23/2022] Open
Abstract
One of the pivotal challenges of aging is to maintain independence in the activities of daily life. In order to adapt to changes in the environment, it is crucial to continuously process and accurately combine simultaneous input from different sensory systems, i.e., crossmodal or multisensory integration. With aging, performance decreases in multiple domains, affecting bottom-up sensory processing as well as top-down control. However, whether this decline leads to impairments in crossmodal interactions remains an unresolved question. While some researchers propose that crossmodal interactions degrade with age, others suggest that they are conserved or even gain compensatory importance. To address this question, we compared the behavioral performance of older and young participants in a well-established crossmodal matching task, requiring the evaluation of congruency in simultaneously presented visual and tactile patterns. Older participants performed significantly worse than young controls in the crossmodal task when being stimulated at their individual unimodal visual and tactile perception thresholds. Performance increased with adjustment of stimulus intensities. This improvement was driven by better detection of congruent stimulus pairs, while the detection of incongruent pairs was not significantly enhanced. These results indicate that age-related impairments lead to poor performance in complex crossmodal scenarios and demanding cognitive tasks. Crossmodal congruency effects attenuate the difficulties of older adults in visuotactile pattern matching and might be an important factor to drive the benefits of older adults demonstrated in various crossmodal integration scenarios. Congruency effects might, therefore, be used to develop strategies for cognitive training and neurological rehabilitation.
Collapse
Affiliation(s)
- Focko L Higgen
- Department of Neurology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| | - Charlotte Heine
- Department of Neurology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| | - Lutz Krawinkel
- Department of Neurology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| | - Florian Göschl
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| | - Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| | - Friedhelm C Hummel
- Defitech Chair of Clinical Neuroengineering, Brain Mind Institute and Center for Neuroprosthetics, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland.,Defitech Chair of Clinical Neuroengineering, Brain Mind Institute and Center for Neuroprosthetics, Swiss Federal Institute of Technology Valais (EPFL Valais), Clinique Romande de Réadaptation, Sion, Switzerland.,Clinical Neuroscience, Medical School University of Geneva, Geneva, Switzerland
| | - Gui Xue
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
| | - Christian Gerloff
- Department of Neurology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| |
Collapse
|
29
|
Glick HA, Sharma A. Cortical Neuroplasticity and Cognitive Function in Early-Stage, Mild-Moderate Hearing Loss: Evidence of Neurocognitive Benefit From Hearing Aid Use. Front Neurosci 2020; 14:93. [PMID: 32132893 PMCID: PMC7040174 DOI: 10.3389/fnins.2020.00093] [Citation(s) in RCA: 56] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2019] [Accepted: 01/23/2020] [Indexed: 12/26/2022] Open
Abstract
Age-related hearing loss (ARHL) is associated with cognitive decline as well as structural and functional brain changes. However, the mechanisms underlying neurocognitive deficits in ARHL are poorly understood and it is unclear whether clinical treatment with hearing aids may modify neurocognitive outcomes. To address these topics, cortical visual evoked potentials (CVEPs), cognitive function, and speech perception abilities were measured in 28 adults with untreated, mild-moderate ARHL and 13 age-matched normal hearing (NH) controls. The group of adults with ARHL were then fit with bilateral hearing aids and re-evaluated after 6 months of amplification use. At baseline, the ARHL group exhibited more extensive recruitment of auditory, frontal, and pre-frontal cortices during a visual motion processing task, providing evidence of cross-modal re-organization and compensatory cortical neuroplasticity. Further, more extensive cross-modal recruitment of the right auditory cortex was associated with greater degree of hearing loss, poorer speech perception in noise, and worse cognitive function. Following clinical treatment with hearing aids, a reversal in cross-modal re-organization of auditory cortex by vision was observed in the ARHL group, coinciding with gains in speech perception and cognitive performance. Thus, beyond the known benefits of hearing aid use on communication, outcomes from this study provide evidence that clinical intervention with well-fit amplification may promote more typical cortical organization and functioning and provide cognitive benefit.
Collapse
Affiliation(s)
| | - Anu Sharma
- Brain and Behavior Laboratory, Department of Speech, Language, and Hearing Science, Center for Neuroscience, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, United States
| |
Collapse
|
30
|
Couth S, Poole D, Gowen E, Champion RA, Warren PA, Poliakoff E. The Effect of Ageing on Optimal Integration of Conflicting and Non-Conflicting Visual–Haptic Stimuli. Multisens Res 2019; 32:771-796. [DOI: 10.1163/22134808-20191409] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2019] [Accepted: 06/06/2019] [Indexed: 11/19/2022]
Abstract
Multisensory integration typically follows the predictions of a statistically optimal model whereby the contribution of each sensory modality is weighted according to its reliability. Previous research has shown that multisensory integration is affected by ageing; however, it is less certain whether older adults follow this statistically optimal model. Additionally, previous studies often present multisensory cues that conflict in size, shape, or location, yet naturally occurring multisensory cues are usually non-conflicting. Therefore, the mechanisms of integration in older adults might differ depending on whether the multisensory cues are consistent or conflicting. In the current experiment, young and older adults were asked to make judgements regarding the height of wooden blocks using visual, haptic, or combined visual-haptic information. Dual-modality visual-haptic blocks could be presented as equal or conflicting in size. Young and older adults' size discrimination thresholds (i.e., precision) were not significantly different for visual, haptic, or visual-haptic cues. In addition, both young and older adults' discrimination thresholds and points of subjective equality did not follow model predictions of optimal integration, for both conflicting and non-conflicting cues. Instead, there was considerable between-subject variability in how visual and haptic cues were processed when presented simultaneously. This finding has implications for the development of multisensory therapeutic aids and interventions to assist older adults with everyday activities, which should be tailored to the needs of each individual.
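The optimal-integration benchmark used for these thresholds is the standard prediction that the bimodal threshold falls below either unisensory threshold. A sketch under assumed (invented) visual and haptic thresholds:
```python
# Sketch of the optimal-integration prediction against which the thresholds
# above were tested: the dual-modality discrimination threshold should follow
# sigma_vh = sqrt(sigma_v^2 * sigma_h^2 / (sigma_v^2 + sigma_h^2)).
# Threshold values are invented for illustration.
import math

def predicted_bimodal_threshold(sigma_v, sigma_h):
    return math.sqrt((sigma_v**2 * sigma_h**2) / (sigma_v**2 + sigma_h**2))

sigma_v, sigma_h = 3.0, 4.0  # visual and haptic size thresholds (mm), assumed
print(predicted_bimodal_threshold(sigma_v, sigma_h))  # 2.4, below either cue alone
```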
Collapse
Affiliation(s)
- Samuel Couth
- 1Division of Human Communication, Development and Hearing, Faculty of Biology, Medicine and Health, University of Manchester, UK
| | - Daniel Poole
- 2Division of Neuroscience and Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, UK
| | - Emma Gowen
- 2Division of Neuroscience and Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, UK
| | - Rebecca A. Champion
- 2Division of Neuroscience and Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, UK
| | - Paul A. Warren
- 2Division of Neuroscience and Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, UK
| | - Ellen Poliakoff
- 2Division of Neuroscience and Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, UK
| |
Collapse
|
31
|
Stawicki M, Majdak P, Başkent D. Ventriloquist Illusion Produced With Virtual Acoustic Spatial Cues and Asynchronous Audiovisual Stimuli in Both Young and Older Individuals. Multisens Res 2019; 32:745-770. [DOI: 10.1163/22134808-20191430] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2019] [Accepted: 09/03/2019] [Indexed: 11/19/2022]
Abstract
The ventriloquist illusion, the change in the perceived location of an auditory stimulus when a synchronously presented but spatially discordant visual stimulus is added, has previously been shown in young healthy populations to be a robust paradigm that mainly relies on automatic processes. Here, we propose the ventriloquist illusion as a potential simple test to assess audiovisual (AV) integration in young and older individuals. We used a modified version of the illusion paradigm that was adaptive, nearly bias-free, relied on binaural stimulus representation using generic head-related transfer functions (HRTFs) instead of multiple loudspeakers, and was tested with synchronous and asynchronous presentation of AV stimuli (both tone and speech). The minimum audible angle (MAA), the smallest perceptible difference in angle between two sound sources, was compared with or without the visual stimuli in young and older adults with no or minimal sensory deficits. The illusion effect, measured by means of MAAs implemented with HRTFs, was observed with both synchronous and asynchronous visual stimuli, but only with the tone stimulus and not the speech stimulus. The patterns were similar between young and older individuals, indicating the versatility of the modified ventriloquist illusion paradigm.
Collapse
Affiliation(s)
- Marnix Stawicki
- 1Department of Otorhinolaryngology / Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- 2Graduate School of Medical Sciences, Research School of Behavioral and Cognitive Neurosciences (BCN), University of Groningen, Groningen, The Netherlands
| | - Piotr Majdak
- 3Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
| | - Deniz Başkent
- 1Department of Otorhinolaryngology / Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- 2Graduate School of Medical Sciences, Research School of Behavioral and Cognitive Neurosciences (BCN), University of Groningen, Groningen, The Netherlands
| |
Collapse
|
32
|
Brown VA, Hedayati M, Zanger A, Mayn S, Ray L, Dillman-Hasso N, Strand JF. What accounts for individual differences in susceptibility to the McGurk effect? PLoS One 2018; 13:e0207160. [PMID: 30418995 PMCID: PMC6231656 DOI: 10.1371/journal.pone.0207160] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2018] [Accepted: 10/25/2018] [Indexed: 11/29/2022] Open
Abstract
The McGurk effect is a classic audiovisual speech illusion in which discrepant auditory and visual syllables can lead to a fused percept (e.g., an auditory /bɑ/ paired with a visual /gɑ/ often leads to the perception of /dɑ/). The McGurk effect is robust and easily replicated in pooled group data, but there is tremendous variability in the extent to which individual participants are susceptible to it. In some studies, the rate at which individuals report fusion responses ranges from 0% to 100%. Despite its widespread use in the audiovisual speech perception literature, the roots of the wide variability in McGurk susceptibility are largely unknown. This study evaluated whether several perceptual and cognitive traits are related to McGurk susceptibility through correlational analyses and mixed effects modeling. We found that an individual's susceptibility to the McGurk effect was related to their ability to extract place of articulation information from the visual signal (i.e., a more fine-grained analysis of lipreading ability), but not to scores on tasks measuring attentional control, processing speed, working memory capacity, or auditory perceptual gradiency. These results provide support for the claim that a small amount of the variability in susceptibility to the McGurk effect is attributable to lipreading skill. In contrast, cognitive and perceptual abilities that are commonly used predictors in individual differences studies do not appear to underlie susceptibility to the McGurk effect.
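The individual-differences analysis reported here can be pictured as relating per-participant fusion rates to lipreading scores. The sketch below uses synthetic data and a plain correlation as a stand-in for the study's correlational and mixed-effects analyses:
```python
# Sketch: per-participant McGurk fusion rates related to lipreading
# (place-of-articulation) scores with a simple correlation. Data are
# synthetic stand-ins, not the study's measurements.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
lipreading = rng.uniform(0, 1, 40)  # visual-only accuracy per participant
fusion_rate = np.clip(0.2 + 0.3 * lipreading + rng.normal(0, 0.15, 40), 0, 1)

r, p = pearsonr(lipreading, fusion_rate)
print(f"r = {r:.2f}, p = {p:.4f}")  # a modest positive association
```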
Collapse
Affiliation(s)
- Violet A. Brown
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America
| | - Maryam Hedayati
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America
| | - Annie Zanger
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America
| | - Sasha Mayn
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America
| | - Lucia Ray
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America
| | - Naseem Dillman-Hasso
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America
| | - Julia F. Strand
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America
| |
Collapse
|
33
|
Brooks CJ, Chan YM, Anderson AJ, McKendrick AM. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss. Front Hum Neurosci 2018; 12:192. [PMID: 29867415 PMCID: PMC5954093 DOI: 10.3389/fnhum.2018.00192] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2017] [Accepted: 04/20/2018] [Indexed: 11/26/2022] Open
Abstract
Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information.
Collapse
Affiliation(s)
- Cassandra J Brooks
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
| | - Yu Man Chan
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
| | - Andrew J Anderson
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
| | - Allison M McKendrick
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
| |
Collapse
|
34
|
Validating a Method to Assess Lipreading, Audiovisual Gain, and Integration During Speech Reception With Cochlear-Implanted and Normal-Hearing Subjects Using a Talking Head. Ear Hear 2018; 39:503-516. [DOI: 10.1097/aud.0000000000000502] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
35
|
Dumas K, Holtzer R, Mahoney JR. Visual-Somatosensory Integration in Older Adults: Links to Sensory Functioning. Multisens Res 2018; 29:397-420. [PMID: 29384609 DOI: 10.1163/22134808-00002521] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
Research investigating multisensory integration (MSI) processes in aging is scarce, but converging evidence for larger behavioral MSI effects in older compared to younger adults exists. The current study employed a three-prong approach to determine whether inherent age-related sensory processing declines were associated with larger (i.e., worse) visual-somatosensory (VS) reaction time (RT) facilitation effects. Non-demented older adults (n = 156; mean age = 77 years; 55% female) without any medical or psychiatric conditions were included. Participants were instructed to make speeded foot-pedal responses as soon as they detected visual, somatosensory, or VS stimulation. Visual acuity was assessed using the Snellen test, while somatosensory sensitivity was determined using vibration thresholds. The aims of the current study were to: (1) replicate a reliable MSI effect; (2) investigate the effect of unisensory functioning on VS RT facilitation; and (3) determine whether sensory functioning combination groups manifested differential MSI effects. Results revealed a significant VS RT facilitation effect that was influenced by somatosensory sensitivity but not visual acuity. That is, older adults with poor somatosensory sensitivity demonstrated significantly larger MSI effects than those with intact somatosensory sensitivity. Additionally, a significant interaction between stimulus condition and sensory functioning group suggested that the group with poor visual acuity and poor somatosensory functioning demonstrated the largest MSI effect compared to the other groups. In summary, the current study reveals that worse somatosensory functioning is associated with larger MSI effects in older adults. To our knowledge, this is the first study to identify potential mechanisms behind increased RT facilitation in aging.
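A simple way to picture the VS RT facilitation effect above is as the relative speed-up of bimodal responses over the faster unisensory response. A sketch with placeholder RTs (not the study's data):
```python
# Sketch of a simple multisensory RT facilitation score: how much faster the
# visual-somatosensory (VS) response is than the faster of the two unisensory
# responses. RTs below are synthetic placeholders for three subjects.
import numpy as np

rt_v = np.array([420, 450, 480])   # visual mean RTs (ms)
rt_s = np.array([400, 470, 460])   # somatosensory mean RTs
rt_vs = np.array([360, 420, 400])  # bimodal mean RTs

facilitation = (np.minimum(rt_v, rt_s) - rt_vs) / np.minimum(rt_v, rt_s)
print(facilitation)  # larger values = greater (here, 'worse') VS facilitation
```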
Collapse
|
36
|
Heikkilä J, Fagerlund P, Tiippana K. Semantically Congruent Visual Information Can Improve Auditory Recognition Memory in Older Adults. Multisens Res 2018; 31:213-225. [DOI: 10.1163/22134808-00002602] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2017] [Accepted: 07/31/2017] [Indexed: 11/19/2022]
Abstract
In the course of normal aging, memory functions show signs of impairment. Studies of memory in the elderly have previously focused on a single sensory modality, although multisensory encoding has been shown to improve memory performance in children and young adults. In this study, we investigated how audiovisual encoding affects auditory recognition memory in older (mean age 71 years) and younger (mean age 23 years) adults. Participants memorized auditory stimuli (sounds, spoken words) presented either alone or with semantically congruent visual stimuli (pictures, text) during encoding. Subsequent recognition memory performance of auditory stimuli was better for stimuli initially presented together with visual stimuli than for auditory stimuli presented alone during encoding. This facilitation was observed both in older and younger participants, while the overall memory performance was poorer in older participants. However, the pattern of facilitation was influenced by age. When encoding spoken words, the gain was greater for older adults. When encoding sounds, the gain was greater for younger adults. These findings show that semantically congruent audiovisual encoding improves memory performance in late adulthood, particularly for auditory verbal material.
Collapse
Affiliation(s)
- Jenni Heikkilä
- Department of Psychology and Logopedics, Faculty of Medicine, P.O. Box 9, 00014 University of Helsinki, Finland
| | - Petra Fagerlund
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Helsinki, Finland
| | - Kaisa Tiippana
- Department of Psychology and Logopedics, Faculty of Medicine, P.O. Box 9, 00014 University of Helsinki, Finland
| |
Collapse
|
37
|
Couth S, Gowen E, Poliakoff E. Using Race Model Violation to Explore Multisensory Responses in Older Adults: Enhanced Multisensory Integration or Slower Unisensory Processing? Multisens Res 2018; 31:151-174. [DOI: 10.1163/22134808-00002588] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Accepted: 06/14/2017] [Indexed: 11/19/2022]
Abstract
Older adults exhibit greater multisensory reaction time (RT) facilitation than young adults. Since older adults exhibit greater violation of the race model (i.e., cumulative distribution functions for multisensory RTs are greater than those of the summed unisensory RTs), this has been attributed to enhanced multisensory integration. Here we explored whether (a) individual differences in RT distributions within each age group might drive this effect, and (b) the race model is more likely to be violated if unisensory RTs are slower. Young and older adults made speeded responses to visual, auditory, or tactile stimuli, or any combination of these (bi-/tri-modal). The test of the race model suggested greater audiovisual integration for older adults, but only when accounting for individual differences in RT distributions. Moreover, correlations in both age groups showed that slower unisensory RTs were associated with a greater degree of race model violation. Therefore, greater race model violation may be due to greater ‘room for improvement’ from unisensory responses in older adults compared to young adults, and thus could falsely give the impression of enhanced multisensory integration.
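The race-model test referenced above (Miller's inequality) compares the multisensory RT distribution against the bound given by the summed unisensory distributions. A minimal sketch on synthetic RTs:
```python
# Sketch of a race-model (Miller) inequality check: at each time point, the
# multisensory CDF is compared with the sum of the unisensory CDFs (capped at
# 1); positive differences indicate violation. RT samples are synthetic.
import numpy as np

rng = np.random.default_rng(2)
rt_a = rng.normal(450, 60, 200)    # auditory-only RTs (ms)
rt_v = rng.normal(470, 60, 200)    # visual-only RTs
rt_av = rng.normal(380, 50, 200)   # audiovisual RTs

t = np.linspace(200, 700, 51)      # evaluation time points

def ecdf(x):
    """Empirical CDF of sample x evaluated at the time points t."""
    return np.searchsorted(np.sort(x), t) / x.size

violation = ecdf(rt_av) - np.minimum(ecdf(rt_a) + ecdf(rt_v), 1.0)
print(f"max violation = {violation.max():.3f}")  # > 0 suggests violation
```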
Collapse
Affiliation(s)
- Samuel Couth
- Faculty of Biology, Medicine and Health, The University of Manchester, Oxford Road, Manchester M13 9PL, UK
| | - Emma Gowen
- Faculty of Biology, Medicine and Health, The University of Manchester, Oxford Road, Manchester M13 9PL, UK
| | - Ellen Poliakoff
- Faculty of Biology, Medicine and Health, The University of Manchester, Oxford Road, Manchester M13 9PL, UK
| |
Collapse
|
38
|
Alsius A, Paré M, Munhall KG. Forty Years After Hearing Lips and Seeing Voices: the McGurk Effect Revisited. Multisens Res 2018; 31:111-144. [PMID: 31264597 DOI: 10.1163/22134808-00002565] [Citation(s) in RCA: 52] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2016] [Accepted: 03/09/2017] [Indexed: 11/19/2022]
Abstract
Since its discovery 40 years ago, the McGurk illusion has usually been cited as a prototypical case of multisensory binding in humans, and it has been extensively used in speech perception studies as a proxy measure for audiovisual integration mechanisms. Despite the well-established practice of using the McGurk illusion as a tool for studying the mechanisms underlying audiovisual speech integration, the magnitude of the illusion varies enormously across studies. Furthermore, the processing of McGurk stimuli differs from congruent audiovisual processing at both the phenomenological and neural levels. This calls into question the suitability of the illusion as a tool to quantify the necessary and sufficient conditions under which audiovisual integration occurs in natural conditions. In this paper, we review some of the practical and theoretical issues related to the use of the McGurk illusion as an experimental paradigm. We believe that, without a richer understanding of the mechanisms involved in the processing of the McGurk effect, experimenters should be cautious when generalizing data generated by McGurk stimuli to matching audiovisual speech events.
Collapse
Affiliation(s)
- Agnès Alsius
- Psychology Department, Queen's University, Humphrey Hall, 62 Arch St., Kingston, Ontario, K7L 3N6 Canada
| | - Martin Paré
- Psychology Department, Queen's University, Humphrey Hall, 62 Arch St., Kingston, Ontario, K7L 3N6 Canada
| | - Kevin G Munhall
- Psychology Department, Queen's University, Humphrey Hall, 62 Arch St., Kingston, Ontario, K7L 3N6 Canada
| |
Collapse
|
39
|
Campos JL, El-Khechen Richandi G, Taati B, Keshavarz B. The Rubber Hand Illusion in Healthy Younger and Older Adults. Multisens Res 2018; 31:537-555. [PMID: 31264613 DOI: 10.1163/22134808-00002614] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2017] [Accepted: 09/15/2017] [Indexed: 11/19/2022]
Abstract
Percepts about our body's position in space and about body ownership are informed by multisensory feedback from visual, proprioceptive, and tactile inputs. The Rubber Hand Illusion (RHI) is a multisensory illusion that is induced when an observer sees a rubber hand being stroked while they feel their own, spatially displaced, and obstructed hand being stroked. When temporally synchronous, the visual-tactile interactions can create the illusion that the rubber hand belongs to the observer and that the observer's real hand is shifted in position towards the rubber hand. Importantly, little is understood about whether these multisensory perceptions of the body change with older age. Thus, in this study we implemented a classic RHI protocol (synchronous versus asynchronous stroking) with healthy younger (18-35) and older (65+) adults and measured the magnitude of proprioceptive drift and the subjective experience of body ownership. As an adjunctive objective measure, skin temperature was recorded to evaluate whether decreases in skin temperature were associated with illusory percepts, as has been shown previously. The RHI was observed for both age groups with respect to increased drift and higher ratings of ownership following synchronous compared to asynchronous stroking. Importantly, no effects of age and no interactions between age and condition were observed for either of these outcome measures. No effects were observed for skin temperature. Overall, these results contribute to an emerging field of research investigating the conditions under which age-related differences in multisensory integration are observed by providing insights into the role of visual, proprioceptive, and tactile inputs on bodily percepts.
Collapse
Affiliation(s)
- Jennifer L Campos
- University of Toronto, Psychology, Toronto, ON, Canada.,Toronto Rehabilitation Institute, University Health Network, 550 University Ave., Toronto, ON, Canada
| | - Graziella El-Khechen Richandi
- Toronto Rehabilitation Institute, University Health Network, 550 University Ave., Toronto, ON, Canada.,University of Toronto, Rehabilitation Science Institute, Toronto, ON, Canada
| | - Babak Taati
- Toronto Rehabilitation Institute, University Health Network, 550 University Ave., Toronto, ON, Canada.,University of Toronto, Department of Computer Science, Toronto, ON, Canada.,University of Toronto, Institute for Biomaterials and Biomedical Engineering, Toronto, ON, Canada
| | - Behrang Keshavarz
- Toronto Rehabilitation Institute, University Health Network, 550 University Ave., Toronto, ON, Canada.,Ryerson University, Department of Psychology, Toronto, ON, Canada
| |
Collapse
|
40
|
Zou Z, Chau BKH, Ting KH, Chan CCH. Aging Effect on Audiovisual Integrative Processing in Spatial Discrimination Task. Front Aging Neurosci 2017; 9:374. [PMID: 29184494 PMCID: PMC5694625 DOI: 10.3389/fnagi.2017.00374] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2017] [Accepted: 11/01/2017] [Indexed: 11/13/2022] Open
Abstract
Multisensory integration is an essential process that people employ daily, from conversing in social gatherings to navigating the nearby environment. The aim of this study was to investigate the impact of aging on multisensory integrative processes using event-related potentials (ERPs); the validity of the study was improved by including “noise” in the contrast conditions. Older and younger participants perceived visual and/or auditory stimuli that contained spatial information. The participants responded by indicating the spatial direction (far vs. near and left vs. right) conveyed in the stimuli using different wrist movements. Electroencephalograms (EEGs) were captured on each task trial, along with the accuracy and reaction time of the participants’ motor responses. Older participants showed a greater extent of behavioral improvement in the multisensory (as opposed to unisensory) condition compared to their younger counterparts. Older participants were found to have a fronto-centrally distributed super-additive P2, which was not the case for the younger participants. The P2 amplitude difference between the multisensory condition and the sum of the unisensory conditions correlated significantly with performance on spatial discrimination. The results indicated that aging modulates the integrative process at the perceptual and feedback stages, particularly the evaluation of auditory stimuli. Audiovisual (AV) integration may also serve a functional role during spatial-discrimination processes, compensating for the compromised attention function caused by aging.
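The super-additivity test implied above contrasts the AV ERP with the sum of the unisensory ERPs within a time window such as the P2 range. A sketch on synthetic waveforms (sampling rate and window are assumptions):
```python
# Sketch of the additive-model comparison: the audiovisual ERP is contrasted
# with the sum of the unisensory ERPs within a P2-like window. Waveforms and
# parameters here are synthetic placeholders, not the study's data.
import numpy as np

fs = 500                                 # sampling rate (Hz), assumed
t = np.arange(-0.1, 0.5, 1 / fs)         # epoch from -100 to 500 ms
erp_a = np.sin(2 * np.pi * 4 * t) * 2    # fake auditory ERP (microvolts)
erp_v = np.sin(2 * np.pi * 3 * t) * 1.5  # fake visual ERP
erp_av = erp_a + erp_v + 0.8             # fake AV ERP with extra activity

win = (t >= 0.18) & (t <= 0.25)          # a P2-like window, 180-250 ms
superadditive = (erp_av - (erp_a + erp_v))[win].mean()
print(f"AV - (A + V) in P2 window = {superadditive:.2f} uV")
```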
Collapse
Affiliation(s)
- Zhi Zou
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong
| | - Bolton K H Chau
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong
| | - Kin-Hung Ting
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong
| | - Chetwyn C H Chan
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong
| |
Collapse
|
41
|
Ross LA, Del Bene VA, Molholm S, Woo YJ, Andrade GN, Abrahams BS, Foxe JJ. Common variation in the autism risk gene CNTNAP2, brain structural connectivity and multisensory speech integration. BRAIN AND LANGUAGE 2017; 174:50-60. [PMID: 28738218 DOI: 10.1016/j.bandl.2017.07.005] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/17/2017] [Revised: 04/07/2017] [Accepted: 07/11/2017] [Indexed: 06/07/2023]
Abstract
Three lines of evidence motivated this study. 1) CNTNAP2 variation is associated with autism risk and speech-language development. 2) CNTNAP2 variations are associated with differences in white matter (WM) tracts comprising the speech-language circuitry. 3) Children with autism show impairment in multisensory speech perception. Here, we asked whether an autism risk-associated CNTNAP2 single nucleotide polymorphism in neurotypical adults was associated with multisensory speech perception performance, and whether such a genotype-phenotype association was mediated through white matter tract integrity in speech-language circuitry. Risk genotype at rs7794745 was associated with decreased benefit from visual speech and lower fractional anisotropy (FA) in several WM tracts (right precentral gyrus, left anterior corona radiata, right retrolenticular internal capsule). These structural connectivity differences were found to mediate the effect of genotype on audiovisual speech perception, shedding light on possible pathogenic pathways in autism and biological sources of inter-individual variation in audiovisual speech processing in neurotypicals.
Collapse
Affiliation(s)
- Lars A Ross
- The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA.
| | - Victor A Del Bene
- The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA; Ferkauf Graduate School of Psychology Albert Einstein College of Medicine, Bronx, NY 10461, USA
| | - Sophie Molholm
- The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA; Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY 10461, USA
| | - Young Jae Woo
- Department of Genetics, Albert Einstein College of Medicine, Bronx, NY 10461, USA
| | - Gizely N Andrade
- The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA
| | - Brett S Abrahams
- Department of Genetics, Albert Einstein College of Medicine, Bronx, NY 10461, USA; Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY 10461, USA
| | - John J Foxe
- The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA; Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY 10461, USA; Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA.
| |
Collapse
|
42
|
Do age and linguistic background alter the audiovisual advantage when listening to speech in the presence of energetic and informational masking? Atten Percept Psychophys 2017; 80:242-261. [DOI: 10.3758/s13414-017-1423-5] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
43
Auditory and Audiovisual Close Shadowing in Post-Lingually Deaf Cochlear-Implanted Patients and Normal-Hearing Elderly Adults. Ear Hear 2017; 39:139-149. [PMID: 28753162] [DOI: 10.1097/aud.0000000000000474] [Citation(s) in RCA: 1]
Abstract
OBJECTIVES The goal of this study was to determine the effect of auditory deprivation and age-related speech decline on perceptuo-motor abilities during speech processing in post-lingually deaf cochlear-implanted participants and in normal-hearing elderly (NHE) participants. DESIGN A close-shadowing experiment was carried out on 10 cochlear-implanted patients and 10 NHE participants, with two groups of normal-hearing young participants as controls. Participants had to categorize auditory and audiovisual syllables as quickly as possible, either manually or orally. Reaction times and percentages of correct responses were compared across response modes, stimulus modalities, and syllables. RESULTS Responses of cochlear-implanted subjects were slower and less accurate overall than those of both young and elderly normal-hearing participants. Adding the visual modality enhanced performance for cochlear-implanted patients, whereas no significant effect was obtained for the NHE group. Critically, oral responses were faster than manual ones for all groups. In addition, for NHE participants, manual responses were more accurate than oral responses, as was the case for normal-hearing young participants presented with noisy speech stimuli. CONCLUSIONS Faster reaction times for oral than for manual responses in all groups suggest that perceptuo-motor relationships remained at least partially functional after cochlear implantation and stayed efficient in the NHE group. These results are in agreement with recent perceptuo-motor theories of speech perception and with the theoretical assumption that implicit motor knowledge and motor representations partly constrain auditory speech processing. In this framework, oral responses would be generated at an earlier stage of a sensorimotor loop, whereas manual responses would appear later, leading to slower but more accurate responses. The difference between oral and manual responses suggests that the perceptuo-motor loop remains effective for NHE subjects and for cochlear-implanted participants, despite their degraded overall performance.
44
Festa EK, Katz AP, Ott BR, Tremont G, Heindel WC. Dissociable Effects of Aging and Mild Cognitive Impairment on Bottom-Up Audiovisual Integration. J Alzheimers Dis 2017; 59:155-167. [DOI: 10.3233/jad-161062] [Citation(s) in RCA: 11]
Affiliation(s)
- Elena K. Festa: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Andrew P. Katz: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Brian R. Ott: Department of Neurology, Alpert Medical School, Brown University, Providence, RI, USA; Department of Neurology, Rhode Island Hospital, Providence, RI, USA
- Geoffrey Tremont: Department of Psychiatry and Human Behavior, Alpert Medical School, Brown University, Providence, RI, USA; Department of Psychiatry, Rhode Island Hospital, Providence, RI, USA
- William C. Heindel: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
45
Gordon-Salant S, Yeni-Komshian GH, Fitzgibbons PJ, Willison HM, Freund MS. Recognition of asynchronous auditory-visual speech by younger and older listeners: A preliminary study. J Acoust Soc Am 2017; 142:151. [PMID: 28764460] [PMCID: PMC5507703] [DOI: 10.1121/1.4992026] [Citation(s) in RCA: 7]
Abstract
This study examined the effects of age and hearing loss on recognition of speech presented when the auditory and visual speech information was misaligned in time (i.e., asynchronous). Prior research suggests that older listeners are less sensitive than younger listeners at detecting asynchronous speech in auditory-lead conditions, but recognition of speech in auditory-lead conditions had not yet been examined. Recognition performance was assessed for sentences and words presented in the auditory-visual modalities with varying degrees of auditory lead and lag. Detection of auditory-visual asynchrony for sentences was also assessed, to verify that listeners perceived these asynchronies. The listeners were younger and older normal-hearing adults and older hearing-impaired adults. Older listeners (regardless of hearing status) exhibited a significant decline in performance in auditory-lead conditions relative to visual-lead conditions, unlike younger listeners, whose recognition performance was relatively stable across asynchronies. Recognition performance was not correlated with asynchrony detection. However, one of the two cognitive measures assessed, processing speed, was identified in multiple regression analyses as contributing significantly to the variance in auditory-visual speech recognition scores. The findings indicate that, particularly in auditory-lead conditions, listener age has an impact on the ability to recognize asynchronous auditory-visual speech signals.
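As a concrete illustration of the kind of regression finding reported above, the Python sketch below estimates the contribution of processing speed to variance in auditory-visual recognition scores; the data are simulated and the variable names are assumptions for illustration, not the study's measures or analysis script.

```python
# Illustrative sketch (assumed, not the authors' analysis) of a multiple
# regression like the one reported above: does processing speed account for
# variance in auditory-visual (AV) speech recognition scores? Simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 60
age_group = rng.integers(0, 2, n).astype(float)            # 0 = younger, 1 = older
processing_speed = rng.normal(50, 10, n) - 8 * age_group   # older adults slower on average
av_recognition = 40 + 0.6 * processing_speed - 3 * age_group + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), age_group, processing_speed])
beta, *_ = np.linalg.lstsq(X, av_recognition, rcond=None)

residuals = av_recognition - X @ beta
r_squared = 1 - np.sum(residuals**2) / np.sum((av_recognition - av_recognition.mean())**2)
print(f"beta(processing speed) = {beta[2]:.2f}, model R^2 = {r_squared:.2f}")
```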
Affiliation(s)
- Sandra Gordon-Salant: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Grace H Yeni-Komshian: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Peter J Fitzgibbons: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Hannah M Willison: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Maya S Freund: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
46
Scarbel L, Beautemps D, Schwartz JL, Sato M. Sensory-motor relationships in speech production in post-lingually deaf cochlear-implanted adults and normal-hearing seniors: Evidence from phonetic convergence and speech imitation. Neuropsychologia 2017; 101:39-46. [PMID: 28483485] [DOI: 10.1016/j.neuropsychologia.2017.05.005] [Citation(s) in RCA: 1]
Abstract
Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. One striking example comes from phonetic convergence, whereby speakers automatically tend to mimic their interlocutor's speech during communicative interaction. The goal of this study was to investigate sensory-motor linkage in speech production in post-lingually deaf cochlear-implanted participants and normal-hearing elderly adults through phonetic convergence and imitation. To this aim, two vowel production tasks, with or without instruction to imitate an acoustic vowel, were administered to three groups: young adults with normal hearing, elderly adults with normal hearing, and post-lingually deaf cochlear-implanted patients. The deviation of each participant's f0 from their own mean f0 was measured to evaluate the ability to converge toward each acoustic target. Results showed that cochlear-implanted participants were able to converge toward an acoustic target, both intentionally and unintentionally, albeit to a lesser degree than young and elderly participants with normal hearing. By providing evidence for phonetic convergence and speech imitation, these results suggest that, as in young adults, perceptuo-motor relationships are efficient in elderly adults with normal hearing, and that cochlear-implanted adults recovered significant perceptuo-motor abilities following cochlear implantation.
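A minimal sketch of the convergence measure described above follows, assuming a simple signed-deviation definition; the function name, data, and target value are invented for illustration and are not taken from the study.

```python
# Illustrative sketch of the convergence measure described above: the deviation
# of each produced f0 from the speaker's own mean f0, signed so that positive
# values indicate movement toward the acoustic target.
import numpy as np

def convergence_scores(f0_productions, f0_target):
    """Signed deviation of each production from the speaker's own mean f0;
    positive values mark productions shifted toward the target."""
    f0 = np.asarray(f0_productions, dtype=float)
    own_mean = f0.mean()
    toward = np.sign(f0_target - own_mean)  # +1 if the target lies above the speaker's mean
    return toward * (f0 - own_mean)

# Example: a speaker with a habitual f0 near 120 Hz imitating a 180 Hz target.
print(convergence_scores([118, 123, 131, 127], f0_target=180.0))
# Positive entries mark productions shifted toward the target.
```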
Affiliation(s)
- Lucie Scarbel: GIPSA-LAB, Département Parole & Cognition, CNRS & Grenoble Université, Grenoble, France
- Denis Beautemps: GIPSA-LAB, Département Parole & Cognition, CNRS & Grenoble Université, Grenoble, France
- Jean-Luc Schwartz: GIPSA-LAB, Département Parole & Cognition, CNRS & Grenoble Université, Grenoble, France
- Marc Sato: Laboratoire Parole & Langage, CNRS & Aix-Marseille Université, Aix-en-Provence, France
47
Cohen JI, Gordon-Salant S. The effect of visual distraction on auditory-visual speech perception by younger and older listeners. J Acoust Soc Am 2017; 141:EL470. [PMID: 28599569] [PMCID: PMC5724720] [DOI: 10.1121/1.4983399] [Citation(s) in RCA: 7]
Abstract
Visual distractions are present in real-world listening environments, such as conversing in a crowded restaurant. This study examined the impact of visual distractors on younger and older adults' ability to understand auditory-visual (AV) speech in noise. AV speech stimuli were presented with one competing talker and with three different types of visual distractors. SNR50 thresholds (the signal-to-noise ratio yielding 50% correct recognition) for both listener groups were affected by visual distraction; the poorest performance for both groups occurred in the AV + Video condition, and differences across groups were noted for some conditions. These findings suggest that older adults may be more susceptible to irrelevant auditory and visual competition in real-world environments.
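For readers unfamiliar with SNR50 thresholds, the sketch below shows one standard way such a threshold can be estimated: fit a logistic psychometric function to proportion-correct data and read off its midpoint. The data points and fitting choices are illustrative assumptions, not the study's procedure.

```python
# Minimal sketch: estimate an SNR50 threshold (the SNR yielding 50% correct)
# as the midpoint of a fitted logistic psychometric function. Invented data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, midpoint, slope):
    """Proportion correct as a logistic function of SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

snr_db = np.array([-12, -9, -6, -3, 0, 3], dtype=float)
prop_correct = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.97])

(midpoint, slope), _ = curve_fit(logistic, snr_db, prop_correct, p0=[-4.0, 1.0])
print(f"SNR50 ~ {midpoint:.1f} dB")  # the logistic midpoint is the 50% point
```

A higher (worse) SNR50 under visual distraction would correspond to the susceptibility effect the abstract describes.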
Affiliation(s)
- Julie I Cohen: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Sandra Gordon-Salant: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
48
Stevenson RA, Baum SH, Krueger J, Newhouse PA, Wallace MT. Links between temporal acuity and multisensory integration across life span. J Exp Psychol Hum Percept Perform 2017; 44:106-116. [PMID: 28447850] [DOI: 10.1037/xhp0000424] [Citation(s) in RCA: 26]
Abstract
The temporal relationship between individual pieces of information from the different sensory modalities is one of the strongest cues for integrating such information into a unified perceptual gestalt, conveying numerous perceptual and behavioral advantages. Temporal acuity, however, varies greatly over the life span. It has previously been hypothesized that changes in temporal acuity in both development and healthy aging may thus play a key role in integrative abilities. This study tested the temporal acuity of 138 individuals ranging in age from 5 to 80 years. Temporal acuity and multisensory integration abilities were tested both within and across modalities (audition and vision) with simultaneity judgment and temporal order judgment tasks. Temporal acuity, both within and across modalities, improved throughout development into adulthood and subsequently declined with healthy aging, as did the ability to integrate multisensory speech information. Importantly, throughout development, temporal acuity for simple stimuli (i.e., flashes and beeps) predicted individuals' abilities to integrate more complex speech information. In the aging population, however, although temporal acuity declined with healthy aging and was accompanied by declines in integrative abilities, temporal acuity could not predict integration at the individual level. Together, these results suggest that the impact of temporal acuity on multisensory integration varies throughout the life span. Although the maturation of temporal acuity drives the rise of multisensory integrative abilities during development, it cannot account for changes in integrative abilities in healthy aging. The differential relationships between age, temporal acuity, and multisensory integration suggest an important role for experience in these processes.
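The simultaneity-judgment measure at the center of this study is often summarized as a temporal binding window. A hedged sketch of one common quantification follows: fit a Gaussian to the proportion of "simultaneous" responses across stimulus onset asynchronies and take a width measure. The data and parameter choices are illustrative assumptions, not the study's.

```python
# Sketch of one common way to quantify the audiovisual temporal binding window
# from simultaneity-judgment (SJ) data: fit a Gaussian across stimulus onset
# asynchronies (SOAs) and report its width. Invented response proportions.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, center, width):
    return amp * np.exp(-((soa - center) ** 2) / (2.0 * width ** 2))

soas_ms = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_simultaneous = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.90, 0.70, 0.35, 0.15])

(amp, center, width), _ = curve_fit(gaussian, soas_ms, p_simultaneous,
                                    p0=[1.0, 0.0, 150.0])
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * width  # full width at half maximum
print(f"center (PSS) ~ {center:.0f} ms, binding window (FWHM) ~ {fwhm:.0f} ms")
```

A narrower window corresponds to finer temporal acuity; on this abstract's account, that width shrinks across development and widens again with healthy aging.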
Affiliation(s)
- Ryan A Stevenson: Department of Psychology, Brain and Mind Institute, University of Western Ontario
- Sarah H Baum: Department of Psychology, University of Washington
- Paul A Newhouse: Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center
49
Chaby L, Hupont I, Avril M, Luherne-du Boullay V, Chetouani M. Gaze Behavior Consistency among Older and Younger Adults When Looking at Emotional Faces. Front Psychol 2017; 8:548. [PMID: 28450841] [PMCID: PMC5390044] [DOI: 10.3389/fpsyg.2017.00548] [Citation(s) in RCA: 18]
Abstract
The identification of non-verbal emotional signals, and especially of facial expressions, is essential for successful social communication among humans. Previous research has reported an age-related decline in facial emotion identification and has offered socio-emotional or aging-brain model explanations. However, perceptual differences in the gaze strategies that accompany facial emotion processing with advancing age remain under-explored. In this study, 22 young (mean age 22.2 years) and 22 older (mean age 70.4 years) adults were instructed to look at basic facial expressions while their gaze movements were recorded by an eye-tracker. Participants were then asked to identify each emotion, and the unbiased hit rate was used as the performance measure. Gaze data were first analyzed using traditional measures of fixations over two preferential regions of the face (upper and lower areas) for each emotion. Then, to better capture core gaze changes with advancing age, spatio-temporal gaze behaviors were examined in greater depth using data-driven analyses (dimension reduction, clustering). Results first confirmed that older adults performed worse than younger adults at identifying facial expressions, except for "joy" and "disgust," and this was accompanied by a gaze preference toward the lower face. Interestingly, this phenomenon was maintained throughout the whole time course of stimulus presentation. More importantly, trials corresponding to older adults were more tightly clustered, suggesting that the gaze behavior patterns of older adults are more consistent than those of younger adults. This study demonstrates that, confronted with emotional faces, younger and older adults do not prioritize or ignore the same facial areas. Older adults mainly adopted a focused-gaze strategy, concentrating only on the lower part of the face throughout the stimulus display time. This consistency may constitute a robust and distinctive "social signature" of emotion identification in aging. Younger adults, however, showed more dispersed gaze behavior and used a more exploratory strategy, repeatedly visiting both facial areas.
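The unbiased hit rate (Hu; Wagner, 1993) named above corrects raw accuracy for response bias: for each category, the squared count of correct responses is divided by the product of the number of stimuli in that category and the total number of times that response was given. A small sketch of the computation follows; the confusion matrix is invented for illustration.

```python
# Sketch of Wagner's (1993) unbiased hit rate (Hu), the performance measure
# named above: Hu_i = correct_i**2 / (stimuli_in_category_i * uses_of_response_i).
# The confusion matrix below is made up for illustration.
import numpy as np

confusions = np.array([
    # responses:  joy  anger  fear   (rows = stimulus category)
    [18, 1, 1],    # joy stimuli
    [2, 14, 4],    # anger stimuli
    [1, 6, 13],    # fear stimuli
])

correct = np.diag(confusions).astype(float)
hu = correct**2 / (confusions.sum(axis=1) * confusions.sum(axis=0))
print(dict(zip(["joy", "anger", "fear"], np.round(hu, 2))))
```

Unlike raw percent correct, Hu penalizes an observer who inflates hits for one emotion by over-using that response label.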
Affiliation(s)
- Laurence Chaby: Institut de Psychologie, Sorbonne Paris Cité, Université Paris Descartes, Boulogne-Billancourt, France; Sorbonne Universités, Université Pierre et Marie Curie - Paris 06, Institut des Systèmes Intelligents et de Robotique, Centre National de la Recherche Scientifique UMR 7222, Paris, France
- Isabelle Hupont: Sorbonne Universités, Université Pierre et Marie Curie - Paris 06, Institut des Systèmes Intelligents et de Robotique, Centre National de la Recherche Scientifique UMR 7222, Paris, France
- Marie Avril: Sorbonne Universités, Université Pierre et Marie Curie - Paris 06, Institut des Systèmes Intelligents et de Robotique, Centre National de la Recherche Scientifique UMR 7222, Paris, France
- Viviane Luherne-du Boullay: Service de Neurochirurgie, Assistance Publique – Hôpitaux de Paris, Groupe Hospitalier Pitié-Salpêtrière, Paris, France
- Mohamed Chetouani: Sorbonne Universités, Université Pierre et Marie Curie - Paris 06, Institut des Systèmes Intelligents et de Robotique, Centre National de la Recherche Scientifique UMR 7222, Paris, France
50
Noel JP, De Niear M, Van der Burg E, Wallace MT. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan. PLoS One 2016; 11:e0161698. [PMID: 27551918] [PMCID: PMC4994953] [DOI: 10.1371/journal.pone.0161698] [Citation(s) in RCA: 48]
Abstract
Multisensory interactions are well known to convey an array of perceptual and behavioral benefits. One of the key factors shaping multisensory interactions is the temporal structure of the combined stimuli. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as the first description of changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.
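Rapid recalibration, as studied here, is conventionally indexed as the shift in the point of subjective simultaneity (PSS) depending on the modality order of the preceding trial. The sketch below illustrates that logic under simplifying assumptions (a crude quadratic PSS estimate and invented response proportions); it is not the paper's analysis pipeline.

```python
# Sketch of a rapid-recalibration index (an assumed analysis, not the paper's):
# estimate the point of subjective simultaneity (PSS) separately for trials
# preceded by auditory-leading vs. visual-leading trials, then difference them.
import numpy as np

def pss(soas_ms, p_simultaneous):
    """Crude PSS estimate: vertex of a quadratic fit to the SJ curve."""
    a, b, _ = np.polyfit(soas_ms, p_simultaneous, 2)
    return -b / (2.0 * a)

soas = np.array([-300, -150, 0, 150, 300], dtype=float)
after_audio_lead = np.array([0.30, 0.70, 0.95, 0.80, 0.40])   # invented proportions
after_visual_lead = np.array([0.20, 0.60, 0.90, 0.90, 0.50])  # invented proportions

shift = pss(soas, after_visual_lead) - pss(soas, after_audio_lead)
print(f"rapid recalibration (PSS shift) ~ {shift:.0f} ms")
```

A larger trial-to-trial PSS shift indicates stronger rapid recalibration, the quantity whose age trajectory this study traces.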
Affiliation(s)
- Jean-Paul Noel: Neuroscience Graduate Program, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Matthew De Niear: Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA; Medical Scientist Training Program, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Erik Van der Burg: Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; School of Psychology, University of Sydney, Sydney, Australia
- Mark T. Wallace: Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37235, USA; Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA