1. Hugdahl K. When fMRI came to Bergen and Norway - as I remember it. Scand J Psychol 2025; 66:111-120. [PMID: 39248103] [DOI: 10.1111/sjop.13069]
Abstract
In this personal recollection, I review the beginnings of functional magnetic resonance imaging (fMRI) research in Norway, i.e., at the University of Bergen and Haukeland University Hospital in Bergen. Research with fMRI started in Bergen as early as 1993, and the small group of researchers involved were the first to take up this new method for studies of the brain and brain-behavior relationships. This article recollects how the field started and developed in Bergen, Norway, covering basic as well as clinical research, and how the research also led to successful innovation and commercialization through the establishment of a MedTech company, NordicNeuroLab (NNL), which has delivered products to more than 2,000 university hospitals worldwide.
Affiliation(s)
- Kenneth Hugdahl
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
- Department of Radiology, Haukeland University Hospital, Bergen, Norway
2. Rødland E, Melleby KM, Specht K. Evaluation of a Simple Clinical Language Paradigm With Respect to Sensory Independency, Functional Asymmetry, and Effective Connectivity. Front Behav Neurosci 2022; 16:806520. [PMID: 35309683] [PMCID: PMC8928437] [DOI: 10.3389/fnbeh.2022.806520]
Abstract
The present study replicates a known visual language paradigm and extends it to a paradigm that is independent of the sensory modality of the stimuli and, hence, can be administered either visually or aurally, so that patients with limited sight as well as patients with limited hearing can be examined. The stimuli were simple sentences, but they required the subject not only to understand the content of the sentence but also to formulate a response that had a semantic relation to that content. This paradigm therefore tests not only perception of the stimuli but also, to some extent, sentence and semantic processing, and covert speech production, within one task. When the sensory baseline condition was subtracted, both the auditory and the visual version of the paradigm demonstrated a broadly overlapping and asymmetric network, comprising distinct areas of the left posterior temporal lobe, left inferior frontal areas, the left precentral gyrus, and the supplementary motor area. The consistency of activations and their asymmetry was evaluated with a conjunction analysis, probability maps, and intraclass correlation coefficients (ICC). This underlying network was further analyzed with dynamic causal modeling (DCM) to explore whether not only the same brain areas but also the same network structure and information flow were involved across the sensory modalities. In conclusion, the paradigm reliably activated the most central parts of the speech and language network with great consistency across subjects, independently of whether the stimuli were administered aurally or visually. However, there was individual variability in the degree of functional asymmetry between the two sensory conditions.
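The intraclass correlation coefficient used here to quantify consistency of activations can be sketched in a few lines. This is a minimal, hypothetical Python implementation of the single-measures consistency form ICC(3,1), not the authors' analysis code; the input layout (subjects in rows, sessions or modalities in columns) is an assumption:

```python
import numpy as np

def icc_3_1(data: np.ndarray) -> float:
    """Two-way mixed, single-measures consistency ICC(3,1).

    data: (n_subjects, k_measurements) array, e.g. one activation or
    laterality value per subject for each modality or session.
    """
    n, k = data.shape
    grand = data.mean()
    # Two-way ANOVA decomposition of the total sum of squares.
    ss_subjects = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_raters = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ss_error = ss_total - ss_subjects - ss_raters
    ms_subjects = ss_subjects / (n - 1)          # between-subjects mean square
    ms_error = ss_error / ((n - 1) * (k - 1))    # residual mean square
    return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)
```

Values near 1 indicate that subjects keep their rank order across measurements, which is the sense in which activations can be called consistent across the auditory and visual versions of a paradigm.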
Affiliation(s)
- Erik Rødland
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
- Division of Psychiatry, Department of Child and Adolescent, Haukeland University Hospital, Bergen, Norway
- Kathrine Midgaard Melleby
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
- Adult Habilitation Section, Telemark Hospital Skien, Skien, Norway
- Karsten Specht
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
- Mohn Medical Imaging and Visualization Centre, Haukeland University Hospital, Bergen, Norway
- Department of Education, UiT The Arctic University of Norway, Tromsø, Norway
- *Correspondence: Karsten Specht
3. Chou TY, Wang JC, Lin MY, Tsai PY. Low-Frequency vs. Theta Burst Transcranial Magnetic Stimulation for the Treatment of Chronic Non-fluent Aphasia in Stroke: A Proof-of-Concept Study. Front Aging Neurosci 2022; 13:800377. [PMID: 35095477] [PMCID: PMC8795082] [DOI: 10.3389/fnagi.2021.800377]
Abstract
BACKGROUND Although low-frequency repetitive transcranial magnetic stimulation (LF-rTMS) has shown promise in the treatment of poststroke aphasia, the efficacy of high-frequency rTMS (HF-rTMS) has yet to be determined. PURPOSE We investigated the efficacy of intermittent theta burst stimulation (iTBS) in ameliorating chronic non-fluent aphasia and compared it with that of LF-rTMS. METHODS We randomly assigned patients with poststroke non-fluent aphasia to an ipsilesional iTBS (n = 29), contralesional 1-Hz rTMS (n = 27), or sham (n = 29) group. Each group received the rTMS protocol executed in 10 daily sessions over 2 weeks. We evaluated language function before and after the intervention by using the Concise Chinese Aphasia Test (CCAT). RESULTS Compared with the sham group, the iTBS group exhibited significant improvements in conversation, description, and expression scores (P = 0.0004-0.031), which characterize verbal production, as well as in auditory comprehension, reading comprehension, and matching scores (P < 0.01), which characterize language perception. The 1-Hz group exhibited superior improvements in expression, reading comprehension, and imitation writing scores compared with the sham group (P < 0.05). The iTBS group had significantly superior results in CCAT total score, matching and auditory comprehension (P < 0.05) relative to the 1-Hz group. CONCLUSION Our study findings contribute to a growing body of evidence that ipsilesional iTBS enhances the language recovery of patients with non-fluent aphasia after a chronic stroke. Auditory comprehension was more preferentially enhanced by iTBS compared with the 1-Hz protocol. Our findings highlight the importance of ipsilesional modulation through excitatory rTMS for the recovery of non-fluent aphasia in patients with chronic stroke. CLINICAL TRIAL REGISTRATION [www.ClinicalTrials.gov], identifier [NCT03059225].
Affiliation(s)
- Ting-Yu Chou
- Department of Physical Medicine and Rehabilitation, Taipei Veterans General Hospital, Taipei, Taiwan
- Jia-Chi Wang
- Department of Physical Medicine and Rehabilitation, Taipei Veterans General Hospital, Taipei, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Mu-Yun Lin
- Department of Physical Medicine and Rehabilitation, Taipei Veterans General Hospital, Taipei, Taiwan
- Po-Yi Tsai
- Department of Physical Medicine and Rehabilitation, Taipei Veterans General Hospital, Taipei, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
4. Multilevel fMRI adaptation for spoken word processing in the awake dog brain. Sci Rep 2020; 10:11968. [PMID: 32747731] [PMCID: PMC7398925] [DOI: 10.1038/s41598-020-68821-6]
Abstract
Human brains process lexical meaning separately from emotional prosody of speech at higher levels of the processing hierarchy. Recently we demonstrated that dog brains can also dissociate lexical and emotional prosodic information in human spoken words. To better understand the neural dynamics of lexical processing in the dog brain, here we used an event-related design, optimized for fMRI adaptation analyses on multiple time scales. We investigated repetition effects in dogs’ neural (BOLD) responses to lexically marked (praise) words and to lexically unmarked (neutral) words, in praising and neutral prosody. We identified temporally and anatomically distinct adaptation patterns. In a subcortical auditory region, we found both short- and long-term fMRI adaptation for emotional prosody, but not for lexical markedness. In multiple cortical auditory regions, we found long-term fMRI adaptation for lexically marked compared to unmarked words. This lexical adaptation showed right-hemisphere bias and was age-modulated in a near-primary auditory region and was independent of prosody in a secondary auditory region. Word representations in dogs’ auditory cortex thus contain more than just the emotional prosody they are typically associated with. These findings demonstrate multilevel fMRI adaptation effects in the dog brain and are consistent with a hierarchical account of spoken word processing.
5.
Abstract
There are functional and anatomical distinctions between the neural systems involved in the recognition of sounds in the environment and those involved in the sensorimotor guidance of sound production and the spatial processing of sound. Evidence for the separation of these processes has historically come from disparate literatures on the perception and production of speech, music and other sounds. More recent evidence indicates that there are computational distinctions between the rostral and caudal primate auditory cortex that may underlie functional differences in auditory processing. These functional differences may originate from differences in the response times and temporal profiles of neurons in the rostral and caudal auditory cortex, suggesting that computational accounts of primate auditory pathways should focus on the implications of these temporal response differences.
6. Rejnö-Habte Selassie G, Pegenius G, Karlsson T, Viggedal G, Hallböök T, Elam M. Cortical mapping of receptive language processing in children using navigated transcranial magnetic stimulation. Epilepsy Behav 2020; 103:106836. [PMID: 31839497] [DOI: 10.1016/j.yebeh.2019.106836]
Abstract
We used a stepwise process to develop a new paradigm for preoperative cortical mapping of receptive language in children, using temporary functional blocking with transcranial magnetic stimulation (TMS). The method combines short sentences with a lexical decision task in which children are asked to point at a picture that fits a short sentence delivered aurally. This was first tested with 24 healthy children aged 4-16 years. Next, 75 sentences and 25 slides were presented to five healthy children in a clinical setting without TMS. Responses were registered on a separate computer, and facial expressions and hand movements were filmed for later offline review. Technical adjustments were made to combine these elements with the existing TMS equipment. The audio-recorded sentences were presented before the visual stimuli. Sentence lists were constructed to avoid runs of similar stimuli. Two different baseline lists were used before the TMS registration; the second baseline resulted in faster responses and was chosen as the reference for possible response delays induced by TMS. Protocols for offline reviews were constructed. No response, incorrect response, self-correction, delayed response, and perseveration were considered clear stimulation effects, while poor attention, discomfort, and other events were regarded as unclear. Finally, three children with epilepsy (aged 6:2, 14:0, and 14:10 years) who were expected to undergo neurosurgery were assessed using TMS (left hemisphere in one; both hemispheres in the other two). In the two children assessed bilaterally, TMS effects indicated bilateral language processing. Delayed response was the most common error. This is a first attempt to develop a new TMS paradigm for receptive language mapping, and further evaluation is suggested.
Affiliation(s)
- Göran Pegenius
- Unit of Clinical Neurophysiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Tomas Karlsson
- Institute of Neuroscience & Physiology, Sahlgrenska Academy, University of Gothenburg, Sweden
- Gerd Viggedal
- Department of Pediatrics, Queen Silvia Children's Hospital and Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Sweden
- Tove Hallböök
- Department of Pediatrics, Queen Silvia Children's Hospital and Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Sweden
- Mikael Elam
- Unit of Clinical Neurophysiology, Sahlgrenska University Hospital, Gothenburg, Sweden; Institute of Neuroscience & Physiology, Sahlgrenska Academy, University of Gothenburg, Sweden
7. Van der Haegen L, Brysbaert M. The relationship between behavioral language laterality, face laterality and language performance in left-handers. PLoS One 2018; 13:e0208696. [PMID: 30576313] [PMCID: PMC6303078] [DOI: 10.1371/journal.pone.0208696]
Abstract
Left-handers provide unique information about the relationships between cognitive functions because of their larger variability in hemispheric dominance. This study presents the laterality distribution of, correlations between, and test-retest reliability of behavioral lateralized language tasks (speech production, reading, and speech perception), face recognition tasks, handedness measures, and language performance tests, based on data from 98 left-handers. The results show that a behavioral test battery leads to percentages of (a)typical dominance similar to those found in neuropsychological studies, even though the incidence of clear atypical lateralization (about 20%) may be overestimated at the group level. Significant correlations were found between the language tasks for both reaction-time and accuracy lateralization indices. The degree of language laterality could, however, not be linked to face laterality, handedness, or language performance. Finally, individuals were classified less consistently than expected as typical, bilateral, or atypical across tasks. This may be due to the often good (speech production and perception tasks) but sometimes weak (reading and face tasks) test-retest reliabilities. The lack of highly reliable and valid test protocols for functions unrelated to speech remains one of the largest impediments to individual analysis and cross-task investigation in laterality research.
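A laterality index of the kind reported here is conventionally the normalized left-right difference, scaled to ±100. The sketch below is illustrative only, not the authors' code; the inputs could be suprathreshold voxel counts, beta weights, or accuracy differences, depending on the task:

```python
def laterality_index(left_activation: float, right_activation: float) -> float:
    """Classic laterality index: +100 = fully left-lateralized,
    -100 = fully right-lateralized, 0 = perfectly bilateral."""
    total = left_activation + right_activation
    if total == 0:
        raise ValueError("no activation in either hemisphere")
    return 100.0 * (left_activation - right_activation) / total
```

With 80 units of left-hemisphere activity against 20 on the right, the index is +60, i.e., clearly left-dominant; cut-offs on this scale are what sort individuals into the typical, bilateral, and atypical categories mentioned above.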
Affiliation(s)
- Lise Van der Haegen
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Marc Brysbaert
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
8. Venezia JH, Thurman SM, Richards VM, Hickok G. Hierarchy of speech-driven spectrotemporal receptive fields in human auditory cortex. Neuroimage 2018; 186:647-666. [PMID: 30500424] [DOI: 10.1016/j.neuroimage.2018.11.049]
Abstract
Existing data indicate that cortical speech processing is hierarchically organized. Numerous studies have shown that early auditory areas encode fine acoustic details while later areas encode abstracted speech patterns. However, it remains unclear precisely what speech information is encoded across these hierarchical levels. Estimation of speech-driven spectrotemporal receptive fields (STRFs) provides a means to explore cortical speech processing in terms of acoustic or linguistic information associated with characteristic spectrotemporal patterns. Here, we estimate STRFs from cortical responses to continuous speech in fMRI. Using a novel approach based on filtering randomly-selected spectrotemporal modulations (STMs) from aurally-presented sentences, STRFs were estimated for a group of listeners and categorized using a data-driven clustering algorithm. 'Behavioral STRFs' highlighting STMs crucial for speech recognition were derived from intelligibility judgments. Clustering revealed that STRFs in the supratemporal plane represented a broad range of STMs, while STRFs in the lateral temporal lobe represented circumscribed STM patterns important to intelligibility. Detailed analysis recovered a bilateral organization with posterior-lateral regions preferentially processing STMs associated with phonological information and anterior-lateral regions preferentially processing STMs associated with word- and phrase-level information. Regions in lateral Heschl's gyrus preferentially processed STMs associated with vocalic information (pitch).
Affiliation(s)
- Jonathan H Venezia
- VA Loma Linda Healthcare System, Loma Linda, CA, USA; Dept. of Otolaryngology, School of Medicine, Loma Linda University, Loma Linda, CA, USA
- Virginia M Richards
- Depts. of Cognitive Sciences and Language Science, University of California, Irvine, Irvine, CA, USA
- Gregory Hickok
- Depts. of Cognitive Sciences and Language Science, University of California, Irvine, Irvine, CA, USA
9. Specht K, Wigglesworth P. The functional and structural asymmetries of the superior temporal sulcus. Scand J Psychol 2018; 59:74-82. [PMID: 29356006] [DOI: 10.1111/sjop.12410]
Abstract
The superior temporal sulcus (STS) is an anatomical structure of increasing interest to researchers. It appears to receive multisensory input and is involved in several perceptual and cognitive core functions, such as speech perception, audiovisual integration, (biological) motion processing, and theory of mind capacities. In addition, the STS is not only one of the longest sulci of the brain, but it also shows marked functional and structural asymmetries, some of which have only been found in humans. To explore the functional-structural relationships of these asymmetries in more detail, this study combined functional and structural magnetic resonance imaging. Using a speech perception task, an audiovisual integration task, and a theory of mind task, the study again demonstrated an involvement of the STS in these processes, with the expected strong leftward asymmetry for the speech perception task. Furthermore, it confirmed the earlier described, human-specific asymmetries, namely that the left STS is longer than the right STS and that the right STS is deeper than the left STS. However, the study did not find any relationship between these structural asymmetries and the detected brain activations or their functional asymmetries. This lends further support to the notion that the structural asymmetry of the STS is not directly related to the functional asymmetry of speech perception or the language system as a whole, but may have other causes and functions.
Affiliation(s)
- Karsten Specht
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
- Department of Education, UiT The Arctic University of Norway, Tromsø, Norway
- Philip Wigglesworth
- Department of Behavioural Sciences, Oslo and Akershus University College of Applied Sciences, Oslo, Norway
10. Venezia JH, Vaden KI, Rong F, Maddox D, Saberi K, Hickok G. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus. Front Hum Neurosci 2017; 11:174. [PMID: 28439236] [PMCID: PMC5383672] [DOI: 10.3389/fnhum.2017.00174]
Abstract
The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.
Affiliation(s)
- Kenneth I Vaden
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, SC, USA
- Feng Rong
- Department of Cognitive Sciences, Center for Cognitive Neuroscience and Engineering, University of California, Irvine, CA, USA
- Dale Maddox
- Department of Cognitive Sciences, Center for Cognitive Neuroscience and Engineering, University of California, Irvine, CA, USA
- Kourosh Saberi
- Department of Cognitive Sciences, Center for Cognitive Neuroscience and Engineering, University of California, Irvine, CA, USA
- Gregory Hickok
- Department of Cognitive Sciences, Center for Cognitive Neuroscience and Engineering, University of California, Irvine, CA, USA
11. Morken F, Helland T, Hugdahl K, Specht K. Reading in dyslexia across literacy development: A longitudinal study of effective connectivity. Neuroimage 2017; 144:92-100. [DOI: 10.1016/j.neuroimage.2016.09.060]
12. Manca AD, Grimaldi M. Vowels and Consonants in the Brain: Evidence from Magnetoencephalographic Studies on the N1m in Normal-Hearing Listeners. Front Psychol 2016; 7:1413. [PMID: 27713712] [PMCID: PMC5031792] [DOI: 10.3389/fpsyg.2016.01413]
Abstract
Speech sound perception is one of the most fascinating tasks performed by the human brain. It involves a mapping from continuous acoustic waveforms onto the discrete phonological units computed to store words in the mental lexicon. In this article, we review the magnetoencephalographic studies that have explored the timing and morphology of the N1m component to investigate how vowels and consonants are computed and represented within the auditory cortex. The neurons involved in the N1m act to construct a sensory memory of the stimulus through spatially and temporally distributed activation patterns within the auditory cortex. Indeed, localization of auditory field maps in animals and humans has suggested two levels of sound coding: a tonotopy dimension for spectral properties and a tonochrony dimension for temporal properties of sounds. When the stimulus is a complex speech sound, tonotopy and tonochrony data may give important information to assess whether speech sound parsing and decoding are generated by pure bottom-up reflection of acoustic differences or whether they are additionally affected by top-down processes related to phonological categories. Hints supporting pure bottom-up processing coexist with hints supporting top-down abstract phoneme representation. At present, N1m data (amplitude, latency, source generators, and hemispheric distribution) are limited and do not disentangle the issue. The nature of these limitations is discussed, and neurophysiological studies on animals and neuroimaging studies on humans are taken into consideration. We also compare the N1m findings with investigations of the magnetic mismatch negativity (MMNm) component and with the analogous electrical components, the N1 and the MMN. We conclude that the N1 seems more sensitive than the N1m in capturing lateralization and hierarchical processes, although the data are very preliminary. Finally, we suggest that MEG data should be integrated with EEG data in the light of the neural oscillations framework, and we propose some concerns that should be addressed by future investigations if we want to closely align language research with issues at the core of functional brain mechanisms.
Affiliation(s)
- Anna Dora Manca
- Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
- Mirko Grimaldi
- Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
13. Vydrova R, Komarek V, Sanda J, Sterbova K, Jahodova A, Maulisova A, Zackova J, Reissigova J, Krsek P, Kyncl M. Structural alterations of the language connectome in children with specific language impairment. Brain Lang 2015; 151:35-41. [PMID: 26609941] [DOI: 10.1016/j.bandl.2015.10.003]
Abstract
We evaluated brain white matter pathways associated with language processing in 37 children with specific language impairment (SLI) aged 6-12 years and 34 controls matched for age, sex, and handedness. The arcuate fascicle (AF), inferior fronto-occipital fascicle (IFOF), inferior longitudinal fascicle (ILF), and uncinate fascicle (UF) were identified using magnetic resonance diffusion tensor imaging (DTI). Diffusivity parameters and volumes of the tracts were compared between the SLI and control groups. Children with SLI showed decreased fractional anisotropy in all investigated tracts, and increased mean diffusivity and radial diffusivity in the arcuate fascicle bilaterally, the left IFOF, and the left ILF. Furthermore, bilaterally increased volume of the ILF was found in children with SLI. We confirmed previous findings indicating deficient connectivity of the arcuate fascicle and, as a novel finding, demonstrated abnormal development of the ventral language stream in patients with SLI.
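The diffusivity parameters compared between groups (fractional anisotropy, mean diffusivity, radial diffusivity) are standard scalar summaries of the diffusion tensor's three eigenvalues. A minimal Python sketch of the textbook formulas, not the study's DTI pipeline:

```python
import numpy as np

def dti_scalars(eigenvalues):
    """FA, MD, and RD from the three eigenvalues of a diffusion tensor.

    eigenvalues: iterable of three non-negative values (any order),
    conventionally in mm^2/s.
    """
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]  # l1 >= l2 >= l3
    md = lam.mean()                   # mean diffusivity
    rd = (lam[1] + lam[2]) / 2.0      # radial diffusivity (perpendicular)
    # Fractional anisotropy: normalized dispersion of the eigenvalues.
    fa = np.sqrt(1.5 * ((lam - md) ** 2).sum()) / np.sqrt((lam ** 2).sum())
    return fa, md, rd
```

For a perfectly isotropic voxel (all eigenvalues equal) FA is 0, and it approaches 1 as diffusion becomes confined to a single axis; decreased FA together with increased MD and RD is the pattern the study reports in the SLI group.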
Affiliation(s)
- Rosa Vydrova
- Department of Pediatric Neurology, Charles University, 2nd Faculty of Medicine, University Hospital Motol, Prague, Czech Republic
- Vladimir Komarek
- Department of Pediatric Neurology, Charles University, 2nd Faculty of Medicine, University Hospital Motol, Prague, Czech Republic
- Jan Sanda
- Department of Radiology, Charles University, 2nd Faculty of Medicine, University Hospital Motol, Prague, Czech Republic
- Katalin Sterbova
- Department of Pediatric Neurology, Charles University, 2nd Faculty of Medicine, University Hospital Motol, Prague, Czech Republic
- Alena Jahodova
- Department of Pediatric Neurology, Charles University, 2nd Faculty of Medicine, University Hospital Motol, Prague, Czech Republic
- Alice Maulisova
- Department of Psychology, Charles University, 2nd Faculty of Medicine, University Hospital Motol, Prague, Czech Republic
- Jitka Zackova
- Department of Psychology, Charles University, 2nd Faculty of Medicine, University Hospital Motol, Prague, Czech Republic
- Jindra Reissigova
- Institute of Computer Science AS CR, Department of Medical Informatics and Biostatistics, Prague, Czech Republic
- Pavel Krsek
- Department of Pediatric Neurology, Charles University, 2nd Faculty of Medicine, University Hospital Motol, Prague, Czech Republic
- Martin Kyncl
- Department of Radiology, Charles University, 2nd Faculty of Medicine, University Hospital Motol, Prague, Czech Republic
14. Häberling IS, Steinemann A, Corballis MC. Cerebral asymmetry for language: Comparing production with comprehension. Neuropsychologia 2015; 80:17-23. [PMID: 26548403] [DOI: 10.1016/j.neuropsychologia.2015.11.002]
Abstract
Although left-hemispheric damage can impair both the production and comprehension of language, it has been claimed that comprehension is more bilaterally represented than production. A variant of this theme is based on the theory that different aspects of language are processed by a dorsal stream, responsible for mapping words to articulation, and a ventral stream that processes input for meaning. Some have claimed that the dorsal stream is left-hemispheric, while the ventral stream is bilaterally organized. We used fMRI to record activation while left- and right-handed participants performed a covert word-generation task and judged whether word pairs were synonyms. Regions of interest were Broca's area, as part of the dorsal stream, and the superior and middle temporal gyri, as part of the ventral stream. Laterality indices showed equal left-hemispheric lateralization in Broca's area for word generation and in both Broca's area and the temporal lobe for the synonym judgments. Handedness influenced laterality equally in each area and task, with right-handers showing stronger left-hemispheric dominance than left-handers. Although our findings provide no evidence that asymmetry is more pronounced for production than for comprehension, correlations between the tasks and regions of interest support the view that lateralization in the temporal lobe depends on feedback influences from frontal regions.
Collapse
Affiliation(s)
- Isabelle S Häberling
- School of Psychology, University of Auckland, Private Bag 92019, Auckland 1142, New Zealand.
| | - Anita Steinemann
- School of Psychology, University of Auckland, Private Bag 92019, Auckland 1142, New Zealand
| | - Michael C Corballis
- School of Psychology, University of Auckland, Private Bag 92019, Auckland 1142, New Zealand.
| |
Collapse
|
15
|
Van der Haegen L, Acke F, Vingerhoets G, Dhooge I, De Leenheer E, Cai Q, Brysbaert M. Laterality and unilateral deafness: Patients with congenital right ear deafness do not develop atypical language dominance. Neuropsychologia 2015; 93:482-492. [PMID: 26522620 DOI: 10.1016/j.neuropsychologia.2015.10.032] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2015] [Revised: 10/19/2015] [Accepted: 10/26/2015] [Indexed: 02/06/2023]
Abstract
Auditory speech perception, speech production and reading lateralize to the left hemisphere in the majority of healthy right-handers. In this study, we investigated to what extent sensory input determines the side of language dominance. We measured the lateralization of the three core subprocesses of language in patients who had profound hearing loss in the right ear from birth and in matched control subjects. They took part in a semantic decision listening task involving speech and sound stimuli (auditory perception), a word generation task (speech production) and a passive reading task (reading). The results show that a lack of sensory auditory input on the right side, which is strongly connected to the contralateral left hemisphere, does not lead to atypical lateralization of speech perception. Speech production and reading were also typically left lateralized in all but one patient, contradicting previous small-scale studies. Other factors such as genetic constraints presumably overrule the role of sensory input in the development of (a)typical language lateralization.
Collapse
Affiliation(s)
| | - Frederic Acke
- Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
| | - Guy Vingerhoets
- Department of Experimental Psychology, Ghent University, Belgium
| | - Ingeborg Dhooge
- Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
| | - Els De Leenheer
- Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
| | - Qing Cai
- Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, East China Normal University, Shanghai 200062, China; NYU-ECNU Institute of Brain and Cognitive Science, NYU Shanghai, 200062 Shanghai, China.
| | - Marc Brysbaert
- Department of Experimental Psychology, Ghent University, Belgium
| |
Collapse
|
16
|
Berthier ML, Lambon Ralph MA. Dissecting the function of networks underpinning language repetition. Front Hum Neurosci 2014; 8:727. [PMID: 25324751 PMCID: PMC4183086 DOI: 10.3389/fnhum.2014.00727] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2014] [Accepted: 08/29/2014] [Indexed: 11/13/2022] Open
Affiliation(s)
- Marcelo L Berthier
- Cognitive Neurology and Aphasia Unit, Centro de Investigaciones Médico-Sanitarias, University of Málaga, Málaga, Spain
| | - Matthew A Lambon Ralph
- Neuroscience and Aphasia Research Unit, School of Psychological Sciences, University of Manchester, UK
| |
Collapse
|
17
|
Jenson D, Bowers AL, Harkrider AW, Thornton D, Cuellar M, Saltuklaroglu T. Temporal dynamics of sensorimotor integration in speech perception and production: independent component analysis of EEG data. Front Psychol 2014; 5:656. [PMID: 25071633 PMCID: PMC4091311 DOI: 10.3389/fpsyg.2014.00656] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2014] [Accepted: 06/08/2014] [Indexed: 11/17/2022] Open
Abstract
Activity in anterior sensorimotor regions is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG μ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of μ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. Seventeen and 15 of the 20 participants produced left and right μ-components, respectively, localized to precentral gyri. Discrimination conditions were characterized by significant (pFDR < 0.05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that μ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. μ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while μ-alpha ERD may index sensory feedback during speech rehearsal and production.
Collapse
Affiliation(s)
- David Jenson
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
| | - Andrew L. Bowers
- Department of Communication Disorders, University of Arkansas, Fayetteville, AR, USA
| | - Ashley W. Harkrider
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
| | - David Thornton
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
| | - Megan Cuellar
- Speech-Language Pathology Program, College of Health Sciences, Midwestern University, Chicago, IL, USA
| | - Tim Saltuklaroglu
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
| |
Collapse
|
18
|
Specht K, Baumgartner F, Stadler J, Hugdahl K, Pollmann S. Functional asymmetry and effective connectivity of the auditory system during speech perception is modulated by the place of articulation of the consonant - A 7T fMRI study. Front Psychol 2014; 5:549. [PMID: 24966841 PMCID: PMC4052338 DOI: 10.3389/fpsyg.2014.00549] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2013] [Accepted: 05/18/2014] [Indexed: 11/16/2022] Open
Abstract
To differentiate between stop-consonants, the auditory system has to detect subtle differences in place of articulation (PoA) and voice-onset time (VOT). How this differential processing is represented on the cortical level remains unclear. The present functional magnetic resonance imaging (fMRI) study takes advantage of the superior spatial resolution and high sensitivity of ultra-high-field 7 T MRI. Subjects were attentively listening to consonant–vowel (CV) syllables with an alveolar or bilabial stop-consonant and either a short or long VOT. The results showed an overall bilateral activation pattern in the posterior temporal lobe during the processing of the CV syllables. This was, however, modulated most strongly by PoA, such that syllables with an alveolar stop-consonant showed stronger left-lateralized activation. In addition, analysis of the underlying functional and effective connectivity revealed an inhibitory effect of the left planum temporale (PT) onto the right auditory cortex (AC) during the processing of alveolar CV syllables. Furthermore, the connectivity results also indicated a directed information flow from the right to the left AC, and further to the left PT, for all syllables. These results indicate that auditory speech perception relies on an interplay between the left and right ACs, with the left PT as modulator. Furthermore, the degree of functional asymmetry is determined by the acoustic properties of the CV syllables.
Collapse
Affiliation(s)
- Karsten Specht
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway; Department of Medical Engineering, Haukeland University Hospital, Bergen, Norway
| | - Florian Baumgartner
- Department of Experimental Psychology, Otto-von-Guericke University Magdeburg, Germany
| | - Jörg Stadler
- Leibniz Institute for Neurobiology, Magdeburg, Germany
| | - Kenneth Hugdahl
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway; Division of Psychiatry, Haukeland University Hospital, Bergen, Norway; Department of Radiology, Haukeland University Hospital, Bergen, Norway; NORMENT Centre of Excellence, Oslo, Norway
| | - Stefan Pollmann
- Department of Experimental Psychology, Otto-von-Guericke University, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
| |
Collapse
|
19
|
Specht K. Neuronal basis of speech comprehension. Hear Res 2013; 307:121-35. [PMID: 24113115 DOI: 10.1016/j.heares.2013.09.011] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/15/2013] [Revised: 09/15/2013] [Accepted: 09/19/2013] [Indexed: 01/18/2023]
Abstract
Verbal communication does not rely only on the simple perception of auditory signals. It is rather a parallel and integrative processing of linguistic and non-linguistic information, involving temporal and frontal areas in particular. This review describes the inherent complexity of auditory speech comprehension from a functional-neuroanatomical perspective. The review is divided into two parts. In the first part, structural and functional asymmetry of language-relevant structures will be discussed. The second part of the review will discuss recent neuroimaging studies, which coherently demonstrate that speech comprehension processes rely on a hierarchical network involving the temporal, parietal, and frontal lobes. Further, the results support the dual-stream model for speech comprehension, with a dorsal stream for auditory-motor integration, and a ventral stream for extracting meaning as well as processing sentences and narratives. Specific patterns of functional asymmetry between the left and right hemisphere can also be demonstrated. The review article concludes with a discussion of interactions between the dorsal and ventral streams, particularly the involvement of motor-related areas in speech perception processes, and outlines some remaining unresolved issues. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Collapse
Affiliation(s)
- Karsten Specht
- Department of Biological and Medical Psychology, University of Bergen, Jonas Lies vei 91, 5009 Bergen, Norway; Department for Medical Engineering, Haukeland University Hospital, Bergen, Norway.
| |
Collapse
|