1
Osorio S, Assaneo MF. Anatomically distinct cortical tracking of music and speech by slow (1-8Hz) and fast (70-120Hz) oscillatory activity. PLoS One 2025; 20:e0320519. PMID: 40341725; PMCID: PMC12061428; DOI: 10.1371/journal.pone.0320519.
Abstract
Music and speech encode hierarchically organized structural complexity at the service of human expressiveness and communication. Previous research has shown that populations of neurons in auditory regions track the envelope of acoustic signals within the range of slow and fast oscillatory activity. However, the extent to which cortical tracking is influenced by the interplay between stimulus type, frequency band, and brain anatomy remains an open question. In this study, we reanalyzed intracranial recordings from thirty subjects implanted with electrocorticography (ECoG) grids in the left cerebral hemisphere, drawn from an existing open-access ECoG database. Participants passively watched a movie in which visual scenes were accompanied by either music or speech stimuli. Cross-correlation between brain activity and the envelope of music and speech signals, along with density-based clustering analyses and linear mixed-effects modeling, revealed both anatomically overlapping and functionally distinct mapping of the tracking effect as a function of stimulus type and frequency band. We observed widespread left-hemisphere tracking of music and speech signals in the Slow Frequency Band (SFB, the low-frequency signal band-pass filtered between 1 and 8 Hz), with near-zero temporal lags. In contrast, cortical tracking in the High Frequency Band (HFB, the envelope of the signal band-pass filtered between 70 and 120 Hz) was higher during speech perception, was more densely concentrated in classical language processing areas, and showed a frontal-to-temporal gradient in lag values that was not observed during perception of musical stimuli. Our results highlight a complex interaction between cortical region and frequency band that shapes temporal dynamics during the processing of naturalistic music and speech signals.
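To make the two tracking measures above concrete, here is a minimal sketch (not the authors' pipeline) of how an SFB signal, an HFB envelope, and a peak cross-correlation lag can be computed for one ECoG channel against a stimulus envelope; the channel data, sampling rate, and stimulus envelope are assumed inputs.

```python
# Minimal sketch of SFB/HFB tracking, assuming equal-length, resampled inputs.
# SFB = 1-8 Hz band-pass filtered signal; HFB = envelope of the 70-120 Hz
# band-pass filtered signal, as defined in the abstract above.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def tracking_lag(channel, stim_envelope, fs, max_lag_s=0.5):
    """Cross-correlate z-scored signals; return peak correlation and its lag (s)."""
    sfb = bandpass(channel, 1, 8, fs)                      # slow frequency band
    hfb = np.abs(hilbert(bandpass(channel, 70, 120, fs)))  # high-frequency-band envelope
    out = {}
    for name, sig in [("SFB", sfb), ("HFB", hfb)]:
        a = (sig - sig.mean()) / sig.std()
        b = (stim_envelope - stim_envelope.mean()) / stim_envelope.std()
        lags = np.arange(-int(max_lag_s * fs), int(max_lag_s * fs) + 1)
        # xcorr[l] = mean of a[t] * b[t + l] over the overlapping samples
        xcorr = np.array([np.mean(a[max(0, -l):len(a) - max(0, l)] *
                                  b[max(0, l):len(b) - max(0, -l)]) for l in lags])
        k = np.argmax(np.abs(xcorr))
        out[name] = (xcorr[k], lags[k] / fs)  # (peak correlation, lag in seconds)
    return out
```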
Affiliation(s)
- Sergio Osorio
- Department of Neurology, Harvard Medical School, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts, United States of America
2
Morgenroth E, Moia S, Vilaclara L, Fournier R, Muszynski M, Ploumitsakou M, Almató-Bellavista M, Vuilleumier P, Van De Ville D. Emo-FilM: A multimodal dataset for affective neuroscience using naturalistic stimuli. Sci Data 2025; 12:684. PMID: 40268934; PMCID: PMC12019557; DOI: 10.1038/s41597-025-04803-5.
Abstract
The Emo-FilM dataset stands for Emotion research using Films and fMRI in healthy participants. This dataset includes emotion annotations by 44 raters for 14 short films with a combined duration of over 2½ hours, together with recordings of respiration, heart rate, and functional magnetic resonance imaging (fMRI) from a sample of 30 individuals watching the same films. A total of 50 items were annotated, including discrete emotions and emotion components from the domains of appraisal, motivation, motor expression, physiological response, and feeling. The ratings had a mean inter-rater agreement of 0.38. The fMRI data, acquired at 3 Tesla, include high-resolution structural and resting-state fMRI for each participant. Physiological recordings included heart rate, respiration, and electrodermal activity. This dataset is designed, but not limited, to studying the dynamic neural processes involved in emotion experience. The annotations have high temporal resolution and were validated by the fMRI sample. The Emo-FilM dataset is a treasure trove for researching emotion in response to naturalistic stimulation in a multimodal framework.
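The abstract reports a mean inter-rater agreement of 0.38 without naming the statistic; for continuous annotation time series, a common choice is the mean pairwise Pearson correlation, sketched below under that assumption with synthetic rater data.

```python
# Mean pairwise agreement for one annotated item, assuming (and it is only an
# assumption here) that agreement is the mean pairwise Pearson correlation.
import numpy as np

def mean_pairwise_agreement(ratings):
    """ratings: (n_raters, n_timepoints) array of one item's annotations."""
    n = ratings.shape[0]
    r = np.corrcoef(ratings)          # n x n Pearson correlation matrix
    iu = np.triu_indices(n, k=1)      # each rater pair counted once
    return r[iu].mean()

# Synthetic example: 44 raters (as in Emo-FilM) tracking a shared signal plus noise
rng = np.random.default_rng(0)
signal = rng.standard_normal(500)
raters = signal + rng.standard_normal((44, 500))
print(f"mean pairwise agreement: {mean_pairwise_agreement(raters):.2f}")
```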
Affiliation(s)
- Elenor Morgenroth
- Neuro-X Institute, École Polytechnique Fédérale de Lausanne, Geneva, 1202, Switzerland.
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, 1202, Switzerland.
- Swiss Center for Affective Sciences, University of Geneva, Geneva, 1202, Switzerland.
- Stefano Moia
- Neuro-X Institute, École Polytechnique Fédérale de Lausanne, Geneva, 1202, Switzerland
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, 1202, Switzerland
- Laura Vilaclara
- Neuro-X Institute, École Polytechnique Fédérale de Lausanne, Geneva, 1202, Switzerland
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, 1202, Switzerland
- Raphael Fournier
- Department of Basic Neurosciences, University of Geneva, Geneva, 1202, Switzerland
- Michal Muszynski
- Department of Basic Neurosciences, University of Geneva, Geneva, 1202, Switzerland
- Maria Ploumitsakou
- Neuro-X Institute, École Polytechnique Fédérale de Lausanne, Geneva, 1202, Switzerland
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, 1202, Switzerland
- Marina Almató-Bellavista
- Neuro-X Institute, École Polytechnique Fédérale de Lausanne, Geneva, 1202, Switzerland
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, 1202, Switzerland
- Patrik Vuilleumier
- Swiss Center for Affective Sciences, University of Geneva, Geneva, 1202, Switzerland
- Department of Basic Neurosciences, University of Geneva, Geneva, 1202, Switzerland
- CIBM Center for Biomedical Imaging, Geneva, 1202, Switzerland
- Dimitri Van De Ville
- Neuro-X Institute, École Polytechnique Fédérale de Lausanne, Geneva, 1202, Switzerland
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, 1202, Switzerland
- CIBM Center for Biomedical Imaging, Geneva, 1202, Switzerland
3
Zang B, Sun T, Lu Y, Zhang Y, Wang G, Wan S. Tensor-powered insights into neural dynamics. Commun Biol 2025; 8:298. PMID: 39994447; PMCID: PMC11850929; DOI: 10.1038/s42003-025-07711-x.
Abstract
The complex spatiotemporal dynamics of neurons encompass a wealth of information relevant to perception and decision-making, making the decoding of neural activity a central focus in neuroscience research. Traditional machine learning and deep learning approaches to neural information modeling have achieved significant results in decoding. Nevertheless, such methodologies require the vectorization of data, a process that disrupts the intrinsic relationships inherent in high-dimensional spaces, consequently impeding their capability to effectively process information in high-order tensor domains. In this paper, we introduce a novel decoding approach, the Least Squares Support Tensor Machine (LS-STM), which operates in tensor space and represents a tensorized improvement over traditional vector learning frameworks. In extensive evaluations using human and mouse data, our results demonstrate that LS-STM exhibits superior performance in neural signal decoding tasks compared to traditional vectorization-based decoding methods. Furthermore, LS-STM performs better when decoding neural signals from limited samples, and the tensor weights of the LS-STM decoder enable the retrospective identification of key neurons during the neural encoding process. This study introduces a novel tensor computing approach and perspective for decoding high-dimensional neural information.
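As an illustration of the tensor-structured idea, the sketch below implements a simplified rank-1 support tensor machine by alternating ridge-regularized least squares over the two modes of matrix-shaped trials; it is a toy stand-in under stated assumptions, not the authors' LS-STM, and all data are synthetic.

```python
# Simplified rank-1 tensor classifier: instead of vectorizing each
# (neurons x time) trial, constrain the weight matrix to an outer product
# u v^T and fit u and v by alternating least squares on +/-1 labels.
import numpy as np

def rank1_lsstm(X, y, lam=1.0, n_iter=20):
    """X: (n_trials, d1, d2) tensor data; y: labels in {-1, +1}."""
    n, d1, d2 = X.shape
    u, v = np.ones(d1), np.ones(d2)
    for _ in range(n_iter):
        Zu = X @ v                         # (n, d1): project time mode onto v
        u = np.linalg.solve(Zu.T @ Zu + lam * np.eye(d1), Zu.T @ y)
        Zv = np.einsum("nij,i->nj", X, u)  # (n, d2): project neuron mode onto u
        v = np.linalg.solve(Zv.T @ Zv + lam * np.eye(d2), Zv.T @ y)
    # Decision rule: sign(u^T X_i v); |u| entries rank neuron importance,
    # mirroring the retrospective key-neuron identification described above.
    return u, v

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 30, 50))                      # 30 neurons x 50 time bins
w = np.outer(rng.standard_normal(30), rng.standard_normal(50))
y = np.sign(np.einsum("nij,ij->n", X, w))                   # rank-1 ground truth
u, v = rank1_lsstm(X, y)
acc = np.mean(np.sign(np.einsum("nij,i,j->n", X, u, v)) == y)
print(f"training accuracy: {acc:.2f}")
```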
Affiliation(s)
- Boyang Zang
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- School of Clinical Medicine, Tsinghua University, Beijing, China
- Department of Neurosurgery, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Tao Sun
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- School of Control Science and Engineering, Dalian University of Technology, Dalian, China
- Yang Lu
- School of Clinical Medicine, Tsinghua University, Beijing, China
- Department of Neurosurgery, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Yuhang Zhang
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Guihuai Wang
- School of Clinical Medicine, Tsinghua University, Beijing, China
- Department of Neurosurgery, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Sen Wan
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
4
Dorin D, Kiselev N, Grabovoy A, Strijov V. Forecasting fMRI images from video sequences: linear model analysis. Health Inf Sci Syst 2024; 12:55. PMID: 39554225; PMCID: PMC11568086; DOI: 10.1007/s13755-024-00315-5.
Abstract
Over the past few decades, a variety of significant scientific breakthroughs have been achieved in the fields of brain encoding and decoding using functional magnetic resonance imaging (fMRI). Many studies have been conducted on how the human brain reacts to visual stimuli. However, the relationship between fMRI images and the video sequences viewed by humans remains complex and is often studied using large transformer models. In this paper, we investigate the correlation between videos presented to participants during an experiment and the resulting fMRI images. To achieve this, we propose a method for creating a linear model that predicts changes in fMRI signals based on video sequence images. A linear model is constructed for each individual voxel in the fMRI image, assuming that the image sequence satisfies the Markov property. Through comprehensive qualitative experiments, we demonstrate the relationship between the two time series. We hope that our findings contribute to a deeper understanding of the human brain's reaction to external stimuli and provide a basis for future research in this area.
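A minimal sketch of the per-voxel linear model described above: under the Markov assumption, each voxel's next value is predicted from the current frame's features. The feature matrix is a hypothetical placeholder (e.g., pooled network activations), and hemodynamic-delay handling is simplified away.

```python
# Per-voxel linear forecasting sketch: fit one ridge model per voxel mapping
# features at time t to the BOLD value at t+1 (Markov assumption).
import numpy as np

def fit_voxelwise_linear(video_feats, bold, lam=10.0):
    """video_feats: (T, k) per-frame features; bold: (T, n_voxels) fMRI signal."""
    X = video_feats[:-1]                     # state at time t
    Y = bold[1:]                             # voxel values at time t+1
    k = X.shape[1]
    # Closed-form ridge solution, shared design across voxels: (X'X + lam I)^-1 X'Y
    W = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Y)
    return W                                 # (k, n_voxels), one column per voxel

rng = np.random.default_rng(2)
feats = rng.standard_normal((300, 64))       # hypothetical video features
bold = rng.standard_normal((300, 1000))      # toy BOLD time series
W = fit_voxelwise_linear(feats, bold)
pred = feats[:-1] @ W                        # predicted next-step BOLD
```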
Affiliation(s)
- Nikita Kiselev
- Research Center for Artificial Intelligence, Innopolis University, Innopolis, Russia
5
Zhou J, Duan Y, Chang YC, Wang YK, Lin CT. BELT: Bootstrapped EEG-to-Language Training by Natural Language Supervision. IEEE Trans Neural Syst Rehabil Eng 2024; 32:3278-3288. PMID: 39190511; DOI: 10.1109/tnsre.2024.3450795.
Abstract
Decoding natural language from noninvasive brain signals has been an exciting topic with the potential to expand the applications of brain-computer interface (BCI) systems. However, current methods face limitations in decoding sentences from electroencephalography (EEG) signals. Improving decoding performance requires the development of a more effective encoder for the EEG modality. Nonetheless, learning generalizable EEG representations remains a challenge due to the relatively small scale of existing EEG datasets. In this paper, we propose enhancing the EEG encoder to improve subsequent decoding performance. Specifically, we introduce the discrete Conformer encoder (D-Conformer) to transform EEG signals into discrete representations and bootstrap the learning process by imposing EEG-language alignment from the early training stage. The D-Conformer captures both local and global patterns from EEG signals and discretizes the EEG representation, making the representation more resilient to variations, while early-stage EEG-language alignment mitigates the limitations of small EEG datasets and facilitates the learning of the semantic representations from EEG signals. These enhancements result in improved EEG representations and decoding performance. We conducted extensive experiments and ablation studies to thoroughly evaluate the proposed method. Utilizing the D-Conformer encoder and bootstrapping training strategy, our approach demonstrates superior decoding performance across various tasks, including word-level, sentence-level, and sentiment-level decoding from EEG signals. Specifically, in word-level classification, we show that our encoding method produces more distinctive representations and higher classification performance compared to the EEG encoders from existing methods. At the sentence level, our model outperformed the baseline by 5.45%, achieving a BLEU-1 score of 42.31%. Furthermore, in sentiment classification, our model exceeded the baseline by 14%, achieving a sentiment classification accuracy of 69.3%.
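The sentence-level result above is scored with BLEU-1; for reference, a minimal implementation of that standard metric (modified unigram precision with a brevity penalty) looks like this.

```python
# BLEU-1 for a single candidate/reference pair: clipped unigram precision
# scaled by a brevity penalty that discourages trivially short decodings.
import math
from collections import Counter

def bleu1(candidate, reference):
    """candidate, reference: lists of tokens."""
    cand, ref = Counter(candidate), Counter(reference)
    clipped = sum(min(c, ref[w]) for w, c in cand.items())  # clipped unigram matches
    precision = clipped / max(len(candidate), 1)
    bp = 1.0 if len(candidate) > len(reference) else \
         math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * precision

print(bleu1("the subject heard a short story".split(),
            "the subject listened to a short story".split()))
```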
6
Desai M, Field AM, Hamilton LS. A comparison of EEG encoding models using audiovisual stimuli and their unimodal counterparts. PLoS Comput Biol 2024; 20:e1012433. PMID: 39250485; PMCID: PMC11412666; DOI: 10.1371/journal.pcbi.1012433.
Abstract
Communication in the real world is inherently multimodal. When having a conversation, typically sighted and hearing people use both auditory and visual cues to understand one another. For example, objects may make sounds as they move in space, or we may use the movement of a person's mouth to better understand what they are saying in a noisy environment. Still, many neuroscience experiments rely on unimodal stimuli to understand encoding of sensory features in the brain. The extent to which visual information may influence encoding of auditory information, and vice versa, in natural environments is thus unclear. Here, we addressed this question by recording scalp electroencephalography (EEG) in 11 subjects as they listened to and watched movie trailers in audiovisual (AV), visual-only (V), and audio-only (A) conditions. We then fit linear encoding models that described the relationship between the brain responses and the acoustic, phonetic, and visual information in the stimuli. We also compared whether auditory and visual feature tuning was the same when stimuli were presented in the original AV format versus when visual or auditory information was removed. In these stimuli, visual and auditory information was relatively uncorrelated, and included spoken narration over a scene as well as animated or live-action characters talking with and without their face visible. For these stimuli, we found that auditory feature tuning was similar in the AV and A-only conditions, and tuning for visual information was similar when stimuli were presented with the audio present (AV) and when the audio was removed (V-only). In a cross-prediction analysis, we investigated whether models trained on AV data predicted responses to A-only or V-only test data as well as models trained on unimodal data. Overall, prediction performance using AV training and V-only test sets was similar to using V-only training and V-only test sets, suggesting that the auditory information has a relatively smaller effect on EEG. In contrast, prediction performance using AV training and A-only test sets was slightly worse than using matching A-only training and A-only test sets. This suggests that the visual information has a stronger influence on EEG, though it makes no qualitative difference in the derived feature tuning. In effect, our results show that researchers may benefit from the richness of multimodal datasets, which can then be used to answer more than one research question.
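A minimal sketch of the linear encoding-model approach described above: stimulus features are expanded over a window of time lags and mapped to each EEG channel with ridge regression (the temporal response function framework). The features, lag window, and regularization below are placeholders rather than the paper's settings.

```python
# Time-lagged ridge encoding model: predict each EEG channel from stimulus
# features at the current and preceding samples.
import numpy as np

def lagged_design(S, n_lags):
    """S: (T, k) stimulus features -> (T, k * n_lags) time-lagged design matrix."""
    T, k = S.shape
    X = np.zeros((T, k * n_lags))
    for l in range(n_lags):
        X[l:, l * k:(l + 1) * k] = S[:T - l]   # feature values l samples in the past
    return X

def fit_encoding_model(S, eeg, n_lags=32, lam=1e3):
    X = lagged_design(S, n_lags)
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)
    return W                                   # (k * n_lags, n_channels)

rng = np.random.default_rng(3)
stim = rng.standard_normal((5000, 5))          # e.g., envelope plus phonetic features
eeg = rng.standard_normal((5000, 64))          # toy 64-channel EEG
W = fit_encoding_model(stim, eeg)
r = np.corrcoef(lagged_design(stim, 32) @ W[:, 0], eeg[:, 0])[0, 1]  # channel-0 fit
```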
Affiliation(s)
- Maansi Desai
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, Texas, United States of America
- Alyssa M Field
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, Texas, United States of America
- Liberty S Hamilton
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, Texas, United States of America
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, Texas, United States of America
7
Ferdous TR, Pollonini L, Francis JT. Enhancing Auditory BCI Performance: Incorporation of Connectivity Analysis. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-5. PMID: 40040033; DOI: 10.1109/embc53108.2024.10782147.
Abstract
Brain connectivity analysis for classifying auditory stimuli, applicable to invasive auditory BCI technology and particularly intracranial electroencephalography (iEEG), remains an exciting frontier. This study revealed insights into brain network dynamics, improving analysis precision for distinguishing related auditory stimuli such as speech and music. We thereby contribute to advancing auditory BCI systems and to bridging the gap between noninvasive and invasive BCI by applying noninvasive BCI methodological frameworks to invasive BCI (iEEG) data. We focused on the viability of using connectivity matrices calculated across frequency bands such as alpha, beta, theta, and gamma. The research highlights that a traditional machine learning classifier, the Support Vector Machine (SVM), demonstrates exceptional capabilities in handling brain connectivity data, achieving an outstanding 97% accuracy in classifying brain states and surpassing previous relevant studies by 9.64%. The results are significant in that neural activity in the gamma band provides the best classification performance when using connectivity matrices calculated with Phase Locking Value and Coherence methods.
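A sketch of the connectivity-based classification pipeline described above: band-pass each channel, estimate pairwise phase-locking values (PLV) via the Hilbert transform, and feed the vectorized connectivity matrix to an SVM. The band edges, trial counts, and classifier settings are illustrative, and the data are synthetic.

```python
# PLV connectivity matrix per trial, then linear SVM on the upper triangle.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def plv_matrix(trial, fs, band=(30, 80)):
    """trial: (n_channels, n_samples) -> (n_channels, n_channels) PLV matrix."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, trial, axis=1), axis=1))
    ex = np.exp(1j * phase)
    # PLV_ij = |mean over time of exp(i (phase_i - phase_j))|
    return np.abs(ex @ ex.conj().T) / trial.shape[1]

rng = np.random.default_rng(4)
fs, trials, labels = 500, [], []
for y in (0, 1):                                       # toy speech-vs-music trials
    for _ in range(40):
        trials.append(rng.standard_normal((16, 2 * fs)))
        labels.append(y)
iu = np.triu_indices(16, k=1)
X = np.array([plv_matrix(t, fs)[iu] for t in trials])  # upper triangle as features
print(cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean())
```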
8
Zhou M, Gong Z, Dai Y, Wen Y, Liu Y, Zhen Z. A large-scale fMRI dataset for human action recognition. Sci Data 2023; 10:415. PMID: 37369643; DOI: 10.1038/s41597-023-02325-6.
Abstract
Human action recognition is a critical capability for our survival, allowing us to interact easily with the environment and others in everyday life. Although the neural basis of action recognition has been widely studied using a few action categories from simple contexts as stimuli, how the human brain recognizes diverse human actions in real-world environments still needs to be explored. Here, we present the Human Action Dataset (HAD), a large-scale functional magnetic resonance imaging (fMRI) dataset for human action recognition. HAD contains fMRI responses to 21,600 video clips from 30 participants. The video clips encompass 180 human action categories and offer a comprehensive coverage of complex activities in daily life. We demonstrate that the data are reliable within and across participants and, notably, capture rich representation information of the observed human actions. This extensive dataset, with its vast number of action categories and exemplars, has the potential to deepen our understanding of human action recognition in natural environments.
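The abstract states that responses are reliable within and across participants; one common way to quantify within-participant reliability, shown here as an assumption rather than the paper's exact analysis, is the split-half correlation of voxelwise response patterns across repeated presentations.

```python
# Split-half reliability sketch: correlate odd-repeat and even-repeat mean
# response patterns, voxel by voxel, across stimulus conditions.
import numpy as np

def split_half_reliability(responses):
    """responses: (n_repeats, n_conditions, n_voxels) estimated responses."""
    half1 = responses[0::2].mean(axis=0)   # mean pattern over odd repeats
    half2 = responses[1::2].mean(axis=0)   # mean pattern over even repeats
    h1 = half1 - half1.mean(axis=0)
    h2 = half2 - half2.mean(axis=0)
    return (h1 * h2).sum(axis=0) / np.sqrt((h1**2).sum(axis=0) * (h2**2).sum(axis=0))

rng = np.random.default_rng(5)
true = rng.standard_normal((180, 2000))            # 180 action categories, toy voxels
reps = true + rng.standard_normal((4, 180, 2000))  # 4 noisy repetitions
print(split_half_reliability(reps).mean())
```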
Affiliation(s)
- Ming Zhou
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Zhengxin Gong
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Yuxuan Dai
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Yushan Wen
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Youyi Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Zonglei Zhen
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
9
Simistira Liwicki F, Gupta V, Saini R, De K, Abid N, Rakesh S, Wellington S, Wilson H, Liwicki M, Eriksson J. Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition. Sci Data 2023; 10:378. PMID: 37311807; DOI: 10.1038/s41597-023-02286-w.
Abstract
The recognition of inner speech, which could give a 'voice' to patients who have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition. Multimodal datasets of brain data enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the high temporal resolution of electroencephalography (EEG), and are therefore promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired nonsimultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or numerical category. Each of the eight word stimuli was assessed with 40 trials, resulting in 320 trials in each modality for each participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards speech prostheses.
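As a hypothetical usage sketch for a dataset with this structure (8 word classes, 40 trials each per participant and modality), a cross-validated classification baseline on per-trial feature vectors might look as follows; the features are placeholders, not anything from the paper.

```python
# Cross-validated 8-way word decoding baseline on 320 trials per modality.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
X = rng.standard_normal((320, 128))     # hypothetical per-trial EEG or fMRI features
y = np.repeat(np.arange(8), 40)         # 8 words x 40 trials

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.125)")
```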
Affiliation(s)
- Foteini Simistira Liwicki
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
- Vibha Gupta
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
- Rajkumar Saini
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
- Kanjar De
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
- Nosheen Abid
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
- Sumit Rakesh
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
- Holly Wilson
- University of Bath, Department of Computer Science, Bath, UK
- Marcus Liwicki
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
- Johan Eriksson
- Umeå University, Department of Integrative Medical Biology (IMB) and Umeå Center for Functional Brain Imaging (UFBI), Umeå, Sweden
10
Mercier MR, Dubarry AS, Tadel F, Avanzini P, Axmacher N, Cellier D, Vecchio MD, Hamilton LS, Hermes D, Kahana MJ, Knight RT, Llorens A, Megevand P, Melloni L, Miller KJ, Piai V, Puce A, Ramsey NF, Schwiedrzik CM, Smith SE, Stolk A, Swann NC, Vansteensel MJ, Voytek B, Wang L, Lachaux JP, Oostenveld R. Advances in human intracranial electroencephalography research, guidelines and good practices. Neuroimage 2022; 260:119438. PMID: 35792291; PMCID: PMC10190110; DOI: 10.1016/j.neuroimage.2022.119438.
Abstract
Since the second half of the twentieth century, intracranial electroencephalography (iEEG), including both electrocorticography (ECoG) and stereo-electroencephalography (sEEG), has provided an intimate view into the human brain. At the interface between fundamental research and the clinic, iEEG provides both high temporal resolution and high spatial specificity, but comes with constraints, such as the individually tailored sparsity of electrode sampling. Over the years, researchers in neuroscience developed their practices to make the most of the iEEG approach. Here we offer a critical review of iEEG research practices in a didactic framework for newcomers, as well as addressing issues encountered by proficient researchers. The scope is threefold: (i) review common practices in iEEG research, (ii) suggest potential guidelines for working with iEEG data and answer frequently asked questions based on the most widespread practices, and (iii) based on current neurophysiological knowledge and methodologies, pave the way to good practice standards in iEEG research. The organization of this paper follows the steps of iEEG data processing. The first section contextualizes iEEG data collection. The second section focuses on localization of intracranial electrodes. The third section highlights the main pre-processing steps. The fourth section presents iEEG signal analysis methods. The fifth section discusses statistical approaches. The sixth section draws some unique perspectives on iEEG research. Finally, to ensure a consistent nomenclature throughout the manuscript and to align with other guidelines, e.g., Brain Imaging Data Structure (BIDS) and the OHBM Committee on Best Practices in Data Analysis and Sharing (COBIDAS), we provide a glossary to disambiguate terms related to iEEG research.
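As a small illustration of one pre-processing step such guidelines cover, the sketch below applies a common average reference restricted to good channels; it is a generic example of the technique, not a prescription from the paper.

```python
# Common average reference (CAR) for iEEG: subtract the mean of the good
# channels from each good channel, leaving flagged bad channels untouched.
import numpy as np

def common_average_reference(data, bad_channels=()):
    """data: (n_channels, n_samples) iEEG; bad channels excluded from the mean."""
    good = np.setdiff1d(np.arange(data.shape[0]), np.asarray(bad_channels, int))
    car = data[good].mean(axis=0, keepdims=True)  # shared reference signal
    out = data.copy()
    out[good] -= car                              # re-reference good channels only
    return out

rng = np.random.default_rng(7)
ieeg = rng.standard_normal((64, 10_000))
clean = common_average_reference(ieeg, bad_channels=[3, 17])
```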
Affiliation(s)
- Manuel R Mercier
- INSERM, INS, Institut de Neurosciences des Systèmes, Aix-Marseille University, Marseille, France.
- François Tadel
- Signal & Image Processing Institute, University of Southern California, Los Angeles, CA, United States of America
- Pietro Avanzini
- Institute of Neuroscience, National Research Council of Italy, Parma, Italy
- Nikolai Axmacher
- Department of Neuropsychology, Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Universitätsstraße 150, Bochum 44801, Germany; State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekou Outer St, Beijing 100875, China
- Dillan Cellier
- Department of Cognitive Science, University of California, La Jolla, San Diego, United States of America
- Maria Del Vecchio
- Institute of Neuroscience, National Research Council of Italy, Parma, Italy
- Liberty S Hamilton
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, United States of America; Institute for Neuroscience, The University of Texas at Austin, Austin, TX, United States of America; Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, United States of America
- Dora Hermes
- Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN, United States of America
- Michael J Kahana
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, United States of America
- Robert T Knight
- Department of Psychology and the Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720, United States of America
- Anais Llorens
- Helen Wills Neuroscience Institute, University of California, Berkeley, United States of America
- Pierre Megevand
- Department of Clinical Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Lucia Melloni
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany; Department of Neurology, NYU Grossman School of Medicine, 145 East 32nd Street, Room 828, New York, NY 10016, United States of America
- Kai J Miller
- Department of Neurosurgery, Mayo Clinic, Rochester, MN 55905, USA
- Vitória Piai
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands; Department of Medical Psychology, Radboudumc, Donders Centre for Medical Neuroscience, Nijmegen, the Netherlands
- Aina Puce
- Department of Psychological & Brain Sciences, Programs in Neuroscience, Cognitive Science, Indiana University, Bloomington, IN, United States of America
- Nick F Ramsey
- Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, UMC Utrecht, the Netherlands
- Caspar M Schwiedrzik
- Neural Circuits and Cognition Lab, European Neuroscience Institute Göttingen - A Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Göttingen, Germany; Perception and Plasticity Group, German Primate Center, Leibniz Institute for Primate Research, Göttingen, Germany
- Sydney E Smith
- Neurosciences Graduate Program, University of California, La Jolla, San Diego, United States of America
- Arjen Stolk
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands; Psychological and Brain Sciences, Dartmouth College, Hanover, NH, United States of America
- Nicole C Swann
- Department of Human Physiology, University of Oregon, United States of America
- Mariska J Vansteensel
- Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, UMC Utrecht, the Netherlands
- Bradley Voytek
- Department of Cognitive Science, University of California, La Jolla, San Diego, United States of America; Neurosciences Graduate Program, University of California, La Jolla, San Diego, United States of America; Halıcıoğlu Data Science Institute, University of California, La Jolla, San Diego, United States of America; Kavli Institute for Brain and Mind, University of California, La Jolla, San Diego, United States of America
- Liang Wang
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jean-Philippe Lachaux
- Lyon Neuroscience Research Center, EDUWELL Team, INSERM UMRS 1028, CNRS UMR 5292, Université Claude Bernard Lyon 1, Université de Lyon, Lyon F-69000, France
- Robert Oostenveld
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands; NatMEG, Karolinska Institutet, Stockholm, Sweden
11
Wang S, Zhang X, Zhang J, Zong C. A synchronized multimodal neuroimaging dataset for studying brain language processing. Sci Data 2022; 9:590. PMID: 36180444; PMCID: PMC9525723; DOI: 10.1038/s41597-022-01708-5.
Abstract
We present a synchronized multimodal neuroimaging dataset for studying brain language processing (SMN4Lang) that contains functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) data from the same 12 healthy volunteers while they listened to 6 hours of naturalistic stories, as well as high-resolution structural (T1, T2), diffusion MRI, and resting-state fMRI data for each participant. We also provide rich linguistic annotations for the stimuli, including word frequencies, syntactic tree structures, time-aligned characters and words, and various types of word and character embeddings. Quality assessment indicators verify that this is a high-quality neuroimaging dataset. The synchronized data were collected separately, with the same group of participants first listening to the story materials during fMRI and then during MEG; such data are well suited to studying the dynamic processing of language comprehension, such as when and where different linguistic features are encoded in the brain. In addition, this dataset, comprising a large vocabulary from stories on various topics, can serve as a brain benchmark to evaluate and improve computational language models.
Measurement(s): functional brain measurement; magnetoencephalography
Technology Type(s): functional magnetic resonance imaging; magnetoencephalography
Factor Type(s): naturalistic stimuli listening
Sample Characteristic - Organism: human beings
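The time-aligned word annotations make it straightforward to bring word embeddings onto the fMRI time grid for encoding analyses; a minimal sketch of that alignment step (averaging the embeddings of words whose onsets fall within each TR) is shown below, with the onsets, embeddings, and TR as assumed inputs.

```python
# Bin time-aligned word embeddings into TR-sized windows to build a design
# matrix aligned with the fMRI time series.
import numpy as np

def words_to_tr_grid(onsets, embeddings, n_trs, tr=2.0):
    """onsets: (n_words,) seconds; embeddings: (n_words, d) -> (n_trs, d)."""
    d = embeddings.shape[1]
    X, counts = np.zeros((n_trs, d)), np.zeros(n_trs)
    for t, e in zip(onsets, embeddings):
        i = int(t // tr)
        if i < n_trs:
            X[i] += e
            counts[i] += 1
    nonempty = counts > 0
    X[nonempty] /= counts[nonempty, None]  # average embedding per TR; empty TRs stay zero
    return X

rng = np.random.default_rng(8)
onsets = np.sort(rng.uniform(0, 600, size=900))    # ~900 words over a 10-minute story
emb = rng.standard_normal((900, 300))              # hypothetical 300-d word vectors
design = words_to_tr_grid(onsets, emb, n_trs=300)  # 300 TRs at TR = 2 s
```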
Affiliation(s)
- Shaonan Wang
- National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Xiaohan Zhang
- National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Jiajun Zhang
- National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Chengqing Zong
- National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China