1. Woelk SP, Garfinkel SN. Dissociative Symptoms and Interoceptive Integration. Curr Top Behav Neurosci 2024. PMID: 38755513. DOI: 10.1007/7854_2024_480.
Abstract
Dissociative symptoms and disorders of dissociation are characterised by disturbances in the experience of the self and the surrounding world, manifesting as a breakdown in the normal integration of consciousness, memory, identity, emotion, and perception. This paper aims to provide insights into dissociative symptoms from the perspective of interoception, the sense of the body's internal physiological state, adopting a transdiagnostic framework.

Dissociative symptoms are associated with a blunting of autonomic reactivity and a reduction in interoceptive precision. In addition to the central function of interoception in homeostasis, afferent visceral signals and their neural and mental representation have been shown to shape emotional feeling states, support memory encoding, and contribute to self-representation. Changes in interoceptive processing and disrupted integration of interoceptive signals into wider cognition may contribute to detachment from the body and the world, blunted emotional experience, and altered subjective recall, as experienced by individuals who suffer from dissociation.

A better understanding of the role of altered interoceptive integration across the symptom areas of dissociation could thus provide insights into the neurophysiological mechanisms underlying dissociative disorders. As new therapeutic approaches targeting interoceptive processing emerge, recognising the significance of interoceptive mechanisms in dissociation holds potential implications for future treatment targets.
Affiliation(s)
- Sascha P Woelk: Institute of Cognitive Neuroscience, University College London, London, UK.
- Sarah N Garfinkel: Institute of Cognitive Neuroscience, University College London, London, UK.
2. Ryu J, Choi JW, Niketeghad S, Torres EB, Pouratian N. Irregularity of instantaneous gamma frequency in the motor control network characterize visuomotor and proprioceptive information processing. J Neural Eng 2024; 21. PMID: 38417152. PMCID: PMC11025688. DOI: 10.1088/1741-2552/ad2e1d.
Abstract
Objective. The study aims to characterize movements with different sensory goals by contrasting the neural activity involved in processing proprioceptive and visuomotor information. To accomplish this, we have developed a new methodology that uses the irregularity of the instantaneous gamma frequency for characterization.
Approach. In this study, eight essential tremor patients undergoing awake deep brain stimulation implantation surgery repetitively touched the clinician's finger (forward visually guided/FV movement) and then their own chin (backward proprioceptively guided/BP movement). Electrocorticographic recordings from the motor (M1), somatosensory (S1), and posterior parietal cortex (PPC) were obtained and band-pass filtered in the gamma range (30-80 Hz). The irregularity of the inter-event intervals (IEI; the inverse of instantaneous gamma frequency) was examined as (1) the auto-information of the IEI time series and (2) the correlation between the amplitude and its preceding IEI. We further explored network connectivity after segmenting the FV and BP movements into periods of accelerating and decelerating force and applying the IEI parameter to transfer entropy methods.
Main results. Conceptualizing the irregularity in IEI as reflecting active new information processing, we found the highest irregularity in M1 during BP movement and in PPC during FV movement, and the lowest during rest at all sites. Connectivity was strongest from S1 to M1 and from S1 to PPC during FV movement with accelerating force, and weakest during rest.
Significance. We introduce a novel methodology that uses the instantaneous gamma frequency (i.e., IEI) to characterize goal-oriented movements with different sensory goals, and demonstrate its use in informing directional connectivity within the motor cortical network. This method successfully characterizes different movement types while providing interpretations of sensorimotor integration processes.
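The two IEI irregularity measures described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the peak-based event detection and the use of lag-1 autocorrelation as a stand-in for their auto-information measure are both assumptions.

```python
import numpy as np

def inter_event_intervals(signal, fs):
    """Return inter-event intervals (IEI, in seconds) of a band-passed
    signal, taking local maxima as events. The paper's exact event
    criterion may differ; this is a simple peak pick."""
    idx = np.where((signal[1:-1] > signal[:-2]) &
                   (signal[1:-1] > signal[2:]))[0] + 1
    return np.diff(idx) / fs

def lag1_autocorrelation(iei):
    """Lag-1 autocorrelation of the IEI series, a simple stand-in for
    an auto-information measure: near 1 for a regular (predictable)
    series, near 0 for an irregular one."""
    x = np.asarray(iei, dtype=float) - np.mean(iei)
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))
```

For a perfectly periodic 40 Hz oscillation the mean IEI is 1/40 s, while a jittered event train drives the lag-1 autocorrelation toward zero, the regime the abstract interprets as active new information processing.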
Affiliation(s)
- Jihye Ryu: Department of Neurosurgery, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095, USA.
- Jeong Woo Choi: Department of Neurological Surgery, UT Southwestern Medical Center, Dallas, TX 75390, USA.
- Soroush Niketeghad: Department of Neurosurgery, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095, USA.
- Elizabeth B. Torres: Psychology Department, Rutgers University Center for Cognitive Science, Computational Biomedicine Imaging and Modeling Center at Computer Science Department, Rutgers University, Piscataway, NJ 08854, USA.
- Nader Pouratian: Department of Neurological Surgery, UT Southwestern Medical Center, Dallas, TX 75390, USA.
3. Sulfaro AA, Robinson AK, Carlson TA. Properties of imagined experience across visual, auditory, and other sensory modalities. Conscious Cogn 2024; 117:103598. PMID: 38086154. DOI: 10.1016/j.concog.2023.103598.
Abstract
Little is known about the perceptual characteristics of mental images, or about how they vary across sensory modalities. We conducted an exhaustive survey into how mental images are experienced across modalities, mainly targeting visual and auditory imagery of a single stimulus, the letter "O", to facilitate direct comparisons. We investigated temporal properties of mental images (e.g. onset latency, duration), spatial properties (e.g. apparent location), effort (e.g. ease, spontaneity, control), movement requirements (e.g. eye movements), real-imagined interactions (e.g. inner speech while reading), beliefs about imagery norms and terminologies, as well as respondent confidence. Participants also reported on the five traditional senses and their prominence during thinking, imagining, and dreaming. Overall, visual and auditory experiences dominated mental events, although auditory mental images were superior to visual mental images on almost every metric tested except regarding spatial properties. Our findings suggest that modality-specific differences in mental imagery may parallel those of other sensory neural processes.
Affiliation(s)
- Alexander A Sulfaro: School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown 2006, New South Wales, Australia.
- Amanda K Robinson: School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown 2006, New South Wales, Australia; Queensland Brain Institute, The University of Queensland, St Lucia 4072, Queensland, Australia.
- Thomas A Carlson: School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown 2006, New South Wales, Australia.
4. Chung LKH, Jack BN, Griffiths O, Pearson D, Luque D, Harris AWF, Spencer KM, Le Pelley ME, So SHW, Whitford TJ. Neurophysiological evidence of motor preparation in inner speech and the effect of content predictability. Cereb Cortex 2023; 33:11556-11569. PMID: 37943760. PMCID: PMC10751289. DOI: 10.1093/cercor/bhad389. Open access.
Abstract
Self-generated overt actions are preceded by a slow negativity as measured by electroencephalogram, which has been associated with motor preparation. Recent studies have shown that this neural activity is modulated by the predictability of action outcomes. It is unclear whether inner speech is also preceded by a motor-related negativity and influenced by the same factor. In three experiments, we compared the contingent negative variation elicited in a cue paradigm in an active vs. passive condition. In Experiment 1, participants produced an inner phoneme, at which point an audible phoneme of unpredictable identity was concurrently presented. We found that while passive listening elicited a late contingent negative variation, inner speech production generated a more negative late contingent negative variation. In Experiment 2, the same pattern of results was found when participants were instead asked to overtly vocalize the phoneme. In Experiment 3, the identity of the audible phoneme was made predictable by establishing probabilistic expectations. We observed a smaller late contingent negative variation in the inner speech condition when the identity of the audible phoneme was predictable, but not in the passive condition. These findings suggest that inner speech is associated with motor preparatory activity that may also represent the predicted action-effects of covert actions.
Affiliation(s)
- Lawrence K-h Chung: School of Psychology, University of New South Wales (UNSW Sydney), Mathews Building, Library Walk, Kensington NSW 2052, Australia; Department of Psychology, The Chinese University of Hong Kong, 3/F Sino Building, Chung Chi Road, Shatin, New Territories, Hong Kong SAR, China.
- Bradley N Jack: Research School of Psychology, Australian National University, Building 39, Science Road, Canberra ACT 2601, Australia.
- Oren Griffiths: School of Psychological Sciences, University of Newcastle, Behavioural Sciences Building, University Drive, Callaghan NSW 2308, Australia.
- Daniel Pearson: School of Psychology, University of Sydney, Griffith Taylor Building, Manning Road, Camperdown NSW 2006, Australia.
- David Luque: Department of Basic Psychology and Speech Therapy, Faculty of Psychology, University of Malaga, Dr Ortiz Ramos Street, 29010 Malaga, Spain.
- Anthony W F Harris: Westmead Clinical School, University of Sydney, 176 Hawkesbury Road, Westmead NSW 2145, Australia; Brain Dynamics Centre, Westmead Institute for Medical Research, 176 Hawkesbury Road, Westmead NSW 2145, Australia.
- Kevin M Spencer: Research Service, Veterans Affairs Boston Healthcare System, and Department of Psychiatry, Harvard Medical School, 150 South Huntington Avenue, Boston MA 02130, United States.
- Mike E Le Pelley: School of Psychology, University of New South Wales (UNSW Sydney), Mathews Building, Library Walk, Kensington NSW 2052, Australia.
- Suzanne H-w So: Department of Psychology, The Chinese University of Hong Kong, 3/F Sino Building, Chung Chi Road, Shatin, New Territories, Hong Kong SAR, China.
- Thomas J Whitford: School of Psychology, University of New South Wales (UNSW Sydney), Mathews Building, Library Walk, Kensington NSW 2052, Australia; Brain Dynamics Centre, Westmead Institute for Medical Research, 176 Hawkesbury Road, Westmead NSW 2145, Australia.
5. Sulfaro AA, Robinson AK, Carlson TA. Modelling perception as a hierarchical competition differentiates imagined, veridical, and hallucinated percepts. Neurosci Conscious 2023; 2023:niad018. PMID: 37621984. PMCID: PMC10445666. DOI: 10.1093/nc/niad018. Open access.
Abstract
Mental imagery is a process by which thoughts become experienced with sensory characteristics. Yet, it is not clear why mental images appear diminished compared to veridical images, nor how mental images are phenomenologically distinct from hallucinations, another type of non-veridical sensory experience. Current evidence suggests that imagination and veridical perception share neural resources. If so, we argue that considering how neural representations of externally generated stimuli (i.e. sensory input) and internally generated stimuli (i.e. thoughts) might interfere with one another can sufficiently differentiate between veridical, imaginary, and hallucinatory perception. We here use a simple computational model of a serially connected, hierarchical network with bidirectional information flow to emulate the primate visual system. We show that modelling even first approximations of neural competition can more coherently explain imagery phenomenology than non-competitive models. Our simulations predict that, without competing sensory input, imagined stimuli should ubiquitously dominate hierarchical representations. However, with competition, imagination should dominate high-level representations but largely fail to outcompete sensory inputs at lower processing levels. To interpret our findings, we assume that low-level stimulus information (e.g. in early visual cortices) contributes most to the sensory aspects of perceptual experience, while high-level stimulus information (e.g. towards temporal regions) contributes most to its abstract aspects. Our findings therefore suggest that ongoing bottom-up inputs during waking life may prevent imagination from overriding veridical sensory experience. In contrast, internally generated stimuli may be hallucinated when sensory input is dampened or eradicated. Our approach can explain individual differences in imagery, along with aspects of daydreaming, hallucinations, and non-visual mental imagery.
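The competition dynamic the abstract describes can be illustrated with a toy model. Everything below is a hypothetical sketch (the level count, weights, and divisive normalization are invented for illustration and are not the authors' implementation): a sensory stream injected at the lowest level and an imagery stream injected at the highest compete for representation at every level of a bidirectional chain.

```python
import numpy as np

def imagery_share(sensory_drive, imagery_drive, n_levels=5, steps=200,
                  w_ff=0.8, w_fb=0.4, leak=0.5):
    """Toy serially connected hierarchy with bidirectional flow.
    A sensory-driven representation (entering at the lowest level) and an
    imagery-driven one (entering at the highest) compete at every level via
    divisive normalization. Returns the imagery representation's share of
    activity per level (0 = sensory wins, 1 = imagery wins)."""
    s = np.zeros(n_levels)  # sensory-driven representation
    m = np.zeros(n_levels)  # imagery-driven representation
    for _ in range(steps):
        for x, lo, hi in ((s, sensory_drive, 0.0), (m, 0.0, imagery_drive)):
            new = (1 - leak) * x
            new[1:] += w_ff * x[:-1]   # feedforward sweep (stronger)
            new[:-1] += w_fb * x[1:]   # feedback sweep (weaker)
            new[0] += lo               # external input enters at the bottom
            new[-1] += hi              # internally generated input at the top
            x[:] = new
        total = s + m
        nz = total > 0
        s[nz] /= total[nz]             # competition: normalize per level
        m[nz] /= total[nz]
    return m
```

With both drives present, the imagery share is high only toward the top of the hierarchy, echoing the paper's prediction; setting the sensory drive to zero lets imagery dominate every level, the regime the authors associate with hallucination.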
Affiliation(s)
- Alexander A Sulfaro: School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia.
- Amanda K Robinson: School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia; Queensland Brain Institute, QBI Building 79, The University of Queensland, St Lucia, QLD 4067, Australia.
- Thomas A Carlson: School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia.
6. Simistira Liwicki F, Gupta V, Saini R, De K, Abid N, Rakesh S, Wellington S, Wilson H, Liwicki M, Eriksson J. Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition. Sci Data 2023; 10:378. PMID: 37311807. DOI: 10.1038/s41597-023-02286-w. Open access.
Abstract
The recognition of inner speech, which could give a 'voice' to patients who have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition. Multimodal datasets of brain data enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the high temporal resolution of electroencephalography (EEG), and are therefore promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired nonsimultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or numerical category. Each of the eight word stimuli was assessed over 40 trials, resulting in 320 trials per modality for each participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards speech prostheses.
Affiliation(s)
- Foteini Simistira Liwicki: Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
- Vibha Gupta: Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
- Rajkumar Saini: Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
- Kanjar De: Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
- Nosheen Abid: Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
- Sumit Rakesh: Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
- Holly Wilson: University of Bath, Department of Computer Science, Bath, UK.
- Marcus Liwicki: Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
- Johan Eriksson: Umeå University, Department of Integrative Medical Biology (IMB) and Umeå Center for Functional Brain Imaging (UFBI), Umeå, Sweden.
7. Harrison AW, Hughes G, Rudman G, Christensen BK, Whitford TJ. Exploring the internal forward model: action-effect prediction and attention in sensorimotor processing. Cereb Cortex 2023. PMID: 37288477. DOI: 10.1093/cercor/bhad189. Open access.
Abstract
Action-effect predictions are believed to facilitate movement based on the association of movement with its sensory objectives, and to suppress the neurophysiological response to self- versus externally generated stimuli (i.e. sensory attenuation). However, research is needed to explore theorized differences in the use of action-effect prediction based on whether movement is uncued (i.e. volitional) or in response to external cues (i.e. stimulus-driven). While much of the sensory attenuation literature has examined effects involving the auditory N1, evidence is also conflicted regarding this component's sensitivity to action-effect prediction. In this study (n = 64), we explored the influence of action-effect contingency on event-related potentials associated with visually cued and uncued movement, as well as resultant stimuli. Our findings replicate recent evidence demonstrating reduced N1 amplitude for tones produced by stimulus-driven movement. Despite influencing motor preparation, action-effect contingency was not found to affect N1 amplitudes. Instead, we explore electrophysiological markers suggesting that attentional mechanisms may suppress the neurophysiological response to sound produced by stimulus-driven movement. Our findings demonstrate lateralized parieto-occipital activity that coincides with the auditory N1, corresponds to a reduction in its amplitude, and is topographically consistent with documented effects of attentional suppression. These results provide new insights into sensorimotor coordination and potential mechanisms underlying sensory attenuation.
Affiliation(s)
- Anthony W Harrison: School of Psychology, UNSW Sydney, Mathews Building, Library Walk, Kensington NSW 2052, Australia.
- Gethin Hughes: Department of Psychology, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, United Kingdom.
- Gabriella Rudman: School of Psychology, UNSW Sydney, Mathews Building, Library Walk, Kensington NSW 2052, Australia.
- Bruce K Christensen: Research School of Psychology, Building 39, The Australian National University, Science Rd, Canberra ACT 2601, Australia.
- Thomas J Whitford: School of Psychology, UNSW Sydney, Mathews Building, Library Walk, Kensington NSW 2052, Australia.
8. Skipper JI. A voice without a mouth no more: The neurobiology of language and consciousness. Neurosci Biobehav Rev 2022; 140:104772. PMID: 35835286. DOI: 10.1016/j.neubiorev.2022.104772.
Abstract
Most research on the neurobiology of language ignores consciousness and vice versa. Here, language, with an emphasis on inner speech, is hypothesised to generate and sustain self-awareness, i.e., higher-order consciousness. Converging evidence supporting this hypothesis is reviewed. To account for these findings, a 'HOLISTIC' model of the neurobiology of language, inner speech, and consciousness is proposed. It involves a 'core' set of inner speech production regions that initiate the experience of feeling and hearing words. These take on affective qualities, deriving from activation of associated sensory, motor, and emotional representations, involving a largely unconscious dynamic 'periphery', distributed throughout the whole brain. Responding to those words forms the basis for sustained network activity, involving 'default mode' activation and prefrontal and thalamic/brainstem selection of contextually relevant responses. Evidence for the model is reviewed, supporting neuroimaging meta-analyses are conducted, and comparisons with other theories of consciousness are made. The HOLISTIC model constitutes a more parsimonious and complete account of the 'neural correlates of consciousness' that has implications for a mechanistic account of mental health and wellbeing.
9. Han N, Jack BN, Hughes G, Whitford TJ. The Role of Action-Effect Contingency on Sensory Attenuation in the Absence of Movement. J Cogn Neurosci 2022; 34:1488-1499. PMID: 35579993. DOI: 10.1162/jocn_a_01867.
Abstract
Stimuli that have been generated by a person's own willed motor actions generally elicit a more suppressed electrophysiological, as well as phenomenological, response than identical stimuli that have been externally generated. This well-studied phenomenon, known as sensory attenuation, has mostly been studied by comparing ERPs evoked by self-initiated and externally generated sounds. However, most studies have assumed a uniform action-effect contingency, in which a motor action leads to a resulting sensation 100% of the time. In this study, we investigated the effect of manipulating the probability of action-effect contingencies on the sensory attenuation effect. In Experiment 1, participants watched a moving, marked tickertape while EEG was recorded. In the full-contingency (FC) condition, participants chose whether to press a button by a certain mark on the tickertape. If a button press had not occurred by the mark, a sound would be played a second later 100% of the time. If the button was pressed before the mark, the sound was not played. In the no-contingency (NC) condition, participants observed the same tickertape; in contrast, however, if participants did not press the button by the mark, a sound would occur only 50% of the time (NC-inaction). Furthermore, in the NC condition, if a participant pressed the button before the mark, a sound would also play 50% of the time (NC-action). In Experiment 2, the design was identical, except that a willed action (as opposed to a willed inaction) triggered the sound in the FC condition. The results were consistent across the two experiments: Although there were no differences in N1 amplitude between conditions, the amplitude of the Tb and P2 components was smaller in the FC condition compared with the NC-inaction condition, and the amplitude of the P2 component was also smaller in the FC condition compared with the NC-action condition. The results suggest that the effect of contingency on electrophysiological indices of sensory attenuation may be indexed primarily by the Tb and P2 components, rather than the N1 component, which is most commonly studied.
10. Simistira Liwicki F, Gupta V, Saini R, De K, Liwicki M. Rethinking the Methods and Algorithms for Inner Speech Decoding and Making Them Reproducible. NeuroSci 2022; 3:226-244. DOI: 10.3390/neurosci3020017. Open access.
Abstract
This study focuses on the automatic decoding of inner speech using noninvasive methods, such as electroencephalography (EEG). While inner speech has been a research topic in philosophy and psychology for half a century, recent attempts have been made to decode nonvoiced spoken words using various brain-computer interfaces. The main shortcomings of existing work are a lack of reproducibility and of publicly available data and code. In this work, we investigate various methods (Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM) networks) for the detection of five vowels and six words on a publicly available EEG dataset. The main contributions of this work are (1) a comparison of subject-dependent vs. subject-independent approaches, (2) an analysis of the effect of different preprocessing steps (Independent Component Analysis (ICA), down-sampling, and filtering), and (3) word classification, where we achieve state-of-the-art performance on a publicly available dataset. Overall, we achieve accuracies of 35.20% and 29.21% when classifying five vowels and six words, respectively, using our tuned iSpeech-CNN architecture. All of our code and processed data are publicly available to ensure reproducibility. As such, this work contributes to a deeper understanding and reproducibility of experiments in the area of inner speech detection.
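The subject-dependent vs. subject-independent distinction in contribution (1) comes down to how cross-validation folds are split. A minimal sketch of the subject-independent protocol, with synthetic features and a nearest-centroid classifier standing in for the paper's CNN (the data, feature dimensions, and classifier here are invented for illustration):

```python
import numpy as np

def nearest_centroid(train_X, train_y, test_X):
    """Minimal classifier: assign each test sample the label of the
    nearest class centroid (a stand-in for a trained network)."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = ((test_X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

def loso_accuracy(X, y, subjects):
    """Subject-independent evaluation (leave-one-subject-out): train on
    all subjects but one, test on the held-out subject, and average
    accuracy across folds."""
    accs = []
    for s in np.unique(subjects):
        held_out = subjects == s
        pred = nearest_centroid(X[~held_out], y[~held_out], X[held_out])
        accs.append(float((pred == y[held_out]).mean()))
    return float(np.mean(accs))
```

Within-subject splits let a classifier exploit subject-specific idiosyncrasies; leave-one-subject-out removes that shortcut, which is one reason subject-independent accuracies are typically lower.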
11. Calmels C, Le Garrec S, Brocherie F. Motor Simulation as an Adjunct to Patient Recovery Process Following Intensive Care Unit Admission. Front Med (Lausanne) 2022; 9:868514. PMID: 35372455. PMCID: PMC8968139. DOI: 10.3389/fmed.2022.868514. Open access.
Affiliation(s)
- Claire Calmels: Laboratory Sport, Expertise and Performance (EA 7370), French Institute of Sport, Paris, France.
- Franck Brocherie (corresponding author): Laboratory Sport, Expertise and Performance (EA 7370), French Institute of Sport, Paris, France.
12.
Affiliation(s)
- Wade Munroe: University of Michigan, Department of Philosophy and the Weinberg Institute for Cognitive Science, Ann Arbor, MI, USA.
13. Napoli DJ. Stimuli for initiation: a comparison of dance and (sign) language. J Cult Cogn Sci. DOI: 10.1007/s41809-022-00095-y.
14. Panachakel JT, G RA. Classification of Phonological Categories in Imagined Speech using Phase Synchronization Measure. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:2226-2229. PMID: 34891729. DOI: 10.1109/embc46164.2021.9630699.
Abstract
Phonological categories in articulated speech are defined based on the place and manner of articulation. In this work, we investigate whether the phonological categories of the prompts imagined during speech imagery lead to differences in phase synchronization in various cortical regions that can be discriminated from the EEG captured during the imagination. Nasal and bilabial consonants are the two phonological categories considered, owing to their differences in both place and manner of articulation. Mean phase coherence (MPC) is used to measure phase synchronization, and a shallow neural network (NN) is used as the classifier. As a benchmark, we also designed another NN based on statistical parameters extracted from imagined-speech EEG. The NN trained on beta-band MPC values gives classification results superior to NNs trained on alpha-band MPC values, gamma-band MPC values, or statistical parameters extracted from the EEG. Clinical relevance: The brain-computer interface (BCI) is a promising tool for aiding differently-abled people and for neurorehabilitation. One of the challenges in designing a speech-imagery-based BCI is the identification of speech prompts that lead to distinct neural activations. We have shown that nasal and bilabial consonants lead to dissimilar activations; hence, prompts that are orthogonal in these phonological categories are good choices as speech imagery prompts.
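Mean phase coherence itself is simple to state: it is the magnitude of the time-averaged unit phasor of the instantaneous phase difference between two signals. A numpy-only sketch (assuming Hilbert-transform phases; the authors' filtering and windowing details may differ):

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal of a real input (numpy-only
    equivalent of a Hilbert transform)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)            # one-sided spectral weighting
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def mean_phase_coherence(x, y):
    """MPC between two signals: the magnitude of the mean unit phasor of
    their instantaneous phase difference. 1 = perfectly phase locked,
    0 = no consistent phase relation."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))
```

Two signals at the same frequency give an MPC near 1 regardless of their fixed phase lag, while signals at different frequencies, whose phase difference drifts, give values near 0.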
15.
Abstract
Phonemes are classified into different categories based on the place and manner of articulation. We investigate the differences between the neural correlates of imagined nasal and bilabial consonants (distinct phonological categories). Mean phase coherence is used as a metric for measuring the phase synchronisation between pairs of electrodes in six cortical regions (auditory, motor, prefrontal, sensorimotor, somatosensory and premotor) during the imagery of nasal and bilabial consonants. Statistically significant differences at the 95% confidence level are observed in the beta and lower-gamma bands in various cortical regions. Our observations are in line with the Directions Into Velocities of Articulators and dual stream prediction models, and support the hypothesis that phonological categories not only exist in articulated speech but can also be distinguished from the EEG of imagined speech.
16. Stripeikyte G, Pereira M, Rognini G, Potheegadoo J, Blanke O, Faivre N. Increased Functional Connectivity of the Intraparietal Sulcus Underlies the Attenuation of Numerosity Estimations for Self-Generated Words. J Neurosci 2021; 41:8917-8927. PMID: 34497152. PMCID: PMC8549530. DOI: 10.1523/jneurosci.3164-20.2021. Open access.
Abstract
Previous studies have shown that self-generated stimuli in auditory, visual, and somatosensory domains are attenuated, producing decreased behavioral and neural responses compared with the same stimuli that are externally generated. Yet, whether such attenuation also occurs for higher-level cognitive functions beyond sensorimotor processing remains unknown. In this study, we assessed whether cognitive functions such as numerosity estimations are subject to attenuation in 56 healthy participants (32 women). We designed a task allowing the controlled comparison of numerosity estimations for self-generated (active condition) and externally generated (passive condition) words. Our behavioral results showed a larger underestimation of self-generated compared with externally generated words, suggesting that numerosity estimations for self-generated words are attenuated. Moreover, the linear relationship between the reported and actual number of words was stronger for self-generated words, although the ability to track errors about numerosity estimations was similar across conditions. Neuroimaging results revealed that numerosity underestimation involved increased functional connectivity between the right intraparietal sulcus and an extended network (bilateral supplementary motor area, left inferior parietal lobule, and left superior temporal gyrus) when estimating the number of self-generated versus externally generated words. We interpret our results in light of two models of attenuation and discuss their perceptual versus cognitive origins. SIGNIFICANCE STATEMENT We perceive sensory events as less intense when they are self-generated compared with when they are externally generated. This phenomenon, called attenuation, enables us to distinguish sensory events of self and external origins. Here, we designed a novel fMRI paradigm to assess whether cognitive processes such as numerosity estimations are also subject to attenuation.
When asking participants to estimate the number of words they had generated or passively heard, we found a larger underestimation in the former case, providing behavioral evidence of attenuation. Attenuation was associated with increased functional connectivity of the intraparietal sulcus, a region involved in numerosity processing. Together, our results indicate that the attenuation of self-generated stimuli is not limited to sensory consequences but also impacts cognitive processes such as numerosity estimations.
Affiliation(s)
- Giedre Stripeikyte
- Center for Neuroprosthetics, Swiss Federal Institute of Technology (EPFL), CH-1202 Geneva, Switzerland
- Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), CH-1015 Lausanne, Switzerland
- Michael Pereira
- Center for Neuroprosthetics, Swiss Federal Institute of Technology (EPFL), CH-1202 Geneva, Switzerland
- Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), CH-1015 Lausanne, Switzerland
- Laboratoire de Psychologie et NeuroCognition, CNRS, Univ. Grenoble Alpes, CNRS, LPNC, 38000 Grenoble, France
- Giulio Rognini
- Center for Neuroprosthetics, Swiss Federal Institute of Technology (EPFL), CH-1202 Geneva, Switzerland
- Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), CH-1015 Lausanne, Switzerland
- Jevita Potheegadoo
- Center for Neuroprosthetics, Swiss Federal Institute of Technology (EPFL), CH-1202 Geneva, Switzerland
- Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), CH-1015 Lausanne, Switzerland
- Olaf Blanke
- Center for Neuroprosthetics, Swiss Federal Institute of Technology (EPFL), CH-1202 Geneva, Switzerland
- Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), CH-1015 Lausanne, Switzerland
- Department of Neurology, University of Geneva, CH-1211 Geneva, Switzerland
- Nathan Faivre
- Center for Neuroprosthetics, Swiss Federal Institute of Technology (EPFL), CH-1202 Geneva, Switzerland
- Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), CH-1015 Lausanne, Switzerland
- Laboratoire de Psychologie et NeuroCognition, CNRS, Univ. Grenoble Alpes, CNRS, LPNC, 38000 Grenoble, France
17
Centanni SW, Janes AC, Haggerty DL, Atwood B, Hopf FW. Better living through understanding the insula: Why subregions can make all the difference. Neuropharmacology 2021; 198:108765. [PMID: 34461066 DOI: 10.1016/j.neuropharm.2021.108765]
Abstract
Insula function is considered critical for many motivated behaviors, with proposed functions ranging from attention and behavioral control to emotional regulation and goal-directed, aversion-resistant responding. Further, the insula is implicated in many neuropsychiatric conditions, including substance abuse. More recently, multiple insula subregions have been distinguished based on anatomy, connectivity, and functional contributions. Generally, the posterior insula is thought to encode more somatosensory inputs, which integrate with limbic/emotional information in the middle insula, which in turn integrates with cognitive processes in the anterior insula. Together, these regions provide rapid interoceptive information about the current or predicted situation, facilitating autonomic recruitment and quick, flexible action. Here, we seek to create a robust foundation from which to understand potential subregion differences, and provide direction for future studies. We address subregion differences across humans and rodents, so that the latter's mechanistic interventions can best mesh with the clinical relevance of human conditions. We first consider the insula's suggested roles in humans, then compare subregional studies, and finally describe rodent work. One primary goal is to encourage precision in describing insula subregions, since imprecision (e.g. including both posterior and anterior studies when describing insula work) does a disservice to a larger understanding of insula contributions. Additionally, we note that specific task details can greatly impact the recruitment of various subregions, requiring care and nuance in the design and interpretation of studies. Nonetheless, the central ethological importance of the insula makes continued research to uncover its mechanistic, mood, and behavioral contributions of paramount importance and interest. This article is part of the special issue on 'Neurocircuitry Modulating Drug and Alcohol Abuse'.
18
Si X, Li S, Xiang S, Yu J, Ming D. Imagined speech increases the hemodynamic response and functional connectivity of the dorsal motor cortex. J Neural Eng 2021; 18. [PMID: 34507311 DOI: 10.1088/1741-2552/ac25d9]
Abstract
Objective. Decoding imagined speech from brain signals could provide a more natural, user-friendly way of developing the next generation of brain-computer interface (BCI). Being non-invasive and portable, with relatively high spatial resolution and insensitivity to motion artifacts, functional near-infrared spectroscopy (fNIRS) shows great potential for developing a non-invasive speech BCI. However, there is a lack of fNIRS evidence uncovering the neural mechanism of imagined speech. Our goal is to investigate, with fNIRS, the specific brain regions and the corresponding cortico-cortical functional connectivity features engaged during imagined speech. Approach. fNIRS signals were recorded from 13 subjects' bilateral motor and prefrontal cortices while they overtly and covertly repeated words. Cortical activation was determined from the mean oxygen-hemoglobin concentration changes, and functional connectivity was calculated with Pearson's correlation coefficient. Main results. (a) The bilateral dorsal motor cortex was significantly activated during covert speech, whereas the bilateral ventral motor cortex was significantly activated during overt speech. (b) As a subregion of the motor cortex, the sensorimotor cortex (SMC) showed a dominant dorsal response to the covert speech condition and a dominant ventral response to the overt speech condition. (c) Broca's area was deactivated during covert speech but activated during overt speech. (d) Compared to overt speech, dorsal SMC (dSMC)-related functional connections were enhanced during covert speech. Significance. We provide fNIRS evidence for the involvement of the dSMC in speech imagery. The dSMC is the speech imagery network's key hub and is probably involved in sensorimotor information processing during covert speech. This study could inspire the BCI community to focus on the potential contribution of the dSMC during speech imagery.
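The two measures named in this abstract (activation as mean HbO change; connectivity as Pearson correlation) can be sketched roughly as follows. Array shapes, the baseline convention, and all names are our assumptions, not the authors' analysis code:

```python
import numpy as np

def fnirs_measures(hbo, fs, task_onset_s, task_dur_s):
    """Sketch of the abstract's two measures (names/shapes assumed).

    Activation: mean oxy-haemoglobin (HbO) change in the task window
    relative to the pre-task baseline, per channel.
    Connectivity: pairwise Pearson correlation of the channel time
    courses within the task window.

    hbo: array of shape (n_channels, n_samples).
    """
    on = int(task_onset_s * fs)
    off = int((task_onset_s + task_dur_s) * fs)
    activation = hbo[:, on:off].mean(axis=1) - hbo[:, :on].mean(axis=1)
    connectivity = np.corrcoef(hbo[:, on:off])
    return activation, connectivity
```

Comparing the connectivity matrices from the overt and covert conditions would then reveal condition-specific connections such as the dSMC-related enhancement reported here.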
Affiliation(s)
- Xiaopeng Si
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin International Engineering Institute, Tianjin University, Tianjin 300072, People's Republic of China; Institute of Applied Psychology, Tianjin University, Tianjin 300350, People's Republic of China
- Sicheng Li
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Shaoxin Xiang
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin International Engineering Institute, Tianjin University, Tianjin 300072, People's Republic of China
- Jiayue Yu
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin International Engineering Institute, Tianjin University, Tianjin 300072, People's Republic of China
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
19
Marion G, Di Liberto GM, Shamma SA. The Music of Silence: Part I: Responses to Musical Imagery Encode Melodic Expectations and Acoustics. J Neurosci 2021; 41:7435-7448. [PMID: 34341155 PMCID: PMC8412990 DOI: 10.1523/jneurosci.0183-21.2021]
Abstract
Musical imagery is the voluntary internal hearing of music in the mind without the need for physical action or external stimulation. Numerous studies have already revealed brain areas activated during imagery. However, it remains unclear to what extent imagined music responses preserve the detailed temporal dynamics of the acoustic stimulus envelope and, crucially, whether melodic expectations play any role in modulating responses to imagined music, as they prominently do during listening. These modulations are important as they reflect aspects of the human musical experience, such as its acquisition, engagement, and enjoyment. This study explored the nature of these modulations in imagined music based on EEG recordings from 21 professional musicians (6 females and 15 males). Regression analyses were conducted to demonstrate that imagined neural signals can be predicted accurately, similarly to the listening task, and were sufficiently robust to allow for accurate identification of the imagined musical piece from the EEG. In doing so, our results indicate that imagery and listening tasks elicited an overlapping but distinctive topography of neural responses to sound acoustics, which is in line with previous fMRI literature. Melodic expectation, however, evoked very similar frontal spatial activation in both conditions, suggesting that listening and imagery are supported by the same underlying mechanisms. Finally, neural responses induced by imagery exhibited a specific transformation from the listening condition, which primarily included a relative delay and a polarity inversion of the response. This transformation demonstrates the top-down predictive nature of the expectation mechanisms arising during both listening and imagery. SIGNIFICANCE STATEMENT It is well known that the human brain is activated during musical imagery: the act of voluntarily hearing music in our mind without external stimulation.
It is unclear, however, what the temporal dynamics of this activation are, as well as which musical features are precisely encoded in the neural signals. This study uses an experimental paradigm with high temporal precision to record and analyze the cortical activity during musical imagery. It reveals that neural signals encode music acoustics and melodic expectations during both listening and imagery. Crucially, it also finds that a simple mapping based on a time shift and a polarity inversion could robustly describe the relationship between listening and imagery signals.
Affiliation(s)
- Guilhem Marion
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, 75005, Paris, France
- Giovanni M Di Liberto
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, 75005, Paris, France
- Trinity Centre for Biomedical Engineering, Trinity College Institute of Neuroscience, Department of Mechanical, Manufacturing and Biomedical Engineering, Trinity College, University of Dublin, D02 PN40, Dublin 2, Ireland
- School of Electrical and Electronic Engineering and UCD Centre for Biomedical Engineering, University College Dublin, D04 V1W8, Dublin 4, Ireland
- Shihab A Shamma
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, 75005, Paris, France
- Institute for Systems Research, Electrical and Computer Engineering, University of Maryland, College Park, MD 20742
20
Yao B, Taylor JR, Banks B, Kotz SA. Reading direct speech quotes increases theta phase-locking: Evidence for cortical tracking of inner speech? Neuroimage 2021; 239:118313. [PMID: 34175425 DOI: 10.1016/j.neuroimage.2021.118313]
Abstract
Growing evidence shows that theta-band (4-7 Hz) activity in the auditory cortex phase-locks to rhythms of overt speech. Does theta activity also encode the rhythmic dynamics of inner speech? Previous research established that silent reading of direct speech quotes (e.g., Mary said: "This dress is lovely!") elicits more vivid inner speech than indirect speech quotes (e.g., Mary said that the dress was lovely). As we cannot directly track the phase alignment between theta activity and inner speech over time, we used EEG to measure the brain's phase-locked responses to the onset of speech quote reading. We found that direct (vs. indirect) quote reading was associated with increased theta phase synchrony over trials at 250-500 ms post-reading onset, with sources of the evoked activity estimated in the speech processing network. An eye-tracking control experiment confirmed that increased theta phase synchrony in direct quote reading was not driven by eye movement patterns, and more likely reflects synchronous phase resetting at the onset of inner speech. These findings suggest a functional role of theta phase modulation in reading-induced inner speech.
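The theta phase synchrony over trials measured here is commonly computed as inter-trial phase coherence. A minimal sketch (illustrative only; the function name, filter choices, and epoch layout are our assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def inter_trial_phase_coherence(epochs, fs, band=(4.0, 7.0)):
    """Phase consistency across trials at each time point.

    Each epoch is band-pass filtered (theta by default), the
    instantaneous phase is taken from the analytic (Hilbert) signal,
    and the phases are averaged across trials as unit vectors:
    ITC(t) = |mean_trials exp(i*phi(t))|, ranging from 0 to 1.

    epochs: array of shape (n_trials, n_samples), time-locked to
    reading onset; returns an array of shape (n_samples,).
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, epochs, axis=1)
    phases = np.angle(hilbert(filtered, axis=1))
    return np.abs(np.mean(np.exp(1j * phases), axis=0))
```

Higher ITC in a post-onset window for direct-quote trials, as in the study, would indicate synchronous phase resetting across trials rather than a mere power increase.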
Affiliation(s)
- Bo Yao
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester M13 9PL, United Kingdom.
- Jason R Taylor
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester M13 9PL, United Kingdom
- Briony Banks
- Department of Psychology, Lancaster University, Lancaster LA1 4YF, United Kingdom
- Sonja A Kotz
- Department of Neuropsychology & Psychopharmacology, Maastricht University, Maastricht 6211 LK, Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
21
Luchsinger JR, Fetterly TL, Williford KM, Salimando GJ, Doyle MA, Maldonado J, Simerly RB, Winder DG, Centanni SW. Delineation of an insula-BNST circuit engaged by struggling behavior that regulates avoidance in mice. Nat Commun 2021; 12:3561. [PMID: 34117229 PMCID: PMC8196075 DOI: 10.1038/s41467-021-23674-z]
Abstract
Active responses to stressors involve motor planning, execution, and feedback. Here we identify an insular cortex to BNST (insula→BNST) circuit recruited during restraint stress-induced active struggling that modulates affective behavior. We demonstrate that activity in this circuit tightly follows struggling behavioral events and that the size of the fluorescent sensor transient reports the duration of the struggle event, an effect that fades with repeated exposure to the homotypic stressor. Struggle events are associated with enhanced glutamatergic- and decreased GABAergic signaling in the insular cortex, indicating the involvement of a larger circuit. We delineate the afferent network for this pathway, identifying substantial input from motor- and premotor cortex, somatosensory cortex, and the amygdala. To begin to dissect these incoming signals, we examine the motor cortex input, and show that the cells projecting from motor regions to insular cortex are engaged shortly before struggle event onset. This study thus demonstrates a role for the insula→BNST pathway in monitoring struggling activity and regulating affective behavior.
Affiliation(s)
- Joseph R Luchsinger
- Vanderbilt Center for Addiction Research, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt J.F. Kennedy Center for Research on Human Development, Vanderbilt University School of Medicine, Nashville, TN, USA
- Tracy L Fetterly
- Vanderbilt Center for Addiction Research, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt J.F. Kennedy Center for Research on Human Development, Vanderbilt University School of Medicine, Nashville, TN, USA
- Kellie M Williford
- Vanderbilt Center for Addiction Research, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt J.F. Kennedy Center for Research on Human Development, Vanderbilt University School of Medicine, Nashville, TN, USA
- Gregory J Salimando
- Vanderbilt Center for Addiction Research, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt J.F. Kennedy Center for Research on Human Development, Vanderbilt University School of Medicine, Nashville, TN, USA
- Marie A Doyle
- Vanderbilt Center for Addiction Research, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt J.F. Kennedy Center for Research on Human Development, Vanderbilt University School of Medicine, Nashville, TN, USA
- Department of Molecular Physiology & Biophysics, Vanderbilt University School of Medicine, Nashville, TN, USA
- Jose Maldonado
- Vanderbilt Center for Addiction Research, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt J.F. Kennedy Center for Research on Human Development, Vanderbilt University School of Medicine, Nashville, TN, USA
- Department of Molecular Physiology & Biophysics, Vanderbilt University School of Medicine, Nashville, TN, USA
- Richard B Simerly
- Vanderbilt Center for Addiction Research, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt J.F. Kennedy Center for Research on Human Development, Vanderbilt University School of Medicine, Nashville, TN, USA
- Department of Molecular Physiology & Biophysics, Vanderbilt University School of Medicine, Nashville, TN, USA
- Danny G Winder
- Vanderbilt Center for Addiction Research, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt J.F. Kennedy Center for Research on Human Development, Vanderbilt University School of Medicine, Nashville, TN, USA
- Department of Molecular Physiology & Biophysics, Vanderbilt University School of Medicine, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University School of Medicine, Nashville, TN, USA
- Samuel W Centanni
- Vanderbilt Center for Addiction Research, Vanderbilt University School of Medicine, Nashville, TN, USA
- Department of Molecular Physiology & Biophysics, Vanderbilt University School of Medicine, Nashville, TN, USA
22
Han N, Jack BN, Hughes G, Elijah RB, Whitford TJ. Sensory attenuation in the absence of movement: Differentiating motor action from sense of agency. Cortex 2021; 141:436-448. [PMID: 34146742 DOI: 10.1016/j.cortex.2021.04.010]
Abstract
Sensory attenuation is the phenomenon that stimuli generated by willed motor actions elicit a smaller neurophysiological response than those generated by external sources. It has mostly been investigated in the auditory domain, by comparing ERPs evoked by self-initiated (active condition) and externally-generated (passive condition) sounds. The mechanistic basis of sensory attenuation has been argued to involve a duplicate of the motor command being used to predict the sensory consequences of self-generated movements. An alternative possibility is that the effect is driven by between-condition differences in participants' sense of agency over the sound. In this paper, we disambiguated the effects of motor action and sense of agency on sensory attenuation with a novel experimental paradigm. In Experiment 1, participants watched a moving, marked tickertape while EEG was recorded. In the active condition, participants chose whether to press a button by a certain mark on the tickertape. If a button-press had not occurred by the mark, a tone was played 1 s later; if the button was pressed prior to the mark, the tone was not played. In the passive condition, participants passively watched the animation and were informed about whether a tone would be played on each trial. The design of Experiment 2 was identical, except that the contingencies were reversed (i.e., a button-press by the mark led to a tone). The results were consistent across the two experiments: while there were no differences in N1 amplitude between the active and passive conditions, the amplitude of the Tb component was suppressed in the active condition. The amplitude of the P2 component was enhanced in the active condition in both Experiments 1 and 2. These results suggest that motor actions and sense of agency have differential effects on sensory attenuation to sounds and are indexed by different ERP components.
Affiliation(s)
- Nathan Han
- School of Psychology, The University of New South Wales (UNSW Sydney), Sydney, Australia.
- Bradley N Jack
- Research School of Psychology, Australian National University, Canberra, Australia
- Gethin Hughes
- Department of Psychology, University of Essex, Colchester, UK
- Ruth B Elijah
- School of Psychology, The University of New South Wales (UNSW Sydney), Sydney, Australia
- Thomas J Whitford
- School of Psychology, The University of New South Wales (UNSW Sydney), Sydney, Australia
23
Harrison AW, Mannion DJ, Jack BN, Griffiths O, Hughes G, Whitford TJ. Sensory attenuation is modulated by the contrasting effects of predictability and control. Neuroimage 2021; 237:118103. [PMID: 33957233 DOI: 10.1016/j.neuroimage.2021.118103]
Abstract
Self-generated stimuli have been found to elicit a reduced sensory response compared with externally-generated stimuli. However, much of the literature has not adequately controlled for differences in the temporal predictability and temporal control of stimuli. In two experiments, we compared the N1 (and P2) components of the auditory-evoked potential to self- and externally-generated tones that differed with respect to these two factors. In Experiment 1 (n = 42), we found that increasing temporal predictability reduced N1 amplitude in a manner that may often account for the observed reduction in sensory response to self-generated sounds. We also observed that reducing temporal control over the tones resulted in a reduction in N1 amplitude. The contrasting effects of temporal predictability and temporal control on N1 amplitude meant that sensory attenuation prevailed when controlling for each. Experiment 2 (n = 38) explored the potential effect of selective attention on the results of Experiment 1 by modifying task requirements such that similar levels of attention were allocated to the visual stimuli across conditions. The results of Experiment 2 replicated those of Experiment 1, and suggested that the observed effects of temporal control and sensory attenuation were not driven by differences in attention. Given that self- and externally-generated sensations commonly differ with respect to both temporal predictability and temporal control, findings of the present study may necessitate a re-evaluation of the experimental paradigms used to study sensory attenuation.
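The core ERP comparison behind these N1/P2 analyses can be sketched in a toy form. The window, epoch layout, and names below are our own assumptions for illustration, not the authors' pipeline:

```python
import numpy as np

def mean_erp_amplitude(epochs, fs, epoch_start_s, window_s):
    """Average epochs into an ERP and take the mean amplitude in a
    post-stimulus window (e.g. roughly 80-120 ms for an auditory N1).

    epochs: array of shape (n_trials, n_samples), baseline-corrected
    EEG, with the time axis starting at epoch_start_s relative to
    stimulus onset; window_s is (start, end) in seconds post-onset.
    """
    erp = epochs.mean(axis=0)  # average over trials
    i0 = int(round((window_s[0] - epoch_start_s) * fs))
    i1 = int(round((window_s[1] - epoch_start_s) * fs))
    return erp[i0:i1].mean()
```

Comparing this value between self-initiated and externally-generated tone epochs, a smaller (less negative) value in the active condition would indicate N1 attenuation; the studies above show that temporal predictability and temporal control each shift this comparison.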
24
Abstract
Over the past decade, many researchers have developed systems for decoding covert or imagined speech from the EEG (electroencephalogram). These implementations differ in several aspects, from data acquisition to machine learning algorithms, which often makes comparing them difficult. This review article puts together all the relevant work published in the last decade on decoding imagined speech from EEG into a single framework. Every important aspect of designing such a system, such as the selection of words to be imagined, the number of electrodes to be recorded, temporal and spatial filtering, feature extraction, and classifier choice, is reviewed. This helps a researcher compare the relative merits and demerits of the different approaches and choose the most suitable one. Since speech is the most natural form of communication, which human beings acquire even without formal education, imagined speech is an ideal choice of prompt for evoking brain activity patterns for a BCI (brain-computer interface) system, although research on developing real-time (online) speech-imagery-based BCI systems is still in its infancy. A covert-speech-based BCI can help people with disabilities improve their quality of life, and can also be used for covert communication in environments that do not support vocal communication. This paper also discusses some future directions that will aid the deployment of speech-imagery-based BCIs for practical applications rather than only for laboratory experiments.
Collapse
Affiliation(s)
- Jerrin Thomas Panachakel
- Medical Intelligence and Language Engineering Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bangalore, India
25
Lu L, Sheng J, Liu Z, Gao JH. Neural representations of imagined speech revealed by frequency-tagged magnetoencephalography responses. Neuroimage 2021; 229:117724. [PMID: 33421593 DOI: 10.1016/j.neuroimage.2021.117724]
Abstract
Speech mental imagery is a quasi-perceptual experience that occurs in the absence of real speech stimulation. How imagined speech with higher-order structures such as words, phrases and sentences is rapidly organized and internally constructed remains elusive. To address this issue, subjects were tasked with imagining and perceiving poems along with a sequence of reference sounds presented at a rate of 4 Hz while magnetoencephalography (MEG) was recorded. Given that a sentence in a traditional Chinese poem consists of five syllables, a sentential rhythm was generated at a distinctive frequency of 0.8 Hz. Using frequency tagging, we concurrently tracked the top-down generation of rhythmic constructs embedded in speech mental imagery and the bottom-up sensory-driven activity, which were precisely tagged at the sentence-level rate of 0.8 Hz and the stimulus-level rate of 4 Hz, respectively. We found similar neural responses induced by the internal construction of sentences from syllables for both imagined and perceived poems, and further revealed shared and distinct cohorts of cortical areas corresponding to the sentence-level rhythm in imagery and perception. This study supports the view of a common mechanism between imagery and perception by illustrating the neural representations of higher-order rhythmic structures embedded in imagined and perceived speech.
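The frequency-tagging logic (a 4 Hz stimulus rate yielding a 0.8 Hz sentence rate for five-syllable sentences) can be illustrated with a simple amplitude spectrum on synthetic data; this is a sketch of the principle, not the authors' MEG analysis:

```python
import numpy as np

def amplitude_spectrum(signal, fs):
    """Single-sided amplitude spectrum of a 1-D signal.

    With syllables presented at 4 Hz and sentences spanning five
    syllables, neural tracking of both levels should appear as
    spectral peaks at 4 Hz and at 4 / 5 = 0.8 Hz.
    """
    n = signal.size
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Remove the mean, then scale so a sine of amplitude A at an
    # exact frequency bin yields a peak of height A.
    amp = 2.0 * np.abs(np.fft.rfft(signal - signal.mean())) / n
    return freqs, amp
```

Note the recording must be long enough that 0.8 Hz falls on (or very near) an exact frequency bin, i.e. the frequency resolution 1/duration comfortably divides 0.8 Hz.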
Affiliation(s)
- Lingxi Lu
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, 100871 China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871 China; Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing, 100083 China
- Jingwei Sheng
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871 China; Beijing Quanmag Healthcare, Beijing, 100195 China
- Zhaowei Liu
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871 China; Center for Excellence in Brain Science and Intelligence Technology (Institute of Neuroscience), Chinese Academy of Science, Shanghai, 200031 China
- Jia-Hong Gao
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, 100871 China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871 China; Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, 100871, China.
26

Pinheiro AP, Schwartze M, Kotz SA. Cerebellar circuitry and auditory verbal hallucinations: An integrative synthesis and perspective. Neurosci Biobehav Rev 2020; 118:485-503. [DOI: 10.1016/j.neubiorev.2020.08.004]
27

Li Y, Luo H, Tian X. Mental operations in rhythm: Motor-to-sensory transformation mediates imagined singing. PLoS Biol 2020; 18:e3000504. [PMID: 33017389 PMCID: PMC7561264 DOI: 10.1371/journal.pbio.3000504]
Abstract
What enables the mental activities of thinking verbally or humming in our mind? We hypothesized that the interaction between motor and sensory systems induces speech and melodic mental representations, and that this motor-to-sensory transformation forms the neural basis of verbal thinking and covert singing. Analogous to neural entrainment to auditory stimuli, participants imagined singing lyrics of well-known songs rhythmically while their neural electromagnetic signals were recorded using magnetoencephalography (MEG). We found that when participants imagined singing the same song in similar durations across trials, the delta frequency band (1–3 Hz, similar to the rhythm of the songs) showed more consistent phase coherence across trials. This neural phase tracking of imagined singing was observed in a frontal-parietal-temporal network: the proposed motor-to-sensory transformation pathway, including the inferior frontal gyrus (IFG), insula (INS), premotor area, intra-parietal sulcus (IPS), temporal-parietal junction (TPJ), primary auditory cortex (Heschl's gyrus [HG]), and superior temporal gyrus (STG) and sulcus (STS). These results suggest that neural responses can entrain the rhythm of mental activity. Moreover, theta-band (4–8 Hz) phase coherence was localized in the auditory cortices. The mu (9–12 Hz) and beta (17–20 Hz) bands were observed in the right-lateralized sensorimotor systems, consistent with the singing context. The gamma band was broadly manifested in the observed network. The coherent and frequency-specific activations in the motor-to-sensory transformation network mediate the internal construction of perceptual representations and form the foundation of neural computations for mental operations. Using an imagined singing paradigm with magnetoencephalography recordings, this study shows that neural oscillations in the motor-to-sensory transformation network track inner speech and covert singing.
Affiliation(s)
- Yanzhu Li
- New York University Shanghai, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Huan Luo
- Peking University, Beijing, China
- Xing Tian
- New York University Shanghai, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
28

Getz LM, Toscano JC. The time-course of speech perception revealed by temporally-sensitive neural measures. Wiley Interdiscip Rev Cogn Sci 2020; 12:e1541. [PMID: 32767836 DOI: 10.1002/wcs.1541]
Abstract
Recent advances in cognitive neuroscience have provided a detailed picture of the early time-course of speech perception. In this review, we highlight this work, placing it within the broader context of research on the neurobiology of speech processing, and discuss how these data point us toward new models of speech perception and spoken language comprehension. We focus, in particular, on temporally-sensitive measures that allow us to directly measure early perceptual processes. Overall, the data provide support for two key principles: (a) speech perception is based on gradient representations of speech sounds and (b) speech perception is interactive and receives input from higher-level linguistic context at the earliest stages of cortical processing. Implications for models of speech processing and the neurobiology of language more broadly are discussed. This article is categorized under: Psychology > Language; Psychology > Perception and Psychophysics; Neuroscience > Cognition.
Affiliation(s)
- Laura M Getz
- Department of Psychological Sciences, University of San Diego, San Diego, California, USA
- Joseph C Toscano
- Department of Psychological and Brain Sciences, Villanova University, Villanova, Pennsylvania, USA
29

Zhang W, Liu Y, Wang X, Tian X. The dynamic and task-dependent representational transformation between the motor and sensory systems during speech production. Cogn Neurosci 2020; 11:194-204. [PMID: 32720845 DOI: 10.1080/17588928.2020.1792868]
Abstract
The motor and sensory systems work collaboratively to fulfill cognitive tasks such as speech. For example, it has been hypothesized that neural signals generated in the motor system can transfer directly to the sensory system along a neural pathway (termed motor-to-sensory transformation). Previous studies have demonstrated that the motor-to-sensory transformation is crucial for speech production. However, it is still unclear how neural representations dynamically evolve among distinct neural systems and how such representational transformation depends on task demand and the degree of motor involvement. Using three speech tasks (overt articulation, silent articulation, and imagined articulation), the present fMRI study systematically investigated the representational formats and their dynamics in the motor-to-sensory transformation. Frontal-parietal-temporal neural pathways were observed in all three speech tasks in univariate analyses. The extent of the motor-to-sensory transformation network differed as the degree of motor engagement varied among tasks. Representational similarity analysis (RSA) revealed that articulatory and acoustic information was represented in motor and auditory regions, respectively, in all three tasks. Moreover, articulatory information was cross-represented in the somatosensory and auditory regions in the overt and silent articulation tasks. These results provide evidence for the dynamic, task-dependent transformation between representational formats in the motor-to-sensory transformation.
Affiliation(s)
- Wenjia Zhang
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China; Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
- Yiling Liu
- Department of Educational Sciences, Tianjin Normal University, Tianjin, China
- Xuefei Wang
- Department of Computer Science, Fudan University, Shanghai, China
- Xing Tian
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China; Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
30

Pinheiro AP, Schwartze M, Gutiérrez-Domínguez F, Kotz SA. Real and imagined sensory feedback have comparable effects on action anticipation. Cortex 2020; 130:290-301. [PMID: 32698087 DOI: 10.1016/j.cortex.2020.04.030]
Abstract
The forward model monitors the success of sensory feedback to an action and links it to an efference copy originating in the motor system. The Readiness Potential (RP) of the electroencephalogram has been described as a neural signature of the efference copy. An open question is whether imagined sensory feedback works similarly to real sensory feedback. We investigated the RP to audible and imagined sounds in a button-press paradigm and assessed the role of sound complexity (vocal vs. non-vocal sound). Sensory feedback (both audible and imagined) in response to a voluntary action modulated the RP amplitude time-locked to the button press. The RP amplitude increase was larger for actions with expected sensory feedback (audible or imagined) than for those without sensory feedback, and was associated with N1 suppression for audible sounds. Further, the early RP phase was enhanced when actions elicited an imagined vocal (self-voice) rather than a non-vocal sound. Our results support the notion that sensory feedback is anticipated before voluntary actions. This is the case for both audible and imagined sensory feedback and confirms a role of overt and covert feedback in the forward model.
Affiliation(s)
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands.
- Michael Schwartze
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Sonja A Kotz
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
31

Sierpowska J, León-Cabrera P, Camins À, Juncadella M, Gabarrós A, Rodríguez-Fornells A. The black box of global aphasia: Neuroanatomical underpinnings of remission from acute global aphasia with preserved inner language function. Cortex 2020; 130:340-350. [PMID: 32731197 DOI: 10.1016/j.cortex.2020.06.009]
Abstract
OBJECTIVE: We studied an unusual case of global aphasia (GA) occurring after brain tumor removal and remitting one month after surgery. After recovering, the patient reported on her experience during the episode, which suggested a partial preservation of language abilities (such as semantic processing) and the presence of inner speech (IS) despite a failure in overt speech production. We therefore explored the role of IS and preserved language functions in the acute phase and investigated the neuroanatomical underpinnings of this severe breakdown in language processing.
METHOD: A neuropsychological and language assessment tapping into language production, comprehension, attention, and working memory was carried out both before and three months after surgery. In the acute stage, a simplified protocol was tailored to assess the limited language abilities and further explore the patient's performance on different semantic tasks. The neuroanatomical dimension of these abrupt changes was provided by perioperative structural neuroimaging.
RESULTS: Language and neuropsychological performance were normal or close to normal both before and three months after surgery. In the acute stage, the patient presented severe difficulties with comprehension, production, and repetition, whereas she was able to correctly perform tasks that required conceptual analysis and non-verbal operations. After recovering, the patient reported that she had been able to internally formulate her thoughts despite her overt phonological errors during the episode. Structural neuroimaging revealed that an extra-axial blood collection affected the middle frontal areas during the acute stage and that the white matter circuitry was left-lateralized before surgery.
CONCLUSIONS: We concluded that the global aphasia episode was produced by a combination of the post-operative extra-axial blood collection directly impacting left middle frontal areas and a left-lateralization of the arcuate and/or uncinate fasciculi before surgery. Additionally, we advocate for a comprehensive evaluation of linguistic function that includes the assessment of IS and non-expressive language functions in similar cases.
Affiliation(s)
- Joanna Sierpowska
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands; Radboud University Medical Center, Donders Institute for Brain Cognition and Behaviour, Department of Medical Psychology, Nijmegen, The Netherlands; Cognition and Brain Plasticity Group [Bellvitge Biomedical Research Institute - IDIBELL], Barcelona, Spain; Dept. of Cognition, Development and Educational Psychology, Campus Bellvitge, University of Barcelona, Barcelona, Spain.
- Patricia León-Cabrera
- Cognition and Brain Plasticity Group [Bellvitge Biomedical Research Institute - IDIBELL], Barcelona, Spain; Dept. of Cognition, Development and Educational Psychology, Campus Bellvitge, University of Barcelona, Barcelona, Spain
- Àngels Camins
- Institut de Diagnostic per la Imatge, Centre Bellvitge, Hospital Universitari de Bellvitge, Barcelona, Spain
- Andreu Gabarrós
- Hospital Universitari de Bellvitge (HUB), Neurosurgery Section, Campus Bellvitge, University of Barcelona - IDIBELL, Barcelona, Spain
- Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group [Bellvitge Biomedical Research Institute - IDIBELL], Barcelona, Spain; Dept. of Cognition, Development and Educational Psychology, Campus Bellvitge, University of Barcelona, Barcelona, Spain; Catalan Institution for Research and Advanced Studies, ICREA, Barcelona, Spain
32

Nelson B, Lavoie S, Gawęda Ł, Li E, Sass L, Koren D, McGorry P, Jack B, Parnas J, Polari A, Allott K, Hartmann J, Whitford T. The neurophenomenology of early psychosis: An integrative empirical study. Conscious Cogn 2020; 77:102845. [DOI: 10.1016/j.concog.2019.102845]
33

Abbasi O, Gross J. Beta-band oscillations play an essential role in motor-auditory interactions. Hum Brain Mapp 2019; 41:656-665. [PMID: 31639252 PMCID: PMC7268072 DOI: 10.1002/hbm.24830]
Abstract
In the human brain, self-generated auditory stimuli elicit smaller cortical responses than externally generated sounds. This sensory attenuation is thought to result from predictions about the sensory consequences of self-generated actions that rely on motor commands. Previous research has implicated brain oscillations in this process. However, the specific role of these oscillations in motor-auditory interactions during sensory attenuation is still unclear. In this study, we aimed to address this question using magnetoencephalography (MEG). We recorded MEG in 20 healthy participants while they listened to passively presented and self-generated tones. Our results show that the magnitude of sensory attenuation in bilateral auditory areas is significantly correlated with the modulation of beta-band (15-30 Hz) amplitude in the motor cortex. Moreover, we observed significant directional coupling (Granger causality) in the beta band from the motor cortex toward bilateral auditory areas. Our findings indicate that beta-band oscillations play an important role in mediating top-down interactions between motor and auditory cortex and, in our paradigm, suppress cortical responses to predicted sensory input.
Affiliation(s)
- Omid Abbasi
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany; Centre for Cognitive Neuroimaging, University of Glasgow, Glasgow, United Kingdom; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
34

Lu L, Wang Q, Sheng J, Liu Z, Qin L, Li L, Gao JH. Neural tracking of speech mental imagery during rhythmic inner counting. eLife 2019; 8:e48971. [PMID: 31635693 PMCID: PMC6805153 DOI: 10.7554/elife.48971]
Abstract
The subjective inner experience of mental imagery is among the most ubiquitous human experiences in daily life. Elucidating the neural implementation underpinning the dynamic construction of mental imagery is critical to understanding high-order cognitive function in the human brain. Here, we applied a frequency-tagging method to isolate the top-down process of speech mental imagery from bottom-up sensory-driven activities and concurrently tracked the neural processing time scales corresponding to the two processes in human subjects. Notably, by estimating the source of the magnetoencephalography (MEG) signals, we identified isolated brain networks activated at the imagery-rate frequency. In contrast, more extensive brain regions in the auditory temporal cortex were activated at the stimulus-rate frequency. Furthermore, intracranial stereotactic electroencephalogram (sEEG) evidence confirmed the participation of the inferior frontal gyrus in generating speech mental imagery. Our results indicate that a disassociated neural network underlies the dynamic construction of speech mental imagery independent of auditory perception.
Affiliation(s)
- Lingxi Lu
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Qian Wang
- Department of Clinical Neuropsychology, Sanbo Brain Hospital, Capital Medical University, Beijing, China
- Jingwei Sheng
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Zhaowei Liu
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Lang Qin
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China; Department of Linguistics, The University of Hong Kong, Hong Kong, China
- Liang Li
- Speech and Hearing Research Center, School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Jia-Hong Gao
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China; Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, China
35

Jack BN, Le Pelley ME, Han N, Harris AW, Spencer KM, Whitford TJ. Inner speech is accompanied by a temporally-precise and content-specific corollary discharge. Neuroimage 2019; 198:170-180. [DOI: 10.1016/j.neuroimage.2019.04.038]
36

Demarchi G, Sanchez G, Weisz N. Automatic and feature-specific prediction-related neural activity in the human auditory system. Nat Commun 2019; 10:3440. [PMID: 31371713 PMCID: PMC6672009 DOI: 10.1038/s41467-019-11440-1]
Abstract
Prior experience enables the formation of expectations of upcoming sensory events. However, in the auditory modality, it is not known whether prediction-related neural signals carry feature-specific information. Here, using magnetoencephalography (MEG), we examined whether predictions of future auditory stimuli carry tonotopic specific information. Participants passively listened to sound sequences of four carrier frequencies (tones) with a fixed presentation rate, ensuring strong temporal expectations of when the next stimulus would occur. Expectation of which frequency would occur was parametrically modulated across the sequences, and sounds were occasionally omitted. We show that increasing the regularity of the sequence boosts carrier-frequency-specific neural activity patterns during both the anticipatory and omission periods, indicating that prediction-related neural activity is indeed feature-specific. Our results illustrate that even without bottom-up input, auditory predictions can activate tonotopically specific templates. After listening to a predictable sequence of sounds, we can anticipate and predict the next sound in the sequence. Here, the authors show that during expectation of a sound, the brain generates neural activity matching that which is produced by actually hearing the same sound.
Affiliation(s)
- Gianpaolo Demarchi
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020, Salzburg, Austria.
- Gaëtan Sanchez
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020, Salzburg, Austria; Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, INSERM UMRS 1028, CNRS UMR 5292, Université Claude Bernard Lyon 1, Université de Lyon, F-69000, Lyon, France
- Nathan Weisz
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020, Salzburg, Austria
37
Abstract
Adaptive behavior relies on complex neural processing in multiple interacting networks of both motor and sensory systems. One such interaction employs intrinsic neuronal signals, so-called 'corollary discharge' or 'efference copy', that may be used to predict the sensory consequences of a specific behavioral action, thereby enabling self-generated (reafferent) sensory information and extrinsic (exafferent) sensory inflow to be dissociated. Here, by using well-established examples, we seek to identify the distinguishing features of corollary discharge and efference copy within the framework of predictive motor-to-sensory system coordination. We then extend the more general concept of predictive signaling by showing how neural replicas of a particular motor command not only inform sensory pathways in order to gate reafferent stimulation, but can also be used to directly coordinate distinct and otherwise independent behaviors to the original motor task. Moreover, this motor-to-motor pairing may additionally extend to a gating of sensory input to either or both of the coupled systems. The employment of predictive internal signaling in such motor systems coupling and remote sensory input control thus adds to our understanding of how an organism's central nervous system is able to coordinate the activity of multiple and generally disparate motor and sensory circuits in the production of effective behavior.
38

Whitford TJ. Speaking-Induced Suppression of the Auditory Cortex in Humans and Its Relevance to Schizophrenia. Biol Psychiatry Cogn Neurosci Neuroimaging 2019; 4:791-804. [PMID: 31399393 DOI: 10.1016/j.bpsc.2019.05.011]
Abstract
Speaking-induced suppression (SIS) is the phenomenon whereby the sounds one generates by overt speech elicit a smaller neurophysiological response in the auditory cortex than comparable, externally generated sounds. SIS is a specific example of the more general phenomenon of self-suppression. SIS has been well established in nonhuman animals and is believed to involve the action of corollary discharges. This review summarizes, first, the evidence for SIS in healthy human participants, where it has been most commonly assessed with electroencephalography and/or magnetoencephalography using an experimental paradigm known as "Talk-Listen"; and second, the growing number of Talk-Listen studies that have reported subnormal levels of SIS in patients with schizophrenia. This result is theoretically significant, as it provides a plausible explanation for some of the most distinctive and characteristic symptoms of schizophrenia, namely the first-rank symptoms. In particular, while the failure to suppress the neural consequences of self-generated movements (such as those associated with overt speech) provides a prima facie explanation for delusions of control, the failure to suppress the neural consequences of self-generated inner speech provides a plausible explanation for certain classes of auditory-verbal hallucinations, such as audible thoughts. While the empirical evidence for a relationship between SIS and the first-rank symptoms is currently limited, I predict that future studies with more sensitive experimental designs will confirm its existence. Establishing the existence of a causal, mechanistic relationship would represent a major step forward in our understanding of schizophrenia, which is a necessary precursor to the development of novel treatments.
Affiliation(s)
- Thomas J Whitford
- School of Psychology, The University of New South Wales, Sydney, New South Wales, Australia.
39

McCutcheon RA, Abi-Dargham A, Howes OD. Schizophrenia, Dopamine and the Striatum: From Biology to Symptoms. Trends Neurosci 2019; 42:205-220. [PMID: 30621912 PMCID: PMC6401206 DOI: 10.1016/j.tins.2018.12.004]
Abstract
The mesolimbic hypothesis has been a central dogma of schizophrenia for decades, positing that aberrant functioning of midbrain dopamine projections to limbic regions causes psychotic symptoms. Recently, however, advances in neuroimaging techniques have led to the unanticipated finding that dopaminergic dysfunction in schizophrenia is greatest within nigrostriatal pathways, implicating the dorsal striatum in the pathophysiology and calling into question the mesolimbic theory. At the same time our knowledge of striatal anatomy and function has progressed, suggesting new mechanisms via which striatal dysfunction may contribute to the symptoms of schizophrenia. This Review draws together these developments, to explore what they mean for our understanding of the pathophysiology, clinical manifestations, and treatment of the disorder.
- Techniques for characterising the mesostriatal dopamine system, both in humans and animal models, have advanced significantly over the past decade.
- In vivo imaging studies in schizophrenia patients demonstrate that dopaminergic dysfunction in schizophrenia is greatest in nigrostriatal as opposed to mesolimbic pathways.
- Better understanding of striatal structure and function has enhanced our insight into the neurobiological basis of psychotic symptoms.
- The role of other neurotransmitters in modulating striatal dopamine function merits further exploration, and modulating these neurotransmitter systems has potential to offer new therapeutic strategies.
Affiliation(s)
- Robert A McCutcheon
- Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, De Crespigny Park, London, SE5 8AF, UK; MRC London Institute of Medical Sciences, Hammersmith Hospital, London, W12 0NN, UK; Institute of Clinical Sciences, Faculty of Medicine, Imperial College London, London, W12 0NN, UK; South London and Maudsley NHS Foundation Trust, London, SE5 8AF, UK
- Anissa Abi-Dargham
- Department of Psychiatry, School of Medicine, Stony Brook University, New York, USA
- Oliver D Howes
- Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, De Crespigny Park, London, SE5 8AF, UK; MRC London Institute of Medical Sciences, Hammersmith Hospital, London, W12 0NN, UK; Institute of Clinical Sciences, Faculty of Medicine, Imperial College London, London, W12 0NN, UK; South London and Maudsley NHS Foundation Trust, London, SE5 8AF, UK.
40
Alderson-Day B, Mitrenga K, Wilkinson S, McCarthy-Jones S, Fernyhough C. The varieties of inner speech questionnaire - Revised (VISQ-R): Replicating and refining links between inner speech and psychopathology. Conscious Cogn 2018; 65:48-58. [PMID: 30041067 PMCID: PMC6204885 DOI: 10.1016/j.concog.2018.07.001]
Abstract
Inner speech is a common experience for many but hard to measure empirically. The Varieties of Inner Speech Questionnaire (VISQ) has been used to link everyday phenomenology of inner speech - such as inner dialogue - to various psychopathological traits. However, positive and supportive aspects of inner speech have not always been captured. This study presents a revised version of the scale - the VISQ-R - based on factor analyses in two large samples: respondents to a survey on inner speech and reading (N = 1412) and a sample of university students (N = 377). Exploratory factor analysis indicated a five-factor structure including three previous subscales (dialogic, condensed, and other people in inner speech), an evaluative/critical factor, and a new positive/regulatory factor. Confirmatory factor analysis then replicated this structure in sample 2. Hierarchical regression analyses also replicated a number of relations between inner speech, hallucination-proneness, anxiety, depression, self-esteem, and dissociation.
Affiliation(s)
- Ben Alderson-Day
- Department of Psychology, Durham University, Science Laboratories, South Road, Durham, United Kingdom.
- Kaja Mitrenga
- Department of Psychology, Durham University, Science Laboratories, South Road, Durham, United Kingdom
- Sam Wilkinson
- School of Philosophy, Psychology, and Language Sciences, The University of Edinburgh, Dugald Stewart Building, 3 Charles Street, Edinburgh, United Kingdom
- Simon McCarthy-Jones
- Department of Psychiatry, Trinity College Dublin, Trinity Centre for Health Sciences, St. James Hospital, James's Street, Dublin, Ireland
- Charles Fernyhough
- Department of Psychology, Durham University, Science Laboratories, South Road, Durham, United Kingdom
41
Elijah RB, Le Pelley ME, Whitford TJ. Act Now, Play Later: Temporal Expectations Regarding the Onset of Self-initiated Sensations Can Be Modified with Behavioral Training. J Cogn Neurosci 2018; 30:1145-1156. [DOI: 10.1162/jocn_a_01269]
Abstract
Mechanisms of motor-sensory prediction are dependent on expectations regarding when self-generated feedback will occur. Existing behavioral and electrophysiological research suggests that we have a default expectation for immediate sensory feedback after executing an action. However, studies investigating the adaptability of this temporal expectation have been limited in their ability to differentiate modified expectations per se from effects of stimulus repetition. Here, we use a novel, within-participant procedure that allowed us to disentangle the effects of repetition and expectation, and to determine whether the default assumption for immediate feedback is fixed and resistant to modification or is amenable to change with experience. While EEG was recorded, 45 participants completed a task in which they repeatedly pressed a button to produce a tone that occurred immediately after the button press (immediate training) or after a 100-msec delay (delayed training). The results revealed significant differences in the patterns of cortical change across the two training conditions. Specifically, there was a significant reduction in the cortical response to tones across delayed training blocks but no significant change across immediate training blocks. Furthermore, experience with delayed training did not result in increased cortical activity in response to immediate feedback. These findings suggest that experience with action–sensation delays broadens the window of temporal expectations, allowing for the simultaneous anticipation of both delayed and immediate motor-sensory feedback. This research provides insights into the mechanisms underlying motor-sensory prediction and may represent a novel therapeutic avenue for psychotic symptoms, which are ostensibly associated with sensory prediction abnormalities.
42
Taitz A, Shalom DE, Trevisan MA. Vocal effort modulates the motor planning of short speech structures. Phys Rev E 2018; 97:052406. [PMID: 29906900 DOI: 10.1103/physreve.97.052406]
Abstract
Speech requires programming the sequence of vocal gestures that produce the sounds of words. Here we explored the timing of this program by asking our participants to pronounce, as quickly as possible, a sequence of consonant-consonant-vowel (CCV) structures appearing on screen. We measured the delay between visual presentation and voice onset. In the case of plosive consonants, produced by sharp and well-defined movements of the vocal tract, we found that delays are positively correlated with the duration of the transition between consonants. We then used a battery of statistical tests and mathematical vocal models to show that delays reflect the motor planning of CCVs and that transitions are proxy indicators of the vocal effort needed to produce them. These results support the idea that the effort required to produce the sequence of movements of a vocal gesture modulates the onset of the motor plan.
Affiliation(s)
- Alan Taitz
- Physics Institute of Buenos Aires (IFIBA) CONICET, Buenos Aires, Argentina
- Diego E Shalom
- Department of Physics, Universidad de Buenos Aires, Buenos Aires 1428EGA, Argentina
- Marcos A Trevisan
- Physics Institute of Buenos Aires (IFIBA) CONICET, Buenos Aires, Argentina
- Department of Physics, Universidad de Buenos Aires, Buenos Aires 1428EGA, Argentina
43
Kilteni K, Andersson BJ, Houborg C, Ehrsson HH. Motor imagery involves predicting the sensory consequences of the imagined movement. Nat Commun 2018; 9:1617. [PMID: 29691389 PMCID: PMC5915435 DOI: 10.1038/s41467-018-03989-0]
Abstract
Research on motor imagery has identified many similarities between imagined and executed actions at the behavioral, physiological and neural levels, thus supporting their "functional equivalence". In contrast, little is known about their possible "computational equivalence": specifically, whether the brain's internal forward models predict the sensory consequences of imagined movements as they do for overt movements. Here, we address this question by assessing whether imagined self-generated touch produces an attenuation of real tactile sensations. Previous studies have shown that self-touch feels less intense compared with touch of external origin because the forward models predict the tactile feedback based on a copy of the motor command. Our results demonstrate that imagined self-touch is attenuated just as real self-touch is and that the imagery-induced attenuation follows the same spatiotemporal principles as does the attenuation elicited by overt movements. We conclude that motor imagery recruits the forward models to predict the sensory consequences of imagined movements.
Affiliation(s)
- Konstantina Kilteni
- Department of Neuroscience, Karolinska Institutet, Retzius väg 8, 17177, Stockholm, Sweden.
- Benjamin Jan Andersson
- Department of Neuroscience, Karolinska Institutet, Retzius väg 8, 17177, Stockholm, Sweden
- Christian Houborg
- Department of Neuroscience, Karolinska Institutet, Retzius väg 8, 17177, Stockholm, Sweden
- H Henrik Ehrsson
- Department of Neuroscience, Karolinska Institutet, Retzius väg 8, 17177, Stockholm, Sweden