1. Panagiotaropoulos TI. An integrative view of the role of prefrontal cortex in consciousness. Neuron 2024; 112:1626-1641. PMID: 38754374. DOI: 10.1016/j.neuron.2024.04.028.
Abstract
The involvement of the prefrontal cortex (PFC) in consciousness is an ongoing focus of intense investigation. An important question is whether representations of conscious contents and experiences in the PFC are confounded by post-perceptual processes related to cognitive functions. Here, I review recent findings suggesting that neuronal representations of consciously perceived contents, in the absence of post-perceptual processes, can indeed be observed in the PFC. Slower ongoing fluctuations in the electrophysiological state of the PFC seem to control the stability and updates of these prefrontal representations of conscious awareness. In addition to conscious perception, the PFC has been shown to play a critical role in controlling the levels of consciousness as observed during anesthesia, while prefrontal lesions can result in severe loss of perceptual awareness. Together, the convergence of these processes in the PFC suggests its integrative role in consciousness and highlights the complex nature of consciousness itself.
2. Hou Y, Nanduri D, Granley J, Weiland JD, Beyeler M. Axonal stimulation affects the linear summation of single-point perception in three Argus II users. J Neural Eng 2024; 21:026031. PMID: 38457841. PMCID: PMC11003296. DOI: 10.1088/1741-2552/ad31c4.
Abstract
Objective. Retinal implants use electrical stimulation to elicit perceived flashes of light ('phosphenes'). Single-electrode phosphene shape has been shown to vary systematically with stimulus parameters and the retinal location of the stimulating electrode, due to incidental activation of passing nerve fiber bundles. However, this knowledge has yet to be extended to paired-electrode stimulation. Approach. We retrospectively analyzed 3548 phosphene drawings made by three blind participants implanted with an Argus II Retinal Prosthesis. Phosphene shape (characterized by area, perimeter, major and minor axis length) and the number of perceived phosphenes were averaged across trials and correlated with the corresponding single-electrode parameters. In addition, the number of phosphenes was correlated with stimulus amplitude and neuroanatomical parameters: electrode-retina distance, electrode-fovea distance, and the electrode-electrode distance across axon bundles ('between-axon') and along them ('along-axon'). Statistical analyses were conducted using linear regression and partial correlation analysis. Main results. Simple regression revealed that each paired-electrode shape descriptor could be predicted by the sum of the two corresponding single-electrode shape descriptors (p < .001). Multiple regression revealed that paired-electrode phosphene shape was primarily predicted by stimulus amplitude and electrode-fovea distance (p < .05). Interestingly, the number of elicited phosphenes tended to increase with between-axon distance (p < .05), but not with along-axon distance, in two out of three participants. Significance. The shape of phosphenes elicited by paired-electrode stimulation was well predicted by the shape of the corresponding single-electrode phosphenes, suggesting that two-point perception can be expressed as the linear summation of single-point perception. The impact of the between-axon distance on the perceived number of phosphenes provides further evidence in support of the axon map model for epiretinal stimulation. These findings contribute to the growing literature on phosphene perception and have important implications for the design of future retinal prostheses.
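The linear-summation result lends itself to a compact numerical check. Below is a minimal sketch with synthetic shape descriptors (the data, value ranges, and noise level are all hypothetical, not taken from the study): simulate paired-electrode phosphene areas as the sum of two single-electrode areas plus noise, then recover the relationship by simple regression, as in the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical single-electrode phosphene areas (arbitrary units).
area_e1 = rng.uniform(0.5, 3.0, size=n_trials)
area_e2 = rng.uniform(0.5, 3.0, size=n_trials)

# Simulate paired-electrode areas under the linear-summation hypothesis,
# with trial-to-trial noise.
area_pair = area_e1 + area_e2 + rng.normal(0.0, 0.2, size=n_trials)

# Simple regression of the paired descriptor on the summed
# single-electrode descriptors: y = b0 + b1 * (x1 + x2).
x = area_e1 + area_e2
b1, b0 = np.polyfit(x, area_pair, 1)        # slope, intercept
r = np.corrcoef(x, area_pair)[0, 1]

print(f"slope={b1:.2f}, intercept={b0:.2f}, r={r:.2f}")
```

A slope near 1 with a small intercept is what the linear-summation account predicts; a reliably different slope would instead indicate sub- or supra-linear interaction between the two electrodes.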
Affiliation(s)
- Yuchen Hou
- Department of Computer Science, University of California, Santa Barbara, CA, United States of America
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, United States of America
- Devyani Nanduri
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, United States of America
- Jacob Granley
- Department of Computer Science, University of California, Santa Barbara, CA, United States of America
- James D Weiland
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, United States of America
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States of America
- Michael Beyeler
- Department of Computer Science, University of California, Santa Barbara, CA, United States of America
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, United States of America
3. Mollard S, Wacongne C, Bohte SM, Roelfsema PR. Recurrent neural networks that learn multi-step visual routines with reinforcement learning. PLoS Comput Biol 2024; 20:e1012030. PMID: 38683837. PMCID: PMC11081502. DOI: 10.1371/journal.pcbi.1012030.
Abstract
Many cognitive problems can be decomposed into series of subproblems that are solved sequentially by the brain. When subproblems are solved, relevant intermediate results need to be stored by neurons and propagated to the next subproblem, until the overarching goal has been completed. Here, we consider visual tasks, which can be decomposed into sequences of elemental visual operations. Experimental evidence suggests that intermediate results of the elemental operations are stored in working memory as an enhancement of neural activity in the visual cortex. The focus of enhanced activity is then available for subsequent operations to act upon. The central question is how the elemental operations and their sequencing can emerge in neural networks that are trained only with rewards, in a reinforcement learning setting. We propose a new recurrent neural network architecture that can learn composite visual tasks requiring the application of successive elemental operations. Specifically, we selected three tasks for which electrophysiological recordings from monkey visual cortex are available. To train the networks, we used RELEARNN, a biologically plausible four-factor Hebbian learning rule that is local both in time and space. We report that networks learn elemental operations, such as contour grouping and visual search, and execute sequences of operations, solely based on the characteristics of the visual stimuli and the reward structure of a task. After training was completed, the activity of the units of the neural network elicited by behaviorally relevant image items was stronger than that elicited by irrelevant ones, just as has been observed in the visual cortex of monkeys solving the same tasks. Relevant information that needed to be exchanged between subroutines was maintained as a focus of enhanced activity and passed on to the subsequent subroutines. Our results demonstrate how a biologically plausible learning rule can train a recurrent neural network on multistep visual tasks.
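The four-factor Hebbian idea behind RELEARNN can be sketched in a few lines. This is a toy illustration under assumed dynamics, not the published rule: a weight change requires the conjunction of presynaptic activity, postsynaptic activity, a feedback-derived credit-assignment signal, and a global reward-prediction error. The function name and all quantities below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 4, 2
w = rng.normal(0.0, 0.1, size=(n_out, n_in))

def four_factor_update(w, pre, post, feedback, delta, lr=0.1):
    """Toy four-factor update: learning rate x global reward-prediction
    error (delta) x presynaptic activity x postsynaptic activity, gated
    by a feedback (credit-assignment) signal. Everything except the
    scalar delta is local to the synapse."""
    eligibility = np.outer(post * feedback, pre)
    return w + lr * delta * eligibility

pre = rng.uniform(0.1, 1.0, n_in)       # presynaptic rates
post = rng.uniform(0.1, 1.0, n_out)     # postsynaptic rates
feedback = np.array([1.0, 0.0])         # only unit 0 receives credit
delta = 0.5                             # reward exceeded expectation

w_new = four_factor_update(w, pre, post, feedback, delta)
print("credited row changed:", not np.allclose(w_new[0], w[0]))
print("uncredited row unchanged:", np.allclose(w_new[1], w[1]))
```

The gating by the feedback signal is what makes credit assignment selective: only synapses onto units tagged by feedback are updated when the reward-prediction error arrives.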
Affiliation(s)
- Sami Mollard
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
- Catherine Wacongne
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
- AnotherBrain, Paris, France
- Sander M. Bohte
- Machine Learning Group, Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
- Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Pieter R. Roelfsema
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
- Laboratory of Visual Brain Therapy, Sorbonne Université, Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Institut de la Vision, Paris, France
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University, Amsterdam, The Netherlands
- Department of Neurosurgery, Academic Medical Center, Amsterdam, The Netherlands
4. Machida I, Shishikura M, Yamane Y, Sakai K. Representation of Natural Contours by a Neural Population in Monkey V4. eNeuro 2024; 11:ENEURO.0445-23.2024. PMID: 38423791. PMCID: PMC10946029. DOI: 10.1523/eneuro.0445-23.2024.
Abstract
The cortical visual area V4 has been considered to code contours that contribute to the intermediate-level representation of objects. Neural responses to the complex contour features intrinsic to natural contours are expected to clarify the essence of this representation. To approach the cortical coding of natural contours, we investigated the simultaneous coding of multiple contour features in monkey (Macaca fuscata) V4 neurons and their population-level representation. A substantial number of neurons showed significant tuning for two or more features, such as curvature and closure, indicating that many V4 neurons simultaneously code multiple contour features. A large portion of the neurons responded vigorously to acutely curved contours that surrounded the center of the classical receptive field, suggesting that V4 neurons tend to code prominent features of object contours. Analysis of the mutual information (MI) between the neural responses and each contour feature showed that most neurons exhibited similar magnitudes for each type of MI, indicating that the responses of many neurons depended on multiple contour features. We next examined the population-level representation using multidimensional scaling analysis. The neural preference for the multiple contour features increased along the primary axis, while the preference for natural over silhouette stimuli increased along the secondary axis, indicating the contribution of both multiple contour features and surface textures to the population responses. Our analyses suggest that V4 neurons simultaneously code multiple contour features in natural images and represent contour and surface properties at the population level.
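A plug-in (histogram-based) estimate is one common way to compute mutual information between spike counts and a contour feature. The sketch below uses simulated data (hypothetical curvature categories and Poisson spike counts, not the recorded responses) and compares the estimate against a shuffled control:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stimuli: a curvature category (0-3) per trial, and a
# spike count whose mean increases with curvature.
curvature = rng.integers(0, 4, size=2000)
spikes = rng.poisson(lam=2.0 + 3.0 * curvature)

def mutual_information(x, y, bins=8):
    """Plug-in MI estimate (bits) from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                        # skip empty cells (0 log 0 = 0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

mi = mutual_information(curvature, spikes)
mi_shuffled = mutual_information(curvature, rng.permutation(spikes))
print(f"MI = {mi:.2f} bits, shuffled control = {mi_shuffled:.2f} bits")
```

Comparing each feature's MI against a shuffled control (and against the MI for other features, as in the paper) distinguishes genuine multi-feature coding from estimator bias.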
Affiliation(s)
- Itsuki Machida
- Department of Computer Science, University of Tsukuba, Tsukuba 305-8573, Japan
- Motofumi Shishikura
- Department of Computer Science, University of Tsukuba, Tsukuba 305-8573, Japan
- Yukako Yamane
- Neural Computation Unit, Okinawa Institute of Science and Technology, Okinawa 904-0495, Japan
- Ko Sakai
- Department of Computer Science, University of Tsukuba, Tsukuba 305-8573, Japan
5. Verzhbinsky IA, Rubin DB, Kajfez S, Bu Y, Kelemen JN, Kapitonava A, Williams ZM, Hochberg LR, Cash SS, Halgren E. Co-occurring ripple oscillations facilitate neuronal interactions between cortical locations in humans. Proc Natl Acad Sci U S A 2024; 121:e2312204121. PMID: 38157452. PMCID: PMC10769862. DOI: 10.1073/pnas.2312204121.
Abstract
How the human cortex integrates ("binds") information encoded by spatially distributed neurons remains largely unknown. One hypothesis suggests that synchronous bursts of high-frequency oscillations ("ripples") contribute to binding by facilitating integration of neuronal firing across different cortical locations. While studies have demonstrated that ripples modulate local activity in the cortex, it is not known whether their co-occurrence coordinates neural firing across larger distances. We tested this hypothesis using local field potentials and single-unit firing from four 96-channel microelectrode arrays in the supragranular cortex of three patients. Neurons in co-rippling locations showed increased short-latency co-firing, prediction of each other's firing, and co-participation in neural assemblies. Effects were similar for putative pyramidal cells and interneurons, during non-rapid eye movement sleep and waking, in temporal and Rolandic cortices, and at distances up to 16 mm (the longest tested). Increased co-prediction during co-ripples was maintained when firing-rate changes were equated, indicating that it was not secondary to non-oscillatory activation. Co-ripple-enhanced prediction was strongly modulated by ripple phase, supporting the most commonly posited mechanism for binding by synchrony. Co-ripple-enhanced prediction is reciprocal, synergistic with local upstates, and further enhanced when multiple sites co-ripple, supporting re-entrant facilitation. Together, these results support the hypothesis that trans-cortical co-occurring ripples increase the integration of firing of neurons in different cortical locations and do so in part through phase modulation rather than unstructured activation.
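The short-latency co-firing measure can be illustrated on synthetic spike trains. In this hedged sketch, two units are either independent or locked to a shared ripple-frequency event train; the timing parameters and the co-firing definition below are illustrative assumptions, not the paper's exact analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

def cofiring_fraction(spikes_a, spikes_b, window=0.005):
    """Fraction of unit-A spikes with at least one unit-B spike within
    +/- `window` seconds (a crude short-latency co-firing measure).
    Both spike-time arrays must be sorted."""
    idx = np.searchsorted(spikes_b, spikes_a)
    lo = np.clip(idx - 1, 0, len(spikes_b) - 1)
    hi = np.clip(idx, 0, len(spikes_b) - 1)
    nearest = np.minimum(np.abs(spikes_a - spikes_b[lo]),
                         np.abs(spikes_a - spikes_b[hi]))
    return float(np.mean(nearest <= window))

# Hypothetical 60 s recording, 300 spikes per unit. "Locked" units fire
# near shared ripple events; "independent" units fire at random times.
ripples = np.sort(rng.uniform(0, 60, 300))
a_locked = np.sort(ripples + rng.normal(0, 0.002, 300))
b_locked = np.sort(ripples + rng.normal(0, 0.002, 300))
a_indep = np.sort(rng.uniform(0, 60, 300))
b_indep = np.sort(rng.uniform(0, 60, 300))

c_locked = cofiring_fraction(a_locked, b_locked)
c_indep = cofiring_fraction(a_indep, b_indep)
print(f"co-firing: locked={c_locked:.2f}, independent={c_indep:.2f}")
```

The locked pair co-fires far more within the short window even though both pairs have identical mean rates, which is the signature the study's co-ripple analyses control for by equating firing rates.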
Affiliation(s)
- Ilya A. Verzhbinsky
- Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92093
- Medical Scientist Training Program, University of California San Diego, La Jolla, CA 92093
- Daniel B. Rubin
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA 02114
- Sophie Kajfez
- Department of Radiology, University of California San Diego, La Jolla, CA 92093
- Yiting Bu
- Department of Neurosciences, University of California San Diego, La Jolla, CA 92093
- Jessica N. Kelemen
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA 02114
- Anastasia Kapitonava
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA 02114
- Ziv M. Williams
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA 02114
- Leigh R. Hochberg
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA 02114
- Center for Neurorestoration and Neurotechnology, Department of Veterans Affairs, Providence, RI 02908
- Carney Institute for Brain Science and School of Engineering, Brown University, Providence, RI 02912
- Sydney S. Cash
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA 02114
- Eric Halgren
- Department of Radiology, University of California San Diego, La Jolla, CA 92093
- Department of Neurosciences, University of California San Diego, La Jolla, CA 92093
6. Vacher J, Launay C, Mamassian P, Coen-Cagli R. Measuring uncertainty in human visual segmentation. PLoS Comput Biol 2023; 19:e1011483. PMID: 37747914. PMCID: PMC10553811. DOI: 10.1371/journal.pcbi.1011483.
Abstract
Segmenting visual stimuli into distinct groups of features and visual objects is central to visual function. Classical psychophysical methods have helped uncover many rules of human perceptual segmentation, and recent progress in machine learning has produced successful algorithms. Yet, the computational logic of human segmentation remains unclear, partially because we lack well-controlled paradigms to measure perceptual segmentation maps and compare models quantitatively. Here we propose a new, integrated approach: given an image, we measure multiple pixel-based same-different judgments and perform model-based reconstruction of the underlying segmentation map. The reconstruction is robust to several experimental manipulations and captures the variability of individual participants. We demonstrate the validity of the approach on human segmentation of natural images and composite textures. We show that image uncertainty affects measured human variability, and it influences how participants weigh different visual features. Because any putative segmentation algorithm can be inserted to perform the reconstruction, our paradigm affords quantitative tests of theories of perception as well as new benchmarks for segmentation algorithms.
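The reconstruction idea, inferring a segmentation map from many pairwise same-different judgments, can be sketched with a toy spectral step. Everything below (the 12-pixel "image", the judgment noise model, and the eigenvector heuristic standing in for the paper's model-based reconstruction) is a simplified assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "image": 12 pixels, true segmentation into two segments.
true_labels = np.array([0] * 6 + [1] * 6)

# Simulate noisy same-different judgments for every pixel pair:
# "same segment" is reported with p = 0.9 when true, p = 0.1 otherwise,
# over 20 repeats per pair.
p_same = np.where(true_labels[:, None] == true_labels[None, :], 0.9, 0.1)
votes = rng.binomial(20, p_same)
affinity = (votes + votes.T) / 40.0     # symmetrized agreement rates

# Reconstruction, reduced to one spectral step: the sign of the leading
# eigenvector of the centered affinity matrix splits the pixels in two.
centered = affinity - affinity.mean()
_, vecs = np.linalg.eigh(centered)
labels = (vecs[:, -1] > 0).astype(int)

# Accuracy up to a label swap.
acc = max(np.mean(labels == true_labels), np.mean(labels != true_labels))
print(f"reconstruction accuracy: {acc:.2f}")
```

Because any reconstruction procedure can be slotted in where the spectral step is, the same simulated-judgment setup can serve as a quantitative benchmark, which is the paradigm's stated appeal.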
Affiliation(s)
- Jonathan Vacher
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, Paris, France
- Claire Launay
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Pascal Mamassian
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, Paris, France
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, New York, United States of America
7. Verzhbinsky IA, Rubin DB, Kajfez S, Bu Y, Kelemen JN, Kapitonava A, Williams ZM, Hochberg LR, Cash SS, Halgren E. Co-occurring ripple oscillations facilitate neuronal interactions between cortical locations in humans. bioRxiv [Preprint] 2023:2023.05.20.541588. PMID: 37292943. PMCID: PMC10245779. DOI: 10.1101/2023.05.20.541588.
Abstract
Synchronous bursts of high-frequency oscillations ('ripples') are hypothesized to contribute to binding by facilitating integration of neuronal firing across cortical locations. We tested this hypothesis using local field potentials and single-unit firing from four 96-channel microelectrode arrays in supragranular cortex of three patients. Neurons in co-rippling locations showed increased short-latency co-firing, prediction of each other's firing, and co-participation in neural assemblies. Effects were similar for putative pyramidal cells and interneurons, during NREM sleep and waking, in temporal and Rolandic cortices, and at distances up to 16 mm. Increased co-prediction during co-ripples was maintained when firing-rate changes were equated, and was strongly modulated by ripple phase. Co-ripple-enhanced prediction is reciprocal, synergistic with local upstates, and further enhanced when multiple sites co-ripple. Together, these results support the hypothesis that trans-cortical co-ripples increase the integration of firing of neurons in different cortical locations, and do so in part through phase modulation rather than unstructured activation.
Affiliation(s)
- Ilya A. Verzhbinsky
- Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92093, USA
- Medical Scientist Training Program, University of California San Diego, La Jolla, CA 92093, USA
- Daniel B. Rubin
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Boston, MA 02114, USA
- Sophie Kajfez
- Department of Radiology, University of California San Diego, La Jolla, CA 92093, USA
- Yiting Bu
- Department of Neurosciences, University of California San Diego, La Jolla, CA 92093, USA
- Jessica N. Kelemen
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA 02114, USA
- Anastasia Kapitonava
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA 02114, USA
- Ziv M. Williams
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA 02114
- Program in Neuroscience, Harvard-MIT Program in Health Sciences and Technology, Harvard Medical School, Boston, MA 02115
- Leigh R. Hochberg
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Boston, MA 02114, USA
- Center for Neurorestoration and Neurotechnology, Department of Veterans Affairs, Providence, RI 02908, USA
- Carney Institute for Brain Science and School of Engineering, Brown University, Providence, RI 02912, USA
- Sydney S. Cash
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Boston, MA 02114, USA
- Eric Halgren
- Department of Radiology, University of California San Diego, La Jolla, CA 92093, USA
- Department of Neurosciences, University of California San Diego, La Jolla, CA 92093, USA
8. Li X, Wang S. Toward a computational theory of manifold untangling: from global embedding to local flattening. Front Comput Neurosci 2023; 17:1197031. PMID: 37324172. PMCID: PMC10264604. DOI: 10.3389/fncom.2023.1197031.
Abstract
It has been hypothesized that ventral-stream processing for object recognition is based on a mechanism called cortically local subspace untangling. A mathematical abstraction of object recognition by the visual cortex is how to untangle the manifolds associated with different object categories. Such a manifold untangling problem is closely related to the celebrated kernel trick in metric space. In this paper, we conjecture that there is a more general solution to manifold untangling in topological space, without artificially defining any distance metric. Geometrically, we can either embed a manifold in a higher-dimensional space to promote selectivity or flatten a manifold to promote tolerance. General strategies for both global manifold embedding and local manifold flattening are presented and connected with existing work on the untangling of image, audio, and language data. We also discuss the implications of manifold untangling for motor control and internal representations.
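The embedding-for-selectivity idea can be made concrete with a classic toy case: two concentric rings are tangled in 2D (no linear readout separates them), but lifting them into 3D with a single radial feature untangles them. The perceptron check and all data below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two tangled object manifolds in 2D: concentric rings.
n = 400
theta = rng.uniform(0, 2 * np.pi, n)
radius = np.where(np.arange(n) < n // 2, 1.0, 2.0) + rng.normal(0, 0.05, n)
x, y = radius * np.cos(theta), radius * np.sin(theta)
labels = (np.arange(n) >= n // 2).astype(int)

def linearly_separable(features, labels, epochs=500):
    """Perceptron check: returns True iff a linear readout (with bias)
    classifies every point within the epoch budget."""
    X = np.column_stack([features, np.ones(len(labels))])
    t = np.where(labels == 1, 1.0, -1.0)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, ti in zip(X, t):
            if ti * (xi @ w) <= 0:      # misclassified: update weights
                w += ti * xi
                mistakes += 1
        if mistakes == 0:
            return True
    return False

flat2d = np.column_stack([x, y])
lifted = np.column_stack([x, y, x**2 + y**2])   # global embedding to 3D

sep_flat = linearly_separable(flat2d, labels)
sep_lifted = linearly_separable(lifted, labels)
print(f"separable in 2D: {sep_flat}; after embedding: {sep_lifted}")
```

The added radial coordinate plays the role of a kernel-style lift: the higher-dimensional embedding buys selectivity (a simple readout now works), which is one of the two geometric strategies the paper contrasts with local flattening.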
Affiliation(s)
- Xin Li
- Lane Department of Computer Science and Electrical Engineering (CSEE), West Virginia University, Morgantown, WV, United States
- Shuo Wang
- Department of Radiology, Washington University in St. Louis, St. Louis, MO, United States