1. Phangwiwat T, Phunchongharn P, Wongsawat Y, Chatnuntawech I, Wang S, Chunharas C, Sprague TC, Woodman GF, Itthipuripat S. Sustained attention operates via dissociable neural mechanisms across different eccentric locations. Sci Rep 2024; 14:11188. PMID: 38755251; PMCID: PMC11099062; DOI: 10.1038/s41598-024-61171-7.
Abstract
In primates, foveal and peripheral vision have distinct neural architectures and functions. However, it has been debated whether selective attention operates via the same or different neural mechanisms across eccentricities. We tested these alternative accounts by examining the effects of selective attention on the steady-state visually evoked potential (SSVEP) and a fronto-parietal signal (SND), both measured via EEG from human subjects performing a sustained visuospatial attention task. With a negligible level of eye movements, both the SSVEP and the SND exhibited heterogeneous patterns of attentional modulation across eccentricities. Specifically, the attentional modulations of these signals peaked at parafoveal locations and fell off as visual stimuli appeared closer to the fovea or farther toward the periphery. With a relatively higher level of eye movements, however, these heterogeneous patterns of attentional modulation were less robust. These data demonstrate that the top-down influence of covert visuospatial attention on early sensory processing in human cortex depends on eccentricity and on the level of saccadic responses. Taken together, the results suggest that sustained visuospatial attention operates differently across eccentric locations, providing new understanding of how attention augments sensory representations depending on where the attended stimulus appears.
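The SSVEP measure used above rests on a standard computation: the amplitude of the EEG spectrum at the frequency at which a stimulus flickers. A minimal illustrative sketch, not the authors' pipeline; the sampling rate, tag frequency, and synthetic data are assumptions:

```python
import numpy as np

def ssvep_amplitude(eeg, fs, tag_freq):
    """Single-sided amplitude of the EEG spectrum at the flicker frequency.

    eeg: 1-D array (one channel, one trial); fs: sampling rate in Hz;
    tag_freq: frequency (Hz) at which the attended stimulus flickered.
    """
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg)) * 2 / n      # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    idx = np.argmin(np.abs(freqs - tag_freq))        # nearest FFT bin
    return spectrum[idx]

# Synthetic demo: a 12 Hz "flicker response" of amplitude 2 buried in noise.
fs, dur, f_tag = 500, 2.0, 12.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * f_tag * t) + rng.normal(0, 1, t.size)
amp = ssvep_amplitude(eeg, fs, f_tag)
```

With 2 s of data the 12 Hz tag falls exactly on an FFT bin, so the recovered amplitude stays close to the true value of 2 despite the noise.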
Affiliation(s)
- Tanagrit Phangwiwat: Neuroscience Center for Research and Innovation (NX), Learning Institute, King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10140, Thailand; Big Data Experience Center (BX), King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10600, Thailand; Department of Computer Engineering, King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10140, Thailand
- Phond Phunchongharn: Big Data Experience Center (BX), King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10600, Thailand; Department of Computer Engineering, King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10140, Thailand
- Yodchanan Wongsawat: Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, Nakhon Pathom, 73170, Thailand
- Itthi Chatnuntawech: National Nanotechnology Center, National Science and Technology Development Agency, Pathum Thani, 12120, Thailand
- Sisi Wang: Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Department of Psychology, Vanderbilt University, Nashville, TN, 37235, USA
- Chaipat Chunharas: Cognitive Clinical and Computational Neuroscience Center of Excellence, Department of Internal Medicine, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand; Chula Neuroscience Center, King Chulalongkorn Memorial Hospital, Thai Red Cross Society, Bangkok, 10330, Thailand
- Thomas C Sprague: Department of Psychological and Brain Sciences, University of California Santa Barbara, Santa Barbara, CA, 93106, USA
- Geoffrey F Woodman: Department of Psychology, Vanderbilt University, Nashville, TN, 37235, USA
- Sirawaj Itthipuripat: Neuroscience Center for Research and Innovation (NX), Learning Institute, King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10140, Thailand; Big Data Experience Center (BX), King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, 10600, Thailand; Department of Psychology, Vanderbilt University, Nashville, TN, 37235, USA
2. da Costa D, Kornemann L, Goebel R, Senden M. Convolutional neural networks develop major organizational principles of early visual cortex when enhanced with retinal sampling. Sci Rep 2024; 14:8980. PMID: 38637554; PMCID: PMC11026486; DOI: 10.1038/s41598-024-59376-x.
Abstract
Primate visual cortex exhibits key organizational principles: cortical magnification, eccentricity-dependent receptive field size and spatial frequency tuning, and radial bias. We provide compelling evidence that these principles arise from the interplay of the non-uniform distribution of retinal ganglion cells and a quasi-uniform convergence rate from the retina to the cortex. We show that convolutional neural networks outfitted with a retinal sampling layer, which resamples images according to retinal ganglion cell density, develop these organizational principles. Surprisingly, our results indicate that radial bias is spatial-frequency dependent and manifests only for high spatial frequencies; for low spatial frequencies, the bias shifts toward orthogonal orientations. These findings introduce a novel hypothesis about the origin of radial bias: quasi-uniform convergence limits the range of spatial frequencies (in retinal space) that can be resolved, while retinal sampling determines the spatial frequency content throughout the retina.
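The retinal sampling layer described above resamples images according to ganglion cell density; a common approximation is log-polar sampling, in which ring spacing grows with eccentricity so that sampling density falls off toward the periphery. A minimal nearest-neighbour sketch, with illustrative ring/angle counts that are assumptions rather than the authors' architecture:

```python
import numpy as np

def retinal_sample(img, n_rings=32, n_angles=64, fov=1.0):
    """Log-polar resampling of a square image around its centre.

    Ring radii grow geometrically, so samples are dense near the
    "fovea" and sparse in the periphery, a crude stand-in for retinal
    ganglion cell density. Returns an (n_rings, n_angles) array.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    max_r = fov * min(cy, cx)
    radii = max_r * np.geomspace(0.01, 1.0, n_rings)     # geometric rings
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rr = radii[:, None]                                  # (n_rings, 1)
    ys = np.clip(np.round(cy + rr * np.sin(thetas)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(thetas)).astype(int), 0, w - 1)
    return img[ys, xs]                                   # nearest-neighbour

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
out = retinal_sample(img)
```

In a CNN this resampled map would be fed to the convolutional stack in place of the raw image, so uniform convolutions in the resampled space correspond to eccentricity-dependent receptive fields in image space.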
Affiliation(s)
- Danny da Costa: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands; Maastricht Brain Imaging Centre, Maastricht University, Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands
- Lukas Kornemann: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands; University of Bonn, Regina-Pacis-Weg 3, 53113, Bonn, Germany
- Rainer Goebel: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands; Maastricht Brain Imaging Centre, Maastricht University, Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands
- Mario Senden: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands; Maastricht Brain Imaging Centre, Maastricht University, Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands
3. Tu Y, Li X, Lu ZL, Wang Y. Adaptive smoothing of retinotopic maps based on Teichmüller parametrization. Med Image Anal 2024; 93:103074. PMID: 38160658; DOI: 10.1016/j.media.2023.103074.
Abstract
Retinotopic mapping, the mapping between visual inputs on the retina and neural responses on the cortical surface, is one of the fundamental topics in visual neuroscience. In human studies, retinotopic maps are conventionally constructed and processed by decoding blood-oxygenation-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) responses to designed visual stimuli on the cortical surface. However, these methods frequently generate retinotopic maps that do not preserve topology, contradicting a fundamental property of retinotopic maps observed in neurophysiology. To address this problem, we propose an integrated approach that simultaneously refines the flattening from the 3D cortical surface to the 2D parametric space and adaptively smooths retinotopic perception centers in the visual space to make the retinotopic maps topological. One key element of the approach is the enhanced error-tolerant Teichmüller mapping, which refines the parametrization by minimizing angle distortions and maximizing alignment to noisy landmarks. We validated our overall approach with synthetic and real retinotopic mapping datasets and applied it to compute the cortical magnification factor (CMF). The results showed that the proposed approach outperformed conventional retinotopic mapping methods in predicting BOLD fMRI time series and preserving topology.
Affiliation(s)
- Yanshuai Tu: School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
- Xin Li: School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
- Zhong-Lin Lu: Division of Arts and Sciences, New York University Shanghai, Shanghai, China; Center for Neural Science and Department of Psychology, New York University, New York, NY, USA; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Yalin Wang: School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
4. Veríssimo IS, Nudelman Z, Olivers CNL. Does crowding predict conjunction search? An individual differences approach. Vision Res 2024; 216:108342. PMID: 38198971; DOI: 10.1016/j.visres.2023.108342.
Abstract
Searching for objects in the visual environment is an integral part of human behavior. Most of the information used during such visual search comes from the periphery of our vision, and understanding the basic mechanisms of search therefore requires taking into account the inherent limitations of peripheral vision. Our previous work using an individual differences approach has shown that one of the major factors limiting peripheral vision (crowding) is predictive of single-feature search, as reflected in response time and eye movement measures. Here we extended this work by testing the relationship between crowding and visual search in a conjunction-search paradigm. Given that conjunction search involves more fine-grained discrimination and more serial behavior, we predicted it would be strongly affected by crowding. We tested sixty participants with regard to their sensitivity to both orientation- and color-based crowding (as measured by critical spacing) and their efficiency in searching for a color/orientation conjunction (as indicated by manual response times and eye movements). While the correlations between the different crowding tasks were high, the correlations between the different crowding measures and search performance were relatively modest, and no higher than those previously observed for single-feature search. Instead, observers showed very strong color selectivity during search. The results suggest that conjunction search relies more on top-down guidance (here by color) and is therefore relatively less determined by individual differences in the sensory limitations caused by crowding.
Affiliation(s)
- Inês S Veríssimo: Department of Experimental and Applied Psychology, Cognitive Psychology Section, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands; Institute for Brain and Behavior, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands
- Zachary Nudelman: Department of Experimental and Applied Psychology, Cognitive Psychology Section, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands
- Christian N L Olivers: Department of Experimental and Applied Psychology, Cognitive Psychology Section, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands
5. Kreichman O, Gilaie-Dotan S. Parafoveal vision reveals qualitative differences between fusiform face area and parahippocampal place area. Hum Brain Mapp 2024; 45:e26616. PMID: 38379465; PMCID: PMC10879909; DOI: 10.1002/hbm.26616.
Abstract
The center-periphery visual field axis guides early visual system organization: enhanced resources are devoted to central vision, leading to reduced peripheral performance relative to central vision (the behavioral eccentricity effect) for many visual functions. The center-periphery organization extends to high-order visual cortex, where, for example, the well-studied face-sensitive fusiform face area (FFA) shows sensitivity to central vision and the place-sensitive parahippocampal place area (PPA) shows sensitivity to peripheral vision. As we have recently found that face perception is more sensitive to eccentricity than place perception, here we examined whether these behavioral findings reflect differences in FFA's and PPA's sensitivities to eccentricity. We assumed FFA would show higher sensitivity to eccentricity than PPA, but that both regions' modulation by eccentricity would be invariant to the viewed category. We parametrically investigated (fMRI, n = 32) how FFA's and PPA's activations are modulated by eccentricity (≤8°) and category (upright/inverted faces/houses) while keeping stimulus size constant. As expected, FFA showed an overall higher sensitivity to eccentricity than PPA. However, both regions' activation modulations by eccentricity were dependent on the viewed category. In FFA, a reduction of activation with growing eccentricity ("BOLD eccentricity effect") was found (with different amplitudes) for all categories. In PPA, however, qualitatively different BOLD eccentricity effect modulations were found (e.g., at 8°, a mild BOLD eccentricity effect for houses but a reverse BOLD eccentricity effect for faces and no modulation for inverted faces). Our results emphasize that investigations of peripheral vision are critical to further our understanding of visual processing.
Affiliation(s)
- Olga Kreichman: School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Sharon Gilaie-Dotan: School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel; UCL Institute of Cognitive Neuroscience, London, UK
6. Chen L, Wu B, Yu H, Sperandio I. Network dynamics underlying alterations in apparent object size. Brain Commun 2024; 6:fcae006. PMID: 38250057; PMCID: PMC10799746; DOI: 10.1093/braincomms/fcae006.
Abstract
A target circle surrounded by small circles looks larger than an identical circle surrounded by large circles (the Ebbinghaus illusion). While previous research has shown that both early and high-level visual regions are involved in the generation of the illusion, it remains unclear how these regions work together to modulate the illusion effect. Here, we used functional MRI and dynamic causal modelling to investigate the neural networks underlying the illusion in conditions where the focus of attention was manipulated by having participants direct their attention to, and maintain fixation on, only one of the two illusory configurations at a time. Behavioural findings confirmed the presence of the illusion. Accordingly, functional MRI activity in the extrastriate cortex accounted for the illusory effects: apparently larger circles elicited greater activation than apparently smaller circles. Interestingly, this spread of activity for size overestimation was accompanied by a decrease in the inhibitory self-connection of the extrastriate region and an increase in the feedback connectivity from the precuneus to the extrastriate region. These findings demonstrate that the representation of apparent object size relies on feedback projections from higher- to lower-level visual areas, highlighting the crucial role of top-down signals in conscious visual perception.
Affiliation(s)
- Lihong Chen: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China
- Baoyu Wu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China; Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310058, China
- Haoyang Yu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China
- Irene Sperandio: Department of Psychology and Cognitive Science, University of Trento, Rovereto 38068, Italy
7. Heitmann C, Zhan M, Linke M, Hölig C, Kekunnaya R, van Hoof R, Goebel R, Röder B. Early visual experience refines the retinotopic organization within and across visual cortical regions. Curr Biol 2023; 33:4950-4959.e4. PMID: 37918397; DOI: 10.1016/j.cub.2023.10.010.
Abstract
Early visual areas are retinotopically organized in human and non-human primates. Population receptive field (pRF) size increases with eccentricity and from lower- to higher-level visual areas. Furthermore, the cortical magnification factor (CMF), a measure of how much cortical space is devoted to each degree of visual angle, is typically larger for foveal than for peripheral regions of the visual field. Whether this fine-scale organization within and across visual areas depends on early visual experience has remained unknown. Here, we employed 7T functional magnetic resonance imaging pRF mapping to assess the retinotopic organization of early visual regions (i.e., V1, V2, and V3) in eight sight recovery individuals with a history of congenital blindness until a maximum of 4 years of age. Compared with sighted controls, foveal pRF sizes in these individuals were larger, and pRF sizes did not show the typical increase with eccentricity and down the visual cortical processing stream (V1-V2-V3). Cortical magnification was overall diminished and decreased less from foveal to parafoveal visual field locations. Furthermore, cortical magnification correlated with visual acuity in sight recovery individuals. The results of this study suggest that early visual experience is essential for refining a presumably innate prototypical retinotopic organization in humans within and across visual areas, which seems to be crucial for acquiring full visual capabilities.
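pRF mapping as used above rests on a simple forward model: a voxel's response to each stimulus frame is the overlap between the stimulus aperture and a 2-D Gaussian receptive field with centre (x0, y0) and size sigma. A minimal sketch of that forward model on a synthetic sweeping bar; the grid size and pRF parameters are illustrative assumptions, not values from the study:

```python
import numpy as np

def prf_response(stim, xs, ys, x0, y0, sigma):
    """Response of a Gaussian pRF to a sequence of binary apertures.

    stim: (n_frames, h, w) binary stimulus apertures; xs, ys: (h, w)
    grids of visual-field coordinates in degrees; (x0, y0, sigma):
    pRF centre and size. Returns one response value per frame.
    """
    rf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    rf /= rf.sum()                      # unit-mass receptive field
    return (stim * rf).sum(axis=(1, 2))

# Synthetic demo: a vertical bar sweeping left to right.
grid = np.linspace(-10, 10, 41)         # visual field, degrees
xs, ys = np.meshgrid(grid, grid)
stim = np.stack([(np.abs(xs - cx) < 1.0).astype(float) for cx in grid])
resp = prf_response(stim, xs, ys, x0=2.0, y0=0.0, sigma=1.5)
```

Fitting inverts this model: the (x0, y0, sigma) whose predicted time course (after convolution with a hemodynamic response function) best matches the voxel's BOLD signal are taken as that voxel's pRF parameters, and sigma-versus-eccentricity curves like those compared above follow from the fits.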
Affiliation(s)
- Carolin Heitmann: Biological Psychology and Neuropsychology Lab, Faculty of Psychology and Movement Sciences, Universität Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Minye Zhan: U992 (Cognitive Neuroimaging Unit), NeuroSpin, INSERM-CEA, 91191 Gif sur Yvette, France
- Madita Linke: Biological Psychology and Neuropsychology Lab, Faculty of Psychology and Movement Sciences, Universität Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Cordula Hölig: Biological Psychology and Neuropsychology Lab, Faculty of Psychology and Movement Sciences, Universität Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Ramesh Kekunnaya: U992 (Cognitive Neuroimaging Unit), NeuroSpin, INSERM-CEA, 91191 Gif sur Yvette, France
- Rick van Hoof: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, the Netherlands
- Rainer Goebel: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, the Netherlands; Department of Development and Research, Brain Innovation B.V., Oxfordlaan 55, 6229 EV Maastricht, the Netherlands
- Brigitte Röder: Biological Psychology and Neuropsychology Lab, Faculty of Psychology and Movement Sciences, Universität Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany; Child Sight Institute, Jasti V. Ramanamma Children's Eye Care Center, LV Prasad Eye Institute, Hyderabad, Telangana 500034, India
8. Phangwiwat T, Punchongham P, Wongsawat Y, Chatnuntawech I, Wang S, Chunharas C, Sprague T, Woodman GF, Itthipuripat S. Sustained attention operates via dissociable neural mechanisms across different eccentric locations. Res Sq [Preprint] 2023: rs.3.rs-3562186. PMID: 37986807; PMCID: PMC10659535; DOI: 10.21203/rs.3.rs-3562186/v1.
Abstract
In primates, foveal and peripheral vision have distinct neural architectures and functions. However, it has been debated whether selective attention operates via the same or different neural mechanisms across eccentricities. We tested these alternative accounts by examining the effects of selective attention on the steady-state visually evoked potential (SSVEP) and a fronto-parietal signal (SND), both measured via EEG from human subjects performing a sustained visuospatial attention task. With a negligible level of eye movements, both the SSVEP and the SND exhibited heterogeneous patterns of attentional modulation across eccentricities. Specifically, the attentional modulations of these signals peaked at parafoveal locations and fell off as visual stimuli appeared closer to the fovea or farther toward the periphery. With a relatively higher level of eye movements, however, these heterogeneous patterns of attentional modulation were less robust. These data demonstrate that the top-down influence of covert visuospatial attention on early sensory processing in human cortex depends on eccentricity and on the level of saccadic responses. Taken together, the results suggest that sustained visuospatial attention operates differently across eccentric locations, providing new understanding of how attention augments sensory representations depending on where the attended stimulus appears.
Affiliation(s)
- Tanagrit Phangwiwat: Department of Computer Engineering, King Mongkut's University of Technology Thonburi
- Phond Punchongham: Department of Computer Engineering, King Mongkut's University of Technology Thonburi
- Yodchanan Wongsawat: Department of Biomedical Engineering, Faculty of Engineering, Mahidol University
- Itthi Chatnuntawech: National Nanotechnology Center, National Science and Technology Development Agency
- Sisi Wang: Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam
- Chaipat Chunharas: Chula Neuroscience Center, King Chulalongkorn Memorial Hospital, Thai Red Cross Society
- Thomas Sprague: Psychological and Brain Science, 251, University of California Santa Barbara
- Sirawaj Itthipuripat: Neuroscience Center for Research and Innovation (NX), Learning Institute, King Mongkut's University of Technology Thonburi
9. Klotzsche F, Gaebler M, Villringer A, Sommer W, Nikulin V, Ohl S. Visual short-term memory-related EEG components in a virtual reality setup. Psychophysiology 2023; 60:e14378. PMID: 37393581; DOI: 10.1111/psyp.14378.
Abstract
Virtual reality (VR) offers a powerful tool for investigating cognitive processes, as it allows researchers to gauge behaviors and mental states in complex, yet highly controlled, scenarios. The use of VR head-mounted displays in combination with physiological measures such as EEG presents new challenges and raises the question of whether established findings generalize to a VR setup. Here, we used a VR headset to assess the spatial constraints underlying two well-established EEG correlates of visual short-term memory: the amplitude of the contralateral delay activity (CDA) and the lateralization of induced alpha power during memory retention. We tested observers' visual memory in a change detection task with bilateral stimulus arrays of either two or four items while varying the horizontal eccentricity of the memory arrays (4, 9, or 14 degrees of visual angle). The CDA amplitude differed between high and low memory load at the two smaller eccentricities, but not at the largest eccentricity. Neither memory load nor eccentricity significantly influenced the observed alpha lateralization. We further fitted time-resolved spatial filters to decode memory load from the event-related potential as well as from its time-frequency decomposition. Classification performance during the retention interval was above chance level for both approaches and did not vary significantly across eccentricities. We conclude that commercial VR hardware can be utilized to study the CDA and lateralized alpha power, and we provide caveats for future studies targeting these EEG markers of visual memory in a VR setup.
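The CDA studied above is conventionally computed as the contralateral-minus-ipsilateral voltage difference at posterior electrodes, averaged over a retention-interval window. A minimal sketch on synthetic data; the sampling rate, window, and amplitude are assumptions for illustration, not values from the study:

```python
import numpy as np

def cda(erp_contra, erp_ipsi, fs, window=(0.4, 1.0)):
    """Contralateral delay activity: mean contra-minus-ipsi voltage
    within a retention-interval window (seconds from memory-array onset).

    erp_contra / erp_ipsi: 1-D trial-averaged ERPs from posterior
    electrodes contra- and ipsilateral to the memorized hemifield.
    """
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    diff = erp_contra - erp_ipsi
    return diff[i0:i1].mean()

# Synthetic demo: a sustained -1 µV contralateral negativity from 300 ms on.
fs = 250
t = np.arange(0, 1.2, 1 / fs)
ipsi = np.zeros_like(t)
contra = np.where(t >= 0.3, -1.0, 0.0)
amp = cda(contra, ipsi, fs)
```

Comparing this amplitude between set sizes (two vs. four items) per eccentricity is what yields the load effect described above.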
Affiliation(s)
- Felix Klotzsche: Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Faculty of Philosophy, Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Michael Gaebler: Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Faculty of Philosophy, Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Arno Villringer: Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Faculty of Philosophy, Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Werner Sommer: Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Vadim Nikulin: Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Sven Ohl: Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
10. Yu H, Kwon M. Altered Eye Movements During Reading With Simulated Central and Peripheral Visual Field Defects. Invest Ophthalmol Vis Sci 2023; 64:21. PMID: 37843494; PMCID: PMC10584020; DOI: 10.1167/iovs.64.13.21.
Abstract
Purpose: Although foveal vision provides fine spatial information, parafoveal and peripheral vision are also known to be important for efficient reading. Here we systematically investigate how different types and sizes of visual field defects affect the way visual information is acquired via eye movements during reading.
Methods: Using gaze-contingent displays, simulated scotomas were induced in 24 adults with normal or corrected-to-normal vision during a reading task. The study design included peripheral and central scotomas of varying sizes (aperture or scotoma size of 2°, 4°, 6°, 8°, and 10°) and a no-scotoma condition. Eye movements (e.g., forward/backward saccades, fixations, microsaccades) were plotted as a function of either aperture or scotoma size, and their relationships were characterized by the best-fitting model.
Results: When the aperture size of the peripheral scotoma decreased below 6° (11 visible letters), saccade amplitude and velocity decreased significantly, while fixation duration and the number of fixations increased substantially. The dependency on aperture size was best characterized by an exponential decay or growth function in log-linear coordinates. However, saccade amplitude and velocity, fixation duration, and forward/regressive saccades increased more or less linearly with increasing central scotoma size in log-linear coordinates.
Conclusions: Our results showed differential impacts of central and peripheral vision loss on reading behaviors while lending further support to the importance of foveal and parafoveal vision in reading. These deviated oculomotor behaviors may in part reflect optimal reading strategies that compensate for the loss of visual information.
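Exponential dependencies like those reported in the Results are often estimated by fitting a straight line to log-transformed data. A minimal sketch on synthetic values; the decay constant and amplitudes are illustrative assumptions, not the study's fitted parameters:

```python
import numpy as np

def fit_exponential_decay(x, y):
    """Least-squares fit of y = a * exp(b * x) via a linear fit to log(y).

    Returns (a, b). Assumes y > 0; b < 0 indicates decay.
    """
    b, log_a = np.polyfit(x, np.log(y), 1)   # slope, intercept of log(y) vs x
    return np.exp(log_a), b

# Synthetic demo: a quantity shrinking exponentially with aperture size.
aperture = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # degrees of visual angle
y = 3.0 * np.exp(-0.25 * aperture)
a, b = fit_exponential_decay(aperture, y)
```

On noiseless data the fit recovers a = 3.0 and b = -0.25 exactly; with real measurements a nonlinear fit with an asymptote term is usually preferable.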
Affiliation(s)
- Haojue Yu: Department of Psychology, Northeastern University, Boston, Massachusetts, United States
- MiYoung Kwon: Department of Psychology, Northeastern University, Boston, Massachusetts, United States
11. Rodrigues T, Dib L, Bréthaut É, Matter MM, Matter-Sadzinski L, Matter JM. Increased neuron density in the midbrain of a foveate bird, pigeon, results from profound change in tissue morphogenesis. Dev Biol 2023; 502:77-98. PMID: 37400051; DOI: 10.1016/j.ydbio.2023.06.021.
Abstract
The increase of brain neuron number in relation to brain size is currently considered the major evolutionary path to high cognitive power in amniotes. However, how changes in neuron density contributed to the evolution of the information-processing capacity of the brain remains unanswered. High neuron densities are seen as the main reason why the fovea, located at the visual center of the retina, is responsible for sharp vision in birds and primates. The emergence of foveal vision is considered a breakthrough innovation in visual system evolution. We found that neuron densities in the largest visual center of the midbrain, the optic tectum, are two to four times higher in modern birds with one or two foveae than in birds lacking this specialization. Interspecies comparisons enabled us to identify elements of a hitherto unknown developmental process by which foveate birds increase neuron density in the upper layers of their optic tectum. The late progenitor cells that generate these neurons proliferate in a ventricular zone that can expand only radially. In this particular context, the number of cells in ontogenetic columns increases, thereby setting the conditions for higher cell densities in the upper layers once neurons have migrated.
Affiliation(s)
- Tania Rodrigues
- Department of Molecular Biology & Department of Biochemistry, Sciences III, University of Geneva, 30 quai Ernest-Ansermet, 1211 Geneva 4, Switzerland
- Linda Dib
- Swiss Institute of Bioinformatics, Le Génopode, 1015 Lausanne, Switzerland
- Michel M Matter
- HEPIA, HES-SO, University of Applied Sciences and Arts Western Switzerland, 1202 Geneva, Switzerland
- Lidia Matter-Sadzinski
- Department of Molecular Biology & Department of Biochemistry, Sciences III, University of Geneva, 30 quai Ernest-Ansermet, 1211 Geneva 4, Switzerland
- Jean-Marc Matter
- Department of Molecular Biology & Department of Biochemistry, Sciences III, University of Geneva, 30 quai Ernest-Ansermet, 1211 Geneva 4, Switzerland.
12
Kruper J, Benson NC, Caffarra S, Owen J, Wu Y, Lee AY, Lee CS, Yeatman JD, Rokem A. Optic radiations representing different eccentricities age differently. Hum Brain Mapp 2023; 44:3123-3135. [PMID: 36896869 PMCID: PMC10171550 DOI: 10.1002/hbm.26267]
Abstract
The neural pathways that carry information from the foveal, macular, and peripheral visual fields have distinct biological properties. The optic radiations (OR) carry foveal and peripheral information from the thalamus to the primary visual cortex (V1) through adjacent but separate pathways in the white matter. Here, we perform white matter tractometry using pyAFQ on a large sample of diffusion MRI (dMRI) data from subjects with healthy vision in the U.K. Biobank dataset (UKBB; N = 5382; age 45-81). We use pyAFQ to characterize white matter tissue properties in parts of the OR that transmit information about the foveal, macular, and peripheral visual fields, and to characterize the changes in these tissue properties with age. We find that (1) independent of age there is higher fractional anisotropy, lower mean diffusivity, and higher mean kurtosis in the foveal and macular OR than in peripheral OR, consistent with denser, more organized nerve fiber populations in foveal/parafoveal pathways, and (2) age is associated with increased diffusivity and decreased anisotropy and kurtosis, consistent with decreased density and tissue organization with aging. However, anisotropy in foveal OR decreases faster with age than in peripheral OR, while diffusivity increases faster in peripheral OR, suggesting foveal/peri-foveal OR and peripheral OR differ in how they age.
Affiliation(s)
- John Kruper
- Department of Psychology, University of Washington, Seattle, Washington, USA
- eScience Institute, University of Washington, Seattle, Washington, USA
- Noah C. Benson
- eScience Institute, University of Washington, Seattle, Washington, USA
- Sendy Caffarra
- Graduate School of Education and Division of Developmental-Behavioral Pediatrics, School of Medicine, Stanford University, Stanford, California, USA
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Modena, Italy
- Julia Owen
- Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, Washington, USA
- Yue Wu
- Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, Washington, USA
- Aaron Y. Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, Washington, USA
- Cecilia S. Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, Washington, USA
- Jason D. Yeatman
- Graduate School of Education and Division of Developmental-Behavioral Pediatrics, School of Medicine, Stanford University, Stanford, California, USA
- Ariel Rokem
- Department of Psychology, University of Washington, Seattle, Washington, USA
- eScience Institute, University of Washington, Seattle, Washington, USA
13
Himmelberg MM, Winawer J, Carrasco M. Polar angle asymmetries in visual perception and neural architecture. Trends Neurosci 2023; 46:445-458. [PMID: 37031051 PMCID: PMC10192146 DOI: 10.1016/j.tins.2023.03.006]
Abstract
Human visual performance changes with visual field location. It is best at the center of gaze and declines with eccentricity, and also varies markedly with polar angle. These perceptual polar angle asymmetries are linked to asymmetries in the organization of the visual system. We review and integrate research quantifying how performance changes with visual field location and how this relates to neural organization at multiple stages of the visual system. We first briefly review how performance varies with eccentricity and the neural foundations of this effect. We then focus on perceptual polar angle asymmetries and their neural foundations. Characterizing perceptual and neural variations across and around the visual field contributes to our understanding of how the brain translates visual signals into neural representations which form the basis of visual perception.
Affiliation(s)
- Marc M Himmelberg
- Department of Psychology, New York University, New York, NY 10003, USA; Center for Neural Science, New York University, New York, NY 10003, USA.
- Jonathan Winawer
- Department of Psychology, New York University, New York, NY 10003, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Marisa Carrasco
- Department of Psychology, New York University, New York, NY 10003, USA; Center for Neural Science, New York University, New York, NY 10003, USA.
14
Adhanom IB, MacNeilage P, Folmer E. Eye Tracking in Virtual Reality: a Broad Review of Applications and Challenges. Virtual Reality 2023; 27:1481-1505. [PMID: 37621305 PMCID: PMC10449001 DOI: 10.1007/s10055-022-00738-z]
Abstract
Eye tracking is becoming increasingly available in head-mounted virtual reality displays, and several headsets with integrated eye trackers are already commercially available. The applications of eye tracking in virtual reality are highly diversified and span multiple disciplines. As a result, the number of peer-reviewed publications studying eye tracking applications has surged in recent years. We performed a broad review, comprehensively searching academic literature databases, to assess the extent of published research on applications of eye tracking in virtual reality and to highlight challenges, limitations, and areas for future research.
Affiliation(s)
- Paul MacNeilage
- University of Nevada Reno, 1664 N Virginia St, Reno, NV 89557, USA
- Eelke Folmer
- University of Nevada Reno, 1664 N Virginia St, Reno, NV 89557, USA
15
Sawetsuttipan P, Phunchongharn P, Ounjai K, Salazar A, Pongsuwan S, Intrachooto S, Serences JT, Itthipuripat S. Perceptual Difficulty Regulates Attentional Gain Modulations in Human Visual Cortex. J Neurosci 2023; 43:3312-3330. [PMID: 36963848 PMCID: PMC10162463 DOI: 10.1523/jneurosci.0519-22.2023]
Abstract
Perceptual difficulty is sometimes used to manipulate selective attention. However, these two factors are logically distinct. Selective attention is defined by the priority given to specific stimuli based on their behavioral relevance, whereas perceptual difficulty is often determined by the perceptual demands required to discriminate the relevant stimuli. That said, both perceptual difficulty and selective attention are thought to modulate the gain of neural responses in early sensory areas. Previous studies found that selectively attending to a stimulus or increasing perceptual difficulty enhanced the gain of neurons in visual cortex. However, other studies suggest that perceptual difficulty can have either a null or even a reversed effect on gain modulations in visual cortex. According to the Yerkes-Dodson law, this discrepancy may arise because an interaction between perceptual difficulty and attentional gain modulations yields a nonlinear inverted-U function. Here, we used EEG to measure gain modulations in the visual cortex of male and female human participants performing an attention-cueing task in which we systematically manipulated perceptual difficulty across blocks of trials. The behavioral and neural data implicate a nonlinear inverted-U relationship between selective attention and perceptual difficulty: a focused-attention cue led to larger response gain in both neural and behavioral data at intermediate difficulty levels compared with when the task was more or less difficult. Moreover, difficulty-related changes in attentional gain positively correlated with those predicted by quantitative modeling of the behavioral data. These findings suggest that perceptual difficulty mediates attention-related changes in perceptual performance via selective neural modulations in human visual cortex.
SIGNIFICANCE STATEMENT: Both perceptual difficulty and selective attention are thought to influence perceptual performance by modulating response gain in early sensory areas. That said, less is known about how selective attention interacts with perceptual difficulty. Here, we measured neural gain modulations in the visual cortex of human participants performing an attention-cueing task in which perceptual difficulty was systematically manipulated. Consistent with the Yerkes-Dodson law, our behavioral and neural data implicate a nonlinear inverted-U relationship between selective attention and perceptual difficulty. These results suggest that perceptual difficulty mediates attention-related changes in perceptual performance via selective neural modulations in visual cortex, extending our understanding of how attention operates under different levels of perceptual demand.
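The inverted-U relationship described in this abstract can be made concrete with a toy gain function; this is an illustrative sketch of a Yerkes-Dodson-style curve only, not the authors' quantitative model, and all parameter values and names below are arbitrary placeholders.

```python
def inverted_u_gain(difficulty: float, peak: float = 0.5,
                    max_gain: float = 1.0, curvature: float = 4.0) -> float:
    """Toy inverted-U attentional gain: gain is maximal at an
    intermediate difficulty and falls off for easier or harder tasks.
    All parameters are illustrative placeholders."""
    return max(0.0, max_gain - curvature * (difficulty - peak) ** 2)

# Gain rises toward intermediate difficulty, then falls again:
gains = [inverted_u_gain(d) for d in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

Fitting such a curve to attention-cue effects across difficulty blocks is one simple way to test for the nonlinearity the study reports.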
Affiliation(s)
- Prapasiri Sawetsuttipan
- Neuroscience Center for Research and Innovation, Learning Institute, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
- Computer Engineering Department, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
- Big Data Experience Center, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
- Phond Phunchongharn
- Computer Engineering Department, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
- Big Data Experience Center, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
- Kajornvut Ounjai
- Neuroscience Center for Research and Innovation, Learning Institute, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
- Biological Engineering Program, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
- Annalisa Salazar
- Department of Psychology, University of California, San Diego, La Jolla, California 92093-1090
- Sarigga Pongsuwan
- Happiness Science Hub, Research & Innovation for Sustainability Center (RISC), Bangkok 10260, Thailand
- Singh Intrachooto
- Happiness Science Hub, Research & Innovation for Sustainability Center (RISC), Bangkok 10260, Thailand
- John T Serences
- Department of Psychology, University of California, San Diego, La Jolla, California 92093-1090
- Neurosciences Graduate Program and Kavli Foundation for the Brain and Mind, University of California, San Diego, La Jolla, California 92093-1090
- Sirawaj Itthipuripat
- Neuroscience Center for Research and Innovation, Learning Institute, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
- Big Data Experience Center, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
16
Purohit P, Roy PK. Interaction between spatial perception and temporal perception enables preservation of cause-effect relationship: Visual psychophysics and neuronal dynamics. Math Biosci Eng 2023; 20:9101-9134. [PMID: 37161236 DOI: 10.3934/mbe.2023400]
Abstract
INTRODUCTION: Visual perception of moving objects is integral to our day-to-day life, integrating visual spatial and temporal perception. Most research has focused on finding the brain regions activated during motion perception. However, an empirically validated general mathematical model is required to understand how motion perception is modulated. Here, we develop a mathematical formulation of how a change in speed modulates the perception of a moving object, under the formulation of the invariance of causality. METHODS: We formulated the perception of a moving object as a coordinate transformation from retinotopic space onto perceptual space and derived a quantitative relationship between spatiotemporal coordinates. To validate our model, we analyzed two experiments: (i) the perceived length of a moving arc, and (ii) the perceived time while observing moving stimuli. We performed a magnetic resonance imaging (MRI) tractography investigation to identify the anatomical correlate of the modulation of the perception of moving objects. RESULTS: Our theoretical model shows that the interaction between visual-spatial and temporal perception during the perception of a moving object is described by coupled linear equations, and experimental observations validate our model. We observed that cerebral area V5 may be an anatomical correlate of this interaction. The physiological basis of the interaction is shown by a Lotka-Volterra system delineating the interplay between the neurotransmitters acetylcholine and dopamine, whose concentrations vary periodically with an orthogonal phase shift between them, at the axodendritic synapses of complex cells in area V5. CONCLUSION: Under the invariance of causality in the representation of events in retinotopic space and perceptual space, speed modulates the perception of a moving object. This modulation may be due to variations in the tuning properties of complex cells in area V5 arising from the dynamic interaction between acetylcholine and dopamine. To our knowledge, our analysis is the first significant study to establish a mathematical linkage between motion perception and causality invariance.
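The Lotka-Volterra interplay invoked in this abstract can be illustrated with a minimal forward-Euler simulation; the rate constants and initial conditions below are arbitrary placeholders, not values from the study.

```python
def lotka_volterra(x0, y0, a=1.0, b=1.0, c=1.0, d=1.0, dt=1e-3, steps=20000):
    """Euler-integrate the classic two-species system:
       dx/dt = a*x - b*x*y   (e.g. one neurotransmitter level)
       dy/dt = -c*y + d*x*y  (e.g. the other)
    Returns the two trajectories as lists."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        x, y = x + dt * (a * x - b * x * y), y + dt * (-c * y + d * x * y)
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = lotka_volterra(2.0, 1.0)
```

With all rates set to 1, the equilibrium sits at (1, 1) and the two trajectories oscillate around it with a phase offset between them, qualitatively matching the periodic, phase-shifted concentrations described above.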
Affiliation(s)
- Pratik Purohit
- School of Biomedical Engineering, Indian Institute of Technology (BHU), Varanasi 221005, India
- Prasun K Roy
- School of Biomedical Engineering, Indian Institute of Technology (BHU), Varanasi 221005, India
- Department of Life Sciences, Shiv Nadar University (SNU), Delhi NCR, Dadri 201314, India
17
Ceple I, Skilters J, Lyakhovetskii V, Jurcinska I, Krumina G. Figure-Ground Segmentation and Biological Motion Perception in Peripheral Visual Field. Brain Sci 2023; 13:380. [PMID: 36979190 PMCID: PMC10046209 DOI: 10.3390/brainsci13030380]
Abstract
Biological motion perception is a specific type of perceptual organization in which a clear image of a moving human body is perceptually generated from a small set of light dots representing the major joint movements. While the processes of biological motion perception have been studied extensively for almost a century, it is still debated whether biological motion task performance can be equally precise across the entire visual field or whether the central visual field is specialized for biological motion perception. The current study explores the processes of biological motion perception and figure-ground segmentation in the central and peripheral visual field, expanding the understanding of perceptual organization across different eccentricities. The method involved three visual grouping tasks: (1) a static visual grouping task, (2) a dynamic visual grouping task, and (3) a biological motion detection task. The stimuli in (1) and (2) were generated from 12-13 dots grouped by proximity and common fate, and in (3) from light dots representing human motion. All stimuli were embedded in static or dynamic visual noise, and we determined the threshold number of noise dots at which the elements could still be grouped by proximity and/or common fate. The results demonstrate that biological motion can be differentiated from a scrambled set of moving dots in more intense visual noise than the static and dynamic visual grouping tasks tolerate. Furthermore, in all three visual tasks (static and dynamic grouping, and biological motion detection), performance was significantly worse in the periphery than in the central visual field, and object magnification could not compensate for the reduced performance in any of the three grouping tasks. The preliminary results from nine participants indicate that (a) human motion perception involves specific perceptual processes providing high-accuracy perception of the human body, and (b) figure-ground segmentation is governed by bottom-up processes, with the best performance achieved only when the object is presented in the central visual field.
Affiliation(s)
- Ilze Ceple
- Department of Optometry and Vision Science, University of Latvia, LV-1586 Rīga, Latvia
- Jurgis Skilters
- Laboratory for Perceptual and Cognitive Systems, Faculty of Computing, University of Latvia, LV-1586 Rīga, Latvia
- Inga Jurcinska
- Department of Optometry and Vision Science, University of Latvia, LV-1586 Rīga, Latvia
- Gunta Krumina
- Department of Optometry and Vision Science, University of Latvia, LV-1586 Rīga, Latvia
18
Spencer M, Kameneva T, Grayden DB, Burkitt AN, Meffin H. Quantifying visual acuity for pre-clinical testing of visual prostheses. J Neural Eng 2023; 20. [PMID: 36270430 DOI: 10.1088/1741-2552/ac9c95]
Abstract
Objective. Visual prostheses currently restore only limited vision. More research and pre-clinical work are required to improve the devices and stimulation strategies used to induce neural activity that results in visual perception. Evaluation of candidate strategies and devices requires an objective way to convert measured and modelled patterns of neural activity into a quantitative measure of visual acuity. Approach. This study presents an approach that compares evoked patterns of neural activation with target and reference patterns. A d-prime measure of discriminability determines whether the evoked neural activation pattern is sufficient to discriminate between the target and reference patterns, and thus provides a quantified level of visual perception on the clinical Snellen and MAR scales. The magnitude of the resulting value was demonstrated using scaled standardized 'C' and 'E' optotypes. Main results. The approach was used to assess the visual acuity provided by two alternative stimulation strategies applied to simulated retinal implants with different electrode pitch configurations and differently sized spreads of neural activity. When there is substantial overlap in the neural activity generated by different electrodes, an estimate of acuity based only on electrode pitch is incorrect; our proposed method gives an accurate result in both circumstances. Significance. Quantification of visual acuity using this approach in pre-clinical development will allow more rapid and accurate prototyping of improved devices and neural stimulation strategies.
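The d-prime measure named in this abstract is standard signal detection theory. As a rough sketch only (not the authors' implementation, which operates on modelled neural activation patterns), d' can be computed from hit and false-alarm rates under equal-variance Gaussian assumptions; the function name is hypothetical.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Equal-variance Gaussian discriminability: d' = z(H) - z(F),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# 84% hits against 16% false alarms when telling target from
# reference patterns corresponds to a d' of roughly 2.
discriminability = d_prime(0.84, 0.16)
```

In this framing, the smallest optotype size at which discrimination still reaches a criterion d' yields the acuity estimate on a clinical scale.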
Affiliation(s)
- Martin Spencer
- Department of Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia; Graeme Clark Institute of Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia
- Tatiana Kameneva
- Department of Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia; School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, Victoria, Australia
- David B Grayden
- Department of Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia; Graeme Clark Institute of Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia
- Anthony N Burkitt
- Department of Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia; Graeme Clark Institute of Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia
- Hamish Meffin
- Department of Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia; Graeme Clark Institute of Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia; National Vision Research Institute, Australian College of Optometry, Carlton, Victoria, Australia
19
Benson NC, Yoon JMD, Forenzo D, Engel SA, Kay KN, Winawer J. Variability of the Surface Area of the V1, V2, and V3 Maps in a Large Sample of Human Observers. J Neurosci 2022; 42:8629-8646. [PMID: 36180226 PMCID: PMC9671582 DOI: 10.1523/jneurosci.0690-21.2022]
Abstract
How variable is the functionally defined structure of early visual areas in human cortex, and how much variability is shared between twins? Here we quantify individual differences in the best understood functionally defined regions of cortex: V1, V2, and V3. The Human Connectome Project 7T Retinotopy Dataset includes retinotopic measurements from 181 subjects (109 female, 72 male), including many twins. We trained four "anatomists" to manually define V1-V3 using retinotopic features. These definitions were more accurate than automated anatomical templates and showed that surface areas for these maps varied more than threefold across individuals. This threefold variation was little changed when normalizing visual area size by the surface area of the entire cerebral cortex. In addition to varying in size, visual areas vary in how they sample the visual field. Specifically, the cortical magnification function differed substantially among individuals, with the relative amount of cortex devoted to central vision varying by more than a factor of 2. To complement the variability analysis, we examined the similarity of visual area size and structure across twins. Whereas the twin sample sizes are too small to make precise heritability estimates (50 monozygotic pairs, 34 dizygotic pairs), they nonetheless reveal high correlations, consistent with strong effects of the combination of shared genes and environment on visual area size. Collectively, these results provide the most comprehensive account of individual variability in visual area structure to date, and provide a robust population benchmark against which new individuals and developmental and clinical populations can be compared.
SIGNIFICANCE STATEMENT: Areas V1, V2, and V3 are among the best studied functionally defined regions in human cortex. Using the largest retinotopy dataset to date, we characterized the variability of these regions across individuals and the similarity between twin pairs. We find that the size of visual areas varies dramatically (up to 3.5×) across healthy young adults, far more than the variability of cerebral cortex size as a whole. Much of this variability appears to arise from inherited factors, as we find very high correlations in visual area size between monozygotic twin pairs, and lower but still substantial correlations between dizygotic twin pairs. These results provide the most comprehensive assessment to date of how functionally defined visual cortex varies across the population.
Affiliation(s)
- Noah C Benson
- eScience Institute, University of Washington, Seattle, Washington 98195
- Jennifer M D Yoon
- Department of Psychology, New York University, New York, New York 10003
- Center for Neural Sciences, New York University, New York, New York 10003
- Dylan Forenzo
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Stephen A Engel
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
- Kendrick N Kay
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota 55455
- Jonathan Winawer
- Department of Psychology, New York University, New York, New York 10003
- Center for Neural Sciences, New York University, New York, New York 10003
20
Srikantharajah J, Ellard C. How central and peripheral vision influence focal and ambient processing during scene viewing. J Vis 2022; 22:4. [PMID: 36322076 PMCID: PMC9639699 DOI: 10.1167/jov.22.12.4]
Abstract
Central and peripheral vision carry out different functions during scene processing. The ambient mode of visual processing is more likely to involve peripheral vision, whereas the focal mode is more likely to involve central vision. Whereas the ambient mode is responsible for navigating space and comprehending scene layout, the focal mode gathers detailed information as central vision is oriented to salient areas of the visual field. Previous work suggests that during the time course of scene viewing there is a transition from ambient processing during the first few seconds to focal processing during later time intervals, characterized by longer fixations and shorter saccades. In this study, we identify the influence of central and peripheral vision on changes in eye movements and on the transition from ambient to focal processing during the time course of scene processing. Using a gaze-contingent protocol, we restricted the visual field to central or peripheral vision while participants freely viewed scenes for 20 seconds. Results indicated that fixation durations are shorter when vision is restricted to central vision than under normal vision. During late visual processing, fixations in peripheral vision were longer than those in central vision. We show that a transition from more ambient to more focal processing during scene viewing occurs even when vision is restricted to central vision or peripheral vision alone.
Affiliation(s)
| | - Colin Ellard
- Department of Psychology, University of Waterloo, Waterloo, Canada,
21
Tagoh S, Hamm LM, Schwarzkopf DS, Dakin SC. Motion adaptation improves acuity (but perceived size doesn't matter). J Vis 2022; 22:2. [PMID: 36194407 PMCID: PMC9547365 DOI: 10.1167/jov.22.11.2]
Abstract
Recognition acuity, the minimum size of a high-contrast object that allows us to recognize it, is limited by optical and neural elements of the eye and by processing within the visual cortex. The perceived size of objects can be changed by motion adaptation: viewing receding or looming motion makes subsequently viewed stimuli appear to grow or shrink, respectively. It has been reported that the resulting changes in perceived size impact recognition acuity. We set out to determine whether such acuity changes are reliable and what drives this phenomenon. We measured the effect of adaptation to receding and looming motion on acuity for crowded tumbling-T stimuli. We quantified the role of crowding, individuals' susceptibility to motion adaptation, and potentially confounding effects of pupil size and eye movements. Adaptation to receding motion made targets appear larger and improved acuity (-0.037 logMAR). Although adaptation to looming motion made targets appear smaller, it induced not the expected decrease in acuity but a modest acuity improvement (-0.018 logMAR). Further, each observer's magnitude of acuity change was not correlated with their individual perceived-size change following adaptation. Finally, we found no evidence that adaptation-induced acuity gains were related to crowding, fixation stability, or pupil size. Adaptation to motion modestly enhances visual acuity, but unintuitively, this effect is dissociated from perceived size. Having ruled out fixation and pupillary behavior, we suggest that motion adaptation may improve acuity via incidental effects on sensitivity, akin to those arising from blur adaptation, which shift sensitivity toward channels tuned to higher spatial frequencies.
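The acuity changes in this abstract are reported in logMAR units. A small helper makes the scale concrete; this is a generic illustration of the standard logMAR definition (log10 of the minimum angle of resolution in arcminutes), not code from the study, and the function names are hypothetical.

```python
import math

def snellen_to_logmar(test_distance: float, letter_distance: float) -> float:
    """Standard definition: logMAR = log10(MAR in arcminutes).
    Snellen 6/6 (20/20) is MAR = 1 arcmin, i.e. logMAR 0;
    Snellen 6/12 is MAR = 2 arcmin, i.e. logMAR of about 0.30."""
    return math.log10(letter_distance / test_distance)

def apply_adaptation(logmar: float, change: float) -> float:
    """Shift a baseline acuity by an adaptation effect; lower logMAR
    means better acuity, so a negative change is an improvement."""
    return logmar + change

baseline = snellen_to_logmar(6, 6)          # 0.0 logMAR
after = apply_adaptation(baseline, -0.037)  # -0.037 logMAR
```

On a standard ETDRS chart (0.02 logMAR per letter), the reported -0.037 logMAR effect corresponds to just under two letters of improvement.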
Affiliation(s)
- Selassie Tagoh
- School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand
- Lisa M Hamm
- School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand
- Dietrich S Schwarzkopf
- School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand; Department of Experimental Psychology, University College London, London, UK
- Steven C Dakin
- School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand; UCL Institute of Ophthalmology, University College London, London, UK
22
Le Bec B, Troncoso XG, Desbois C, Passarelli Y, Baudot P, Monier C, Pananceau M, Frégnac Y. Horizontal connectivity in V1: Prediction of coherence in contour and motion integration. PLoS One 2022; 17:e0268351. [PMID: 35802625 PMCID: PMC9269411 DOI: 10.1371/journal.pone.0268351]
Abstract
This study demonstrates the functional importance of the surround context relayed laterally in V1 by horizontal connectivity in controlling the latency and the gain of the cortical response to the feedforward visual drive. We report four main findings: (1) a centripetal apparent-motion sequence shortens the spiking latency of V1 cells when the orientation of the local inducer and the global motion axis are both co-aligned with the receptive field's orientation preference; (2) this contextual effect grows with visual flow speed, peaking at 150-250°/s, where it matches the propagation speed of horizontal connectivity (0.15-0.25 mm/ms); (3) for this speed range, the axial sensitivity of V1 cells is tilted by 90° to become co-aligned with the orientation preference axis; and (4) the strength of modulation by the surround context correlates with the spatiotemporal coherence of the apparent motion flow. Our results suggest an internally generated binding process, linking local (orientation/position) and global (motion/direction) features as early as V1. This long-range diffusion process constitutes a plausible substrate in V1 of the human psychophysical bias in speed estimation for collinear motion. Since it is demonstrated in the anesthetized cat, this novel form of contextual control of cortical gain and phase is a built-in property of V1 whose expression does not require behavioral attention or top-down control from higher cortical areas. We propose that horizontal connectivity participates in the propagation of an internal "prediction" wave, shaped by visual experience, which links contour co-alignment and global axial motion at an apparent speed in the range of saccade-like eye movements.
Affiliation(s)
- Benoit Le Bec
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Xoana G. Troncoso
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Christophe Desbois
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Ecole Nationale Vétérinaire d’Alfort, Maisons-Alfort, France
- Yannick Passarelli
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Pierre Baudot
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Cyril Monier
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Marc Pananceau
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Yves Frégnac
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France

23
Qianchen L, Gallagher RM, Tsuchiya N. How much can we differentiate at a brief glance: revealing the truer limit in conscious contents through the massive report paradigm (MRP). R Soc Open Sci 2022; 9:210394. [PMID: 35619998 PMCID: PMC9128849 DOI: 10.1098/rsos.210394]
Abstract
Upon a brief glance, how well can we differentiate what we see from what we do not? Previous studies answered this question with 'poorly'. This is in stark contrast with our everyday experience. Here, we consider the possibility that previous restrictions in stimulus variability and response alternatives limited what participants could express of what they consciously experienced. We introduce a novel massive report paradigm that probes the ability to differentiate what we see from what we do not. In each trial, participants viewed a natural scene image and judged whether a small image patch was a part of the original image. To examine the limit of discriminability, we also included subtler changes in the images in the form of modified objects. Neither the images nor the patches were repeated for any participant. Our results show that participants were highly accurate (accuracy greater than 80%) in distinguishing patches taken from the viewed images from patches that were not present. Additionally, the differentiation between original and modified objects was influenced by object size and/or the congruence between objects and the scene gist. Our massive report paradigm opens the door to quantitatively measuring the limits of the immense informativeness of a moment of consciousness.
Affiliation(s)
- Liang Qianchen
- School of Psychological Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, Victoria, Australia
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, Victoria, Australia
- Regan M. Gallagher
- School of Psychological Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, Victoria, Australia
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, Victoria, Australia
- Naotsugu Tsuchiya
- School of Psychological Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, Victoria, Australia
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, Victoria, Australia
- Center for Information and Neural Networks (CiNet), Osaka, Japan
- Advanced Telecommunications Research Computational Neuroscience Laboratories, Kyoto, Japan

24
Strasburger H. On the cortical mapping function - Visual space, cortical space, and crowding. Vision Res 2022; 194:107972. [PMID: 35182892 DOI: 10.1016/j.visres.2021.107972]
Abstract
The retino-cortical visual pathway is retinotopically organized: Neighbourhood relationships on the retina are preserved in the mapping. Size relationships in that mapping are also highly regular: The size of a patch in the visual field that maps onto a cortical patch of fixed size follows, along any radius and over a wide range, simply a linear function with retinal eccentricity. As a consequence, the mapping of retinal to cortical locations follows a logarithmic function along that radius. While this has already been shown by Fischer (1973, Vision Research, 13, 2113-2120), the link between the linear function - which describes the local behaviour by the cortical magnification factor M - and the logarithmic location function for the global behaviour, has never been made explicit. The present paper provides such a link as a set of ready-to-use equations using Levi and Klein's E2 nomenclature, and examples for their validity and applicability in the mapping literature are discussed. The equations allow estimating M in the retinotopic centre; values thus derived from the literature show enormous, hitherto unnoticed, variability. A new structural parameter, d2, is proposed to characterize the cortical map, as a counterpart to E2; it shows much more stability. One pitfall is discussed and spelt out, namely the common myth that a pure logarithmic function, without constant term, will give an adequate map. The correct equations are finally extended to describe the cortical map of Bouma's law on visual crowding. The result contradicts recent suggestions that critical crowding distance corresponds to constant cortical distance.
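The link the abstract describes, between the local linear law for M and the global logarithmic location function, can be sketched numerically. The inverse-linear form of M and its integral below use the E2 notation discussed in the paper, but the parameter values (M0, E2) are illustrative placeholders, not estimates from the literature survey:

```python
import math

def magnification(ecc, m0=17.3, e2=0.75):
    """Inverse-linear cortical magnification M(E) in mm/deg:
    M falls to half its foveal value m0 at eccentricity e2 (deg).
    m0 and e2 are illustrative placeholder values."""
    return m0 / (1.0 + ecc / e2)

def cortical_distance(ecc, m0=17.3, e2=0.75):
    """Cortical distance (mm) from the retinotopic centre: the integral
    of M over eccentricity, i.e. the logarithmic location function.
    Note the constant term implied by the +1 inside the log, which is
    exactly the term the abstract warns against dropping."""
    return m0 * e2 * math.log(1.0 + ecc / e2)

# Equal steps in eccentricity map onto shrinking steps of cortex:
for e in (1, 2, 4, 8, 16):
    print(f"E = {e:2d} deg -> d = {cortical_distance(e):6.2f} mm")
```

With the constant term included, d(0) = 0 as it must be; a pure log without it would diverge at the fovea, which is the pitfall the paper spells out.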
Affiliation(s)
- Hans Strasburger
- Ludwig-Maximilians-Universität München, Inst. f. Med. Psychologie, Georg-August-Universität Göttingen, Abt. Med. Psychologie & Med. Soziologie, Germany

25
Zhang J, Jiang Y, Song Y, Zhang P, He S. Spatial tuning of face part representations within face-selective areas revealed by high-field fMRI. eLife 2021; 10:e70925. [PMID: 34964711 PMCID: PMC8716104 DOI: 10.7554/elife.70925]
Abstract
Regions sensitive to specific object categories, as well as organized spatial patterns sensitive to different features, have been found across the whole ventral temporal cortex (VTC). However, it remains unclear how, within each category-selective region, specific feature representations are organized to support object identification. Are object features, such as object parts, represented with fine-scale spatial tuning within category-specific regions? Here, we used high-field 7T fMRI to examine the spatial tuning to different face parts within each face-selective region. Our results show consistent spatial tuning to face parts across individuals: within the right posterior fusiform face area (pFFA) and the right occipital face area (OFA), the posterior portion of each region was biased toward eyes, while the anterior portion was biased toward mouth and chin stimuli. Our results demonstrate that within the occipital and fusiform face-processing regions there exists systematic spatial tuning to different face parts, which supports further computations that combine them.
Affiliation(s)
- Jiedong Zhang
- Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Yong Jiang
- Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Yunjie Song
- Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Peng Zhang
- Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Sheng He
- Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Minnesota, Minneapolis, United States

26
Visual field differences in temporal synchrony processing for audio-visual stimuli. PLoS One 2021; 16:e0261129. [PMID: 34914735 PMCID: PMC8675747 DOI: 10.1371/journal.pone.0261129]
Abstract
Audio-visual integration relies on temporal synchrony between visual and auditory inputs. However, visual and auditory stimuli differ in their traveling and neural transmission speeds; audio-visual synchrony perception therefore operates flexibly. The processing speed of visual stimuli affects the perception of audio-visual synchrony. The present study examined how the visual field in which visual stimuli are presented affects the processing of audio-visual temporal synchrony. The point of subjective simultaneity, the temporal binding window, and the rapid recalibration effect were measured using temporal order judgment, simultaneity judgment, and stream/bounce perception, because different mechanisms of temporal processing have been suggested for these three paradigms. The results indicate that, in the temporal order judgment task, auditory stimuli had to be presented earlier relative to visual stimuli in the central visual field than in the peripheral visual field condition in order for subjective simultaneity to be perceived. Meanwhile, the subjective simultaneity bandwidth was broader in the central visual field than in the peripheral visual field during the simultaneity judgment task. In the stream/bounce perception task, neither the point of subjective simultaneity nor the temporal binding window differed between the two visual fields. Moreover, rapid recalibration occurred in both visual fields during the simultaneity judgment task. However, during the temporal order judgment task and stream/bounce perception, rapid recalibration occurred only in the central visual field. These results suggest that differences in visual processing speed across the visual field modulate the temporal processing of audio-visual stimuli, and that the three tasks each have distinct functional characteristics for audio-visual synchrony perception. Future studies should examine whether compensation for differences in the temporal resolution of the visual field in later cortical visual pathways contributes to these visual field differences in audio-visual temporal synchrony.
27
Lukanov H, König P, Pipa G. Biologically Inspired Deep Learning Model for Efficient Foveal-Peripheral Vision. Front Comput Neurosci 2021; 15:746204. [PMID: 34880741 PMCID: PMC8645638 DOI: 10.3389/fncom.2021.746204]
Abstract
While abundant in biology, foveated vision is nearly absent from computational models and especially from deep learning architectures. Despite considerable hardware improvements, training deep neural networks still presents a challenge and constrains model complexity. Here we propose an end-to-end neural model for foveal-peripheral vision, inspired by retino-cortical mapping in primates and humans. Our model employs an efficient sampling technique for compressing the visual signal, such that a small portion of the scene is perceived in high resolution while a large field of view is maintained in low resolution. An attention mechanism for performing "eye movements" assists the agent in collecting detailed information incrementally from the observed scene. Our model achieves results comparable to a similar neural architecture trained on full-resolution data for image classification, and outperforms it on video classification tasks. At the same time, because of its smaller input, it reduces computational effort tenfold and uses several times less memory. Moreover, we present an easy-to-implement bottom-up and top-down attention mechanism which relies on task-relevant features and is therefore a convenient byproduct of the main architecture. Apart from its computational efficiency, the presented work provides a means of exploring active vision for agent training in simulated environments and anthropomorphic robotics.
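The kind of retino-cortical compression such a model borrows can be caricatured with a toy log-polar binning: pixels are grouped by log eccentricity and polar angle, so the centre is sampled finely and the periphery coarsely. This is a generic illustration under made-up bin counts and foveal radius, not the authors' sampling scheme:

```python
import numpy as np

def logpolar_bins(h, w, n_ecc=8, n_ang=16, fovea=2.0):
    """Assign each pixel of an (h, w) image to a log-polar bin.

    Bins are uniform in log eccentricity, so bin area (and thus
    compression) grows toward the periphery. n_ecc, n_ang and the
    foveal radius are illustrative choices."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ecc = np.hypot(ys - cy, xs - cx) + 1e-9      # avoid log(0) at centre
    ang = np.arctan2(ys - cy, xs - cx)
    e_idx = np.clip((np.log(ecc / fovea) / np.log(ecc.max() / fovea)
                     * n_ecc).astype(int), 0, n_ecc - 1)
    a_idx = ((ang + np.pi) / (2 * np.pi) * n_ang).astype(int) % n_ang
    return e_idx * n_ang + a_idx

def foveate(img, bins, n_bins):
    """Replace each pixel by its bin mean: sharp centre, coarse surround."""
    sums = np.bincount(bins.ravel(), weights=img.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    return (sums / np.maximum(counts, 1))[bins]

img = np.random.default_rng(0).random((64, 64))
bins = logpolar_bins(64, 64)
out = foveate(img, bins, 8 * 16)   # 128 numbers summarize 4096 pixels
```

Here 4096 pixels compress into 128 bin values, which is the flavour of saving the abstract's tenfold figure refers to, though the real model learns its sampling end to end.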
Affiliation(s)
- Hristofor Lukanov
- Department of Neuroinformatics, Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Peter König
- Department of Neurobiopsychology, Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Gordon Pipa
- Department of Neuroinformatics, Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany

28
Ribeiro FL, Bollmann S, Puckett AM. Predicting the retinotopic organization of human visual cortex from anatomy using geometric deep learning. Neuroimage 2021; 244:118624. [PMID: 34607019 DOI: 10.1016/j.neuroimage.2021.118624]
Abstract
Whether it be in a single neuron or a more complex biological system like the human brain, form and function are often directly related. The functional organization of human visual cortex, for instance, is tightly coupled with the underlying anatomy with cortical shape having been shown to be a useful predictor of the retinotopic organization in early visual cortex. Although the current state-of-the-art in predicting retinotopic maps is able to account for gross individual differences, such models are unable to account for any idiosyncratic differences in the structure-function relationship from anatomical information alone due to their initial assumption of a template. Here we developed a geometric deep learning model capable of exploiting the actual structure of the cortex to learn the complex relationship between brain function and anatomy in human visual cortex such that more realistic and idiosyncratic maps could be predicted. We show that our neural network was not only able to predict the functional organization throughout the visual cortical hierarchy, but that it was also able to predict nuanced variations across individuals. Although we demonstrate its utility for modeling the relationship between structure and function in human visual cortex, our approach is flexible and well-suited for a range of other applications involving data structured in non-Euclidean spaces.
Affiliation(s)
- Fernanda L Ribeiro
- School of Psychology, The University of Queensland, Saint Lucia, Brisbane, QLD 4072, Australia; Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia
- Steffen Bollmann
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD 4072, Australia
- Alexander M Puckett
- School of Psychology, The University of Queensland, Saint Lucia, Brisbane, QLD 4072, Australia; Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia

29
Lehnert BP, Santiago C, Huey EL, Emanuel AJ, Renauld S, Africawala N, Alkislar I, Zheng Y, Bai L, Koutsioumpa C, Hong JT, Magee AR, Harvey CD, Ginty DD. Mechanoreceptor synapses in the brainstem shape the central representation of touch. Cell 2021; 184:5608-5621.e18. [PMID: 34637701 PMCID: PMC8556359 DOI: 10.1016/j.cell.2021.09.023]
Abstract
Mammals use glabrous (hairless) skin of their hands and feet to navigate and manipulate their environment. Cortical maps of the body surface across species contain disproportionately large numbers of neurons dedicated to glabrous skin sensation, in part reflecting a higher density of mechanoreceptors that innervate these skin regions. Here, we find that disproportionate representation of glabrous skin emerges over postnatal development at the first synapse between peripheral mechanoreceptors and their central targets in the brainstem. Mechanoreceptor synapses undergo developmental refinement that depends on proximity of their terminals to glabrous skin, such that those innervating glabrous skin make synaptic connections that expand their central representation. In mice incapable of sensing gentle touch, mechanoreceptors innervating glabrous skin still make more powerful synapses in the brainstem. We propose that the skin region a mechanoreceptor innervates controls the developmental refinement of its central synapses to shape the representation of touch in the brain.
Affiliation(s)
- Brendan P Lehnert
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Celine Santiago
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Erica L Huey
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Alan J Emanuel
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Sophia Renauld
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Nusrat Africawala
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Ilayda Alkislar
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Yang Zheng
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Ling Bai
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Charalampia Koutsioumpa
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Jennifer T Hong
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Alexandra R Magee
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Christopher D Harvey
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- David D Ginty
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA

30
Żołubak A, Garcia-Suarez L. Shape discrimination in peripheral vision: Addressing pragmatic limitations of M-scaling radial frequency patterns. Vision Res 2021; 188:115-125. [PMID: 34315091 DOI: 10.1016/j.visres.2021.06.012]
Abstract
Peripheral worsening in shape discrimination (SD) can be compensated for by size-scaling of peripheral stimuli. However, such scaling results in the production of large stimuli that occupy a vast range of eccentricities. We used six proportionally decreasing spatial scales to address this pragmatic limitation and to explore how shape discrimination varies with radius in the nasal visual field. Five participants with normal vision discriminated circles and radial frequency (RF) patterns presented nasally to the fixation point at 5°, 10°, 15° and 20°. Stimuli were scaled with the nasal cortical magnification factor (nCMF) from a central stimulus at six spatial scales (SS), which varied from 0.125 to 1, where 1 corresponded to a 1.2° radius. Thresholds expressed in Weber fractions remained constant at eccentricities up to 20° regardless of the spatial scale. Weber fractions for the smaller spatial scales (0.125-0.5) were higher and more variable than for the larger spatial scales (0.75-1), yet still constant across the periphery. The results provide evidence that peripheral shape discrimination is constrained by low-level properties, such as eccentricity, and can be predicted by the cortical magnification theory. However, above the peripheral modulation resolution limits, RF shape discrimination is based on the proportion between the modulation amplitude and the radius for the larger scales (0.75-1), and demonstrates peripheral scale invariance for these stimuli. For eccentric shape discrimination tests, stimuli with low spatial frequency, high contrast, and radii corresponding to SS 0.75-0.875 should be used to ensure constant Weber fractions, small variability, and peripheral stimuli that are not excessively magnified.
Affiliation(s)
- Anna Żołubak
- School of Health Professions, University of Plymouth, Derriford Road, Plymouth PL6 8BH, United Kingdom
- Luis Garcia-Suarez
- School of Health Professions, University of Plymouth, Derriford Road, Plymouth PL6 8BH, United Kingdom

31
Rolls ET. Learning Invariant Object and Spatial View Representations in the Brain Using Slow Unsupervised Learning. Front Comput Neurosci 2021; 15:686239. [PMID: 34366818 PMCID: PMC8335547 DOI: 10.3389/fncom.2021.686239]
Abstract
First, neurophysiological evidence for the learning of invariant representations in the inferior temporal visual cortex is described. This includes object and face representations with invariance for position, size, lighting, view and morphological transforms in the temporal lobe visual cortex; global object motion in the cortex in the superior temporal sulcus; and spatial view representations in the hippocampus that are invariant with respect to eye position, head direction, and place. Second, computational mechanisms that enable the brain to learn these invariant representations are proposed. For the ventral visual system, one key adaptation is the use of information available in the statistics of the environment in slow unsupervised learning to learn transform-invariant representations of objects. This contrasts with deep supervised learning in artificial neural networks, which uses training with thousands of exemplars forced into different categories by neuronal teachers. Similar slow learning principles apply to the learning of global object motion in the dorsal visual system leading to the cortex in the superior temporal sulcus. The learning rule that has been explored in VisNet is an associative rule with a short-term memory trace. The feed-forward architecture has four stages, with convergence from stage to stage. This type of slow learning is implemented in the brain in hierarchically organized competitive neuronal networks with convergence from stage to stage, with only 4-5 stages in the hierarchy. Slow learning is also shown to help the learning of coordinate transforms using gain modulation in the dorsal visual system extending into the parietal cortex and retrosplenial cortex. Representations are learned that are in allocentric spatial view coordinates of locations in the world and that are independent of eye position, head direction, and the place where the individual is located. This enables hippocampal spatial view cells to use idiothetic (self-motion) signals for navigation when the view details are obscured for short periods.
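The associative trace rule mentioned for VisNet can be illustrated in a few lines: the weight change is proportional to a decaying trace of recent postsynaptic activity times the current input, so temporally adjacent transforms of the same object strengthen the same weight vector. This is a generic sketch with a single linear toy neuron and made-up constants, not VisNet's actual four-stage competitive network:

```python
import numpy as np

def trace_update(w, transforms, alpha=0.1, eta=0.8):
    """Apply a trace learning rule over a sequence of transforms.

    y_bar is a short-term memory trace of postsynaptic activity
    (alpha = learning rate, eta = trace decay; both illustrative).
    """
    y_bar = 0.0
    for x in transforms:                  # x: input firing-rate vector
        y = float(w @ x)                  # linear toy neuron's response
        y_bar = (1.0 - eta) * y + eta * y_bar
        w = w + alpha * y_bar * x         # associative update with trace
    return w

# Three shifted views of the same "object" presented in sequence:
views = [np.roll(np.array([1.0, 1.0, 0.0, 0.0, 0.0]), k) for k in range(3)]
w = trace_update(np.array([0.2, 0.0, 0.0, 0.0, 0.0]), views)
# After learning, the neuron responds to all three views, not just the first.
```

Because the trace bridges successive views, a neuron that initially responded only to the first view acquires weight on inputs active in the later, shifted views, which is the core of the transform-invariance argument.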
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, United Kingdom
- Department of Computer Science, University of Warwick, Coventry, United Kingdom

32
Errors in visuospatial working memory across space and time. Sci Rep 2021; 11:14449. [PMID: 34262103 PMCID: PMC8280190 DOI: 10.1038/s41598-021-93858-6]
Abstract
Visuospatial working memory (VSWM) involves cortical regions along the dorsal visual pathway, which are topographically organized with respect to the visual space. However, it remains unclear how such functional organization may constrain VSWM behavior across space and time. Here, we systematically mapped VSWM performance across the 2-dimensional (2D) space in various retention intervals in human subjects using the memory-guided and visually guided saccade tasks in two experiments. Relative to visually guided saccades, memory-guided saccades showed significant increases in unsystematic errors, or response variability, with increasing target eccentricity (3°–13° of visual angle). Unsystematic errors also increased with increasing delay (1.5–3 s, Experiment 1; 0.5–5 s, Experiment 2), while there was little or no interaction between delay and eccentricity. Continuous bump attractor modeling suggested neurophysiological and functional organization factors in the increasing unsystematic errors in VSWM across space and time. These findings indicate that: (1) VSWM representation may be limited by the functional topology of the visual pathway for the 2D space; (2) Unsystematic errors may reflect accumulated noise from memory maintenance while systematic errors may originate from non-mnemonic processes such as noisy sensorimotor transformation; (3) There may be independent mechanisms supporting the spatial and temporal processing of VSWM.
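The bump-attractor account of the growing unsystematic error can be caricatured with a diffusion toy model: the remembered location drifts randomly during the delay, and the drift rate scales with eccentricity. The constants below are invented for illustration; this is not the study's fitted model:

```python
import numpy as np

def memory_error_sd(delay_s, ecc_deg, d0=0.05, k=0.02, n=20000, seed=0):
    """Toy diffusion model of delay-dependent response variability.

    The remembered location random-walks during the delay; the
    diffusion coefficient grows linearly with eccentricity
    (d0, k are made-up constants). Returns the SD of the final error."""
    rng = np.random.default_rng(seed)
    diffusion = d0 + k * ecc_deg                     # deg^2/s, illustrative
    errors = rng.normal(0.0, np.sqrt(diffusion * delay_s), size=n)
    return errors.std()

# Unsystematic error grows with both delay and eccentricity,
# qualitatively matching the memory-guided saccade results.
for ecc in (3, 13):
    sds = [memory_error_sd(d, ecc) for d in (0.5, 3.0, 5.0)]
```

The square-root-of-time growth is the signature of accumulated maintenance noise; systematic errors, by contrast, would appear as a nonzero mean offset, consistent with the paper's attribution of them to non-mnemonic processes.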
33
Linhardt D, Pawloff M, Hummer A, Woletz M, Tik M, Ritter M, Schmidt-Erfurth U, Windischberger C. Combining stimulus types for improved coverage in population receptive field mapping. Neuroimage 2021; 238:118240. [PMID: 34116157 DOI: 10.1016/j.neuroimage.2021.118240]
Abstract
Retinotopy experiments using population receptive field (pRF) mapping are ideal for assigning regions of the visual field to cortical brain areas. While various designs for visual stimulation have been suggested in the literature, all have specific shortcomings regarding visual field coverage. Here we acquired high-resolution 7 Tesla fMRI data to compare pRF-based coverage maps obtained with the two most commonly used stimulus variants: moving bars versus rotating wedges combined with expanding rings. We find that stimulus selection biases the spatial distribution of pRF centres. In addition, eccentricity values and pRF sizes obtained from wedge/ring and bar stimulation runs show systematic differences: wedge/ring stimulation yields lower eccentricity values and strongly reduced pRF sizes compared with bar stimulation runs. Statistical comparison shows significantly higher numbers of pRF centres in the foveal 2° region of the visual field for wedge/ring than for bar stimuli. We suggest and evaluate approaches for combining pRF data from different visual stimulus patterns to obtain improved mapping results.
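The pRF estimates being compared here rest on the standard Gaussian pRF model: a voxel's predicted response is the overlap of each stimulus aperture frame with a 2D Gaussian in the visual field, and the fitted centre and size give eccentricity and pRF size. A minimal sketch, in which the grid size, pRF parameters, and sweeping-bar stimulus are all illustrative:

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, xs, ys):
    """2D Gaussian pRF centred at (x0, y0) deg with size sigma deg."""
    return np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))

def predicted_response(prf, apertures):
    """Overlap of each binarized stimulus frame (T, H, W) with the pRF."""
    return apertures.reshape(apertures.shape[0], -1) @ prf.ravel()

# Toy visual field: 21 x 21 grid spanning +/-10 deg in 1-deg steps.
coords = np.linspace(-10.0, 10.0, 21)
xs, ys = np.meshgrid(coords, coords)
prf = gaussian_prf(3.0, 0.0, 2.0, xs, ys)   # a pRF at 3 deg eccentricity

# A vertical bar sweeping left to right, one column per frame.
frames = np.zeros((21, 21, 21))
for t in range(21):
    frames[t, :, t] = 1.0

tc = predicted_response(prf, frames)        # model time course
# The response peaks when the bar crosses the pRF centre (x = 3 deg).
```

Fitting inverts this forward model per voxel; the paper's point is that the recovered (x0, y0, sigma) depend systematically on whether the apertures are bars or wedges/rings.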
Affiliation(s)
- David Linhardt
- High Field MR Center, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Maximilian Pawloff
- Department of Ophthalmology, Medical University of Vienna, Vienna, Austria
- Allan Hummer
- High Field MR Center, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Michael Woletz
- High Field MR Center, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Martin Tik
- High Field MR Center, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Markus Ritter
- Department of Ophthalmology, Medical University of Vienna, Vienna, Austria
- Christian Windischberger
- High Field MR Center, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria

34
Elshout JA, Bergsma DP, van den Berg AV, Haak KV. Functional MRI of visual cortex predicts training-induced recovery in stroke patients with homonymous visual field defects. Neuroimage Clin 2021; 31:102703. [PMID: 34062384 PMCID: PMC8173295 DOI: 10.1016/j.nicl.2021.102703]
Abstract
Highlights: Damage to the visual brain typically leads to vision loss. Vision loss may be partially recovered with visual restitution training (VRT). Cortical responses to visual stimulation do not always lead to visual awareness. A mismatch between Humphrey and neural perimetry predicts training outcome. This finding has important implications for better rehabilitation strategies.
Post-chiasmatic damage to the visual system leads to homonymous visual field defects (HVDs), which can severely interfere with daily life activities. Visual Restitution Training (VRT) can recover parts of the affected visual field in patients with chronic HVDs, but training outcome is variable. An untested hypothesis suggests that training potential may be largest in regions with ‘neural reserve’, where cortical responses to visual stimulation do not lead to visual awareness as assessed by Humphrey perimetry—a standard behavioural visual field test. Here, we tested this hypothesis in a sample of twenty-seven hemianopic stroke patients, who participated in an assiduous 80-hour VRT program. For each patient, we collected Humphrey perimetry and wide-field fMRI-based retinotopic mapping data prior to training. In addition, we used Goal Attainment Scaling to assess whether personal activities in daily living improved. After training, we assessed with a second Humphrey perimetry measurement whether the visual field was improved and evaluated which personal goals were attained. Confirming the hypothesis, we found significantly larger improvements of visual sensitivity at field locations with neural reserve. These visual field improvements implicated both regions in primary visual cortex and higher order visual areas. In addition, improvement in daily life activities correlated with the extent of visual field enlargement. Our findings are an important step toward understanding the mechanisms of visual restitution as well as predicting training efficacy in stroke patients with chronic hemianopia.
Affiliation(s)
- J A Elshout
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, The Netherlands
- D P Bergsma
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, The Netherlands
- A V van den Berg
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, The Netherlands
- K V Haak
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, The Netherlands.
35
Wang K, Hinz J, Zhang Y, Thiele TR, Arrenberg AB. Parallel Channels for Motion Feature Extraction in the Pretectum and Tectum of Larval Zebrafish. Cell Rep 2020; 30:442-453.e6. [PMID: 31940488 DOI: 10.1016/j.celrep.2019.12.031] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2019] [Revised: 07/27/2019] [Accepted: 12/09/2019] [Indexed: 11/18/2022] Open
Abstract
Non-cortical visual areas in vertebrate brains extract relevant stimulus features, such as motion, object size, and location, to support diverse behavioral tasks. The optic tectum and pretectum, two primary visual areas in zebrafish, are involved in motion processing, and yet their differential neural representation of behaviorally relevant visual features is unclear. Here, we characterize receptive fields (RFs) of motion-sensitive neurons in the diencephalon and midbrain. We show that RFs of many pretectal neurons are large and sample the lower visual field, whereas RFs of tectal neurons are mostly small-size selective and sample the upper nasal visual field more densely. Furthermore, optomotor swimming can reliably be evoked by presenting forward motion in the lower temporal visual field alone, matching the lower visual field bias of the pretectum. Thus, tectum and pretectum extract different visual features from distinct regions of visual space, which is likely a result of their adaptations to hunting and optomotor behavior, respectively.
Affiliation(s)
- Kun Wang
- Werner Reichardt Centre for Integrative Neuroscience, Institute for Neurobiology, University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre for Neuroscience, University of Tübingen, 72074 Tübingen, Germany
- Julian Hinz
- Werner Reichardt Centre for Integrative Neuroscience, Institute for Neurobiology, University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre for Neuroscience, University of Tübingen, 72074 Tübingen, Germany
- Yue Zhang
- Werner Reichardt Centre for Integrative Neuroscience, Institute for Neurobiology, University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre for Neuroscience, University of Tübingen, 72074 Tübingen, Germany
- Tod R Thiele
- Department of Biological Sciences, University of Toronto Scarborough, Toronto, ON M1C 1A4, Canada
- Aristides B Arrenberg
- Werner Reichardt Centre for Integrative Neuroscience, Institute for Neurobiology, University of Tübingen, 72076 Tübingen, Germany.
36
Friedman R. Themes of advanced information processing in the primate brain. AIMS Neurosci 2020; 7:373-388. [PMID: 33263076 PMCID: PMC7701368 DOI: 10.3934/neuroscience.2020023] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2020] [Accepted: 10/09/2020] [Indexed: 11/30/2022] Open
Abstract
Here is a review of several empirical examples of information processing that occur in the primate cerebral cortex. These include visual processing, object identification and perception, information encoding, and memory. Also, there is a discussion of the higher scale neural organization, mainly theoretical, which suggests hypotheses on how the brain internally represents objects. Altogether they support the general attributes of the mechanisms of brain computation, such as efficiency, resiliency, data compression, and a modularization of neural function and their pathways. Moreover, the specific neural encoding schemes are expectedly stochastic, abstract and not easily decoded by theoretical or empirical approaches.
Affiliation(s)
- Robert Friedman
- Department of Biological Sciences, University of South Carolina, Columbia 29208, USA
37
Woertz EN, Wilk MA, Duwell EJ, Mathis JR, Carroll J, DeYoe EA. The relationship between retinal cone density and cortical magnification in human albinism. J Vis 2020; 20:10. [PMID: 32543650 PMCID: PMC7416892 DOI: 10.1167/jov.20.6.10] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Abstract
The human fovea lies at the center of the retina and supports high-acuity vision. In normal visual system development, the highest acuity is correlated with both a high density of cone photoreceptors in the fovea and a magnified retinotopic representation of the fovea in the visual cortex. Both cone density and the cortical area dedicated to each degree of visual space—the latter describing cortical magnification (CM)—steadily decrease with increasing eccentricity from the fovea. In albinism, peak cone density at the fovea and visual acuity are decreased, but seem to be within normal limits in the periphery, thus providing a model to explore the correlation between retinal structure, cortical structure, and behavior. Here, we used adaptive optics scanning light ophthalmoscopy to assess retinal cone density and functional magnetic resonance imaging to measure CM in the primary visual cortex of normal controls and individuals with albinism. We find that retinotopic organization is more varied among individuals with albinism than previously appreciated. Additionally, CM outside the fovea is similar to that in controls, but also more variable. CM in albinism and controls exceeds that which might be predicted based on cone density alone, but is more accurately predicted by retinal ganglion cell density. This finding suggests that decreased foveal cone density in albinism may be partially counteracted by nonuniform connectivity between cones and their downstream signaling partners. Together, these results emphasize that central as well as retinal factors must be included to provide a complete picture of aberrant structure and function in albinism.
38
Kroner A, Senden M, Driessens K, Goebel R. Contextual encoder-decoder network for visual saliency prediction. Neural Netw 2020; 129:261-270. [PMID: 32563023 DOI: 10.1016/j.neunet.2020.05.004] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2019] [Revised: 03/19/2020] [Accepted: 05/04/2020] [Indexed: 11/28/2022]
Abstract
Predicting salient regions in natural images requires the detection of objects that are present in a scene. To develop robust representations for this challenging task, high-level visual features at multiple spatial scales must be extracted and augmented with contextual information. However, existing models aimed at explaining human fixation maps do not incorporate such a mechanism explicitly. Here we propose an approach based on a convolutional neural network pre-trained on a large-scale image classification task. The architecture forms an encoder-decoder structure and includes a module with multiple convolutional layers at different dilation rates to capture multi-scale features in parallel. Moreover, we combine the resulting representations with global scene information for accurately predicting visual saliency. Our model achieves competitive and consistent results across multiple evaluation metrics on two public saliency benchmarks and we demonstrate the effectiveness of the suggested approach on five datasets and selected examples. Compared to state of the art approaches, the network is based on a lightweight image classification backbone and hence presents a suitable choice for applications with limited computational resources, such as (virtual) robotic systems, to estimate human fixations across complex natural scenes. Our TensorFlow implementation is openly available at https://github.com/alexanderkroner/saliency.
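The core architectural idea here — running the same convolution at several dilation rates in parallel to capture multi-scale context without extra parameters — can be sketched in plain NumPy. This is a toy illustration, not the authors' TensorFlow implementation; the input size, kernel, and rates are arbitrary:

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """Valid 2D convolution with the kernel dilated by `rate`
    (i.e., rate-1 implicit zeros between taps, as in atrous convolution)."""
    k = kernel.shape[0]
    eff = k + (k - 1) * (rate - 1)  # effective kernel size after dilation
    h, w = x.shape
    out = np.zeros((h - eff + 1, w - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input at dilated tap positions.
            patch = x[i:i + eff:rate, j:j + eff:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

def multi_scale_features(x, kernel, rates=(1, 2, 4)):
    """Apply one kernel at several dilation rates in parallel and crop
    the outputs to a common size, mimicking a multi-scale module."""
    maps = [dilated_conv2d(x, kernel, r) for r in rates]
    size = min(m.shape[0] for m in maps)
    return np.stack([m[:size, :size] for m in maps])  # (n_rates, size, size)

x = np.random.rand(16, 16)
k = np.ones((3, 3)) / 9.0
feats = multi_scale_features(x, k)
```

Larger rates enlarge the receptive field of the same 3x3 kernel (here to 5x5 and 9x9), which is what lets such a module gather context at multiple spatial scales in parallel.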
Affiliation(s)
- Alexander Kroner
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Maastricht Brain Imaging Centre, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.
- Mario Senden
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Maastricht Brain Imaging Centre, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.
- Kurt Driessens
- Department of Data Science and Knowledge Engineering, Faculty of Science and Engineering, Maastricht University, Maastricht, The Netherlands.
- Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Maastricht Brain Imaging Centre, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Department of Neuroimaging and Neuromodeling, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences (KNAW), Amsterdam, The Netherlands.
39
Grasso PA, Gallina J, Bertini C. Shaping the visual system: cortical and subcortical plasticity in the intact and the lesioned brain. Neuropsychologia 2020; 142:107464. [PMID: 32289349 DOI: 10.1016/j.neuropsychologia.2020.107464] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2020] [Accepted: 04/08/2020] [Indexed: 02/06/2023]
Abstract
The visual system is endowed with an incredibly complex organization composed of multiple visual pathways affording both hierarchical and parallel processing. Even if most of the visual information is conveyed by the retina to the lateral geniculate nucleus of the thalamus and then to primary visual cortex, a wealth of alternative subcortical pathways is present. This complex organization is experience-dependent and retains plastic properties throughout the lifespan, enabling the system to continuously update its functions in response to variable external needs. Changes can be induced by several factors, including learning and experience, but can also be promoted by the use of non-invasive brain stimulation techniques. Furthermore, besides the astonishing ability of our visual system to spontaneously reorganize after injuries, we now know that exposure to specific rehabilitative training can produce not only important functional modifications but also long-lasting changes within cortical and subcortical structures. The present review aims to update and address the current state of the art on these topics, gathering studies that reported relevant modifications of visual functioning together with plastic changes within cortical and subcortical structures, both in the healthy and in the lesioned visual system.
Affiliation(s)
- Paolo A Grasso
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Florence, 50135, Italy.
- Jessica Gallina
- Department of Psychology, University of Bologna, Bologna, 40127, Italy; CsrNC, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, 47521, Italy
- Caterina Bertini
- Department of Psychology, University of Bologna, Bologna, 40127, Italy; CsrNC, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, 47521, Italy
40
Ananyev E, Yong Z, Hsieh PJ. Center-surround velocity-based segmentation: Speed, eccentricity, and timing of visual stimuli interact to determine interocular dominance. J Vis 2019; 19:3. [PMID: 31689716 DOI: 10.1167/19.13.3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
We used a novel method to capture the spatial dominance pattern of competing motion fields at rivalry onset. When rivaling velocities were different, the participants reported center-surround segmentation: The slower stimuli often dominated in the center while faster motion persisted along the borders. The size of the central static/slow field scaled with the stimulus size. The central dominance was time-locked to the static stimulus onset but was disrupted if the dynamic stimulus was presented later. We then used the same stimuli as masks in an interocular suppression paradigm. The local suppression strengths were probed with targets at different eccentricities. Consistent with the center-surround segmentation, target speed and location interacted with mask velocities. Specifically, suppression power of the slower masks was nonhomogenous with eccentricity, providing a potential explanation for center-surround velocity-based segmentation. This interaction of speed, eccentricity, and timing has implications for motion processing and interocular suppression. The influence of different masks on which target features get suppressed predicts that some "unconscious effects" are not generalizable across masks and, thus, need to be replicated under various masking conditions.
Affiliation(s)
- Egor Ananyev
- Nanyang Technological University, Department of Psychology, Singapore
- Zixin Yong
- Duke-NUS Medical School, Neuroscience and Behavioural Disorders Program, Singapore
- Po-Jang Hsieh
- National Taiwan University, Department of Psychology, Taipei, Taiwan
41
Chung STL. Reading in the presence of macular disease: a mini-review. Ophthalmic Physiol Opt 2020; 40:171-186. [PMID: 31925832 PMCID: PMC7093247 DOI: 10.1111/opo.12664] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2019] [Accepted: 12/04/2019] [Indexed: 12/02/2022]
Abstract
Purpose Reading is vital to full participation in modern society. To millions of people suffering from macular disease that results in a central scotoma, reading is difficult and inefficient, rendering reading as the primary goal for most patients seeking low vision rehabilitation. The goals of this review paper are to summarize the dependence of reading speed on several key visual and typographical factors and the current methods or technologies for improving reading performance for people with macular disease. Important findings In general, reading speed for people with macular disease depends on print size, text contrast, size of the visual span, temporal processing of letters and oculomotor control. Attempts at improving reading speed by reducing the crowding effect between letters, words or lines; or optimizing properties of typeface such as the presence of serifs or stroke‐width thickness proved to be futile, with any improvement being modest at best. Currently, the most promising method to improve reading speed for people with macular disease is training, including perceptual learning or oculomotor training. Summary The limitation on reading speed for people with macular disease is likely to be multi‐factorial. Future studies should try to understand how different factors interact to limit reading speed, and whether different methods could be combined to produce a much greater benefit.
Affiliation(s)
- Susana T L Chung
- School of Optometry, University of California, Berkeley, California, USA
42
Backer KC, Kessler AS, Lawyer LA, Corina DP, Miller LM. A novel EEG paradigm to simultaneously and rapidly assess the functioning of auditory and visual pathways. J Neurophysiol 2019; 122:1312-1329. [PMID: 31268796 DOI: 10.1152/jn.00868.2018] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023] Open
Abstract
Objective assessment of the sensory pathways is crucial for understanding their development across the life span and how they may be affected by neurodevelopmental disorders (e.g., autism spectrum) and neurological pathologies (e.g., stroke, multiple sclerosis, etc.). Quick and passive measurements, for example, using electroencephalography (EEG), are especially important when working with infants and young children and with patient populations having communication deficits (e.g., aphasia). However, many EEG paradigms are limited to measuring activity from one sensory domain at a time, may be time consuming, and target only a subset of possible responses from that particular sensory domain (e.g., only auditory brainstem responses or only auditory P1-N1-P2 evoked potentials). Thus we developed a new multisensory paradigm that enables simultaneous, robust, and rapid (6-12 min) measurements of both auditory and visual EEG activity, including auditory brainstem responses, auditory and visual evoked potentials, as well as auditory and visual steady-state responses. This novel method allows us to examine neural activity at various stations along the auditory and visual hierarchies with an ecologically valid continuous speech stimulus, while an unrelated video is playing. Both the speech stimulus and the video can be customized for any population of interest. Furthermore, by using two simultaneous visual steady-state stimulation rates, we demonstrate the ability of this paradigm to track both parafoveal and peripheral visual processing concurrently. We report results from 25 healthy young adults, which validate this new paradigm.
NEW & NOTEWORTHY A novel electroencephalography paradigm enables the rapid, reliable, and noninvasive assessment of neural activity along both auditory and visual pathways concurrently. The paradigm uses an ecologically valid continuous speech stimulus for auditory evaluation and can simultaneously track visual activity to both parafoveal and peripheral visual space. This new methodology may be particularly appealing to researchers and clinicians working with infants and young children and with patient populations with limited communication abilities.
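The dual-rate tagging logic — each steady-state stimulation rate produces a spectral peak at its own frequency, so parafoveal and peripheral responses can be read out from a single recording — can be illustrated with simulated data. The sampling rate, tag frequencies, amplitudes, and noise level below are made-up values for the sketch, not the study's parameters:

```python
import numpy as np

fs = 500.0                      # illustrative sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)    # 10 s of simulated EEG
f_para, f_peri = 12.0, 15.0     # hypothetical tagging frequencies (Hz)

rng = np.random.default_rng(0)
# Each stimulation rate "tags" its own response; both are buried
# in broadband noise in the same simulated channel.
eeg = (np.sin(2 * np.pi * f_para * t)
       + 0.5 * np.sin(2 * np.pi * f_peri * t)
       + rng.normal(0.0, 1.0, t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amplitude_at(f):
    """Spectral amplitude at the bin nearest frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]
```

Because the two tag frequencies occupy distinct FFT bins, `amplitude_at(f_para)` and `amplitude_at(f_peri)` isolate the two responses even though they were recorded simultaneously.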
Affiliation(s)
- Kristina C Backer
- Center for Mind and Brain, University of California, Davis, California; Department of Cognitive and Information Sciences, University of California, Merced, California
- Andrew S Kessler
- Center for Mind and Brain, University of California, Davis, California
- Laurel A Lawyer
- Center for Mind and Brain, University of California, Davis, California
- David P Corina
- Center for Mind and Brain, University of California, Davis, California; Department of Linguistics, University of California, Davis, California
- Lee M Miller
- Center for Mind and Brain, University of California, Davis, California; Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California
43
Ahveninen J, Ingalls G, Yildirim F, Calabro FJ, Vaina LM. Peripheral visual localization is degraded by globally incongruent auditory-spatial attention cues. Exp Brain Res 2019; 237:2137-2143. [PMID: 31201472 DOI: 10.1007/s00221-019-05578-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2019] [Accepted: 06/07/2019] [Indexed: 11/26/2022]
Abstract
Global auditory-spatial orienting cues help the detection of weak visual stimuli, but it is not clear whether crossmodal attention cues also enhance the resolution of visuospatial discrimination. Here, we hypothesized that if anywhere, crossmodal modulations of visual localization should emerge in the periphery where the receptive fields are large. Subjects were presented with trials where a Visual Target, defined by a cluster of low-luminance dots, was shown for 220 ms at 25°-35° eccentricity in either the left or right hemifield. The Visual Target was either Uncued or it was presented 250 ms after a crossmodal Auditory Cue that was simulated either from the same or the opposite hemifield than the Visual Target location. After a whole-screen visual mask displayed for 800 ms, a pair of vertical Reference Bars was presented ipsilateral to the Visual Target. In a two-alternative forced choice task, subjects were asked to determine which of these two bars was closer to the center of the Visual Target. When the Auditory Cue and Visual Target were hemispatially incongruent, the speed and accuracy of visual localization performance was significantly impaired. However, hemispatially congruent Auditory Cues did not improve the localization of Visual Targets when compared to the Uncued condition. Further analyses suggested that the crossmodal Auditory Cues decreased the sensitivity (d') of the Visual Target localization without affecting post-perceptual decision biases. Our results suggest that in the visual periphery, the detrimental effect of hemispatially incongruent Auditory Cues is far greater than the benefit produced by hemispatially congruent cues. Our working hypothesis for future studies is that auditory-spatial attention cues suppress irrelevant visual locations in a global fashion, without modulating the local visual precision at relevant sites.
Affiliation(s)
- Jyrki Ahveninen
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA.
- Grace Ingalls
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Funda Yildirim
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Finnegan J Calabro
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Department of Psychiatry and Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Lucia M Vaina
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Department of Neurology, Harvard Medical School, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, MA, USA
44
Bornet A, Kaiser J, Kroner A, Falotico E, Ambrosano A, Cantero K, Herzog MH, Francis G. Running Large-Scale Simulations on the Neurorobotics Platform to Understand Vision - The Case of Visual Crowding. Front Neurorobot 2019; 13:33. [PMID: 31191291 PMCID: PMC6549494 DOI: 10.3389/fnbot.2019.00033] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Accepted: 05/14/2019] [Indexed: 11/13/2022] Open
Abstract
Traditionally, human vision research has focused on specific paradigms and proposed models to explain very specific properties of visual perception. However, the complexity and scope of modern psychophysical paradigms undermine the success of this approach. For example, perception of an element strongly deteriorates when neighboring elements are presented in addition (visual crowding). As it was shown recently, the magnitude of deterioration depends not only on the directly neighboring elements but on almost all elements and their specific configuration. Hence, to fully explain human visual perception, one needs to take large parts of the visual field into account and combine all the aspects of vision that become relevant at such scale. These efforts require sophisticated and collaborative modeling. The Neurorobotics Platform (NRP) of the Human Brain Project offers a unique opportunity to connect models of all sorts of visual functions, even those developed by different research groups, into a coherently functioning system. Here, we describe how we used the NRP to connect and simulate a segmentation model, a retina model, and a saliency model to explain complex results about visual perception. The combination of models highlights the versatility of the NRP and provides novel explanations for inward-outward anisotropy in visual crowding.
Affiliation(s)
- Alban Bornet
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Jacques Kaiser
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Alexander Kroner
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands
- Egidio Falotico
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Michael H. Herzog
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Gregory Francis
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, United States
45
Elshout JA, van den Berg AV, Haak KV. Human V2A: A map of the peripheral visual hemifield with functional connections to scene-selective cortex. J Vis 2018; 18:22. [PMID: 30267074 PMCID: PMC6159387 DOI: 10.1167/18.9.22] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Humans can recognize a scene in the blink of an eye. This gist-based visual scene perception is thought to be underpinned by specialized visual processing emphasizing the visual periphery at a cortical locus relatively low in the visual processing hierarchy. Using wide-field retinotopic mapping and population receptive field (pRF) modeling, we identified a new visual hemifield map anterior of area V2d and inferior to area V6, which we propose to call area V2A. Based on its location relative to other visual areas, V2A may correspond to area 23V described in nonhuman primates. The pRF analysis revealed unique receptive field properties for V2A: a large (FWHM ∼23°) and constant receptive field size across the central ∼70° of the visual field. Resting-state fMRI connectivity analysis further suggests that V2A is ideally suited to quickly feed the scene-processing network with information that is not biased towards the center of the visual field. Our findings not only indicate a likely cortical locus for the initial stages of gist-based visual scene perception, but also suggest a reappraisal of the organization of human dorsomedial occipital cortex with a strip of separate hemifield representations anterior to the early visual areas (V1, V2d, and V3d).
Affiliation(s)
- Joris A Elshout
- Department of Cognitive Neuroscience, Radboud University Medical Centre, Nijmegen, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, the Netherlands
- Albert V van den Berg
- Department of Cognitive Neuroscience, Radboud University Medical Centre, Nijmegen, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, the Netherlands
- Koen V Haak
- Department of Cognitive Neuroscience, Radboud University Medical Centre, Nijmegen, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, the Netherlands
46
Tomairek RH, Aboud SA, Hassan M, Mohamed AH. Studying the role of 10-2 visual field test in different stages of glaucoma. Eur J Ophthalmol 2019; 30:706-713. [DOI: 10.1177/1120672119836904] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Objective: To assess the role of 10-2 visual field (VF) test in different stages of glaucoma. Methods: In our prospective comparative study, 24-2 and 10-2 VF tests were done for 115 eyes with different stages of glaucomatous damage or glaucoma suspects. Optical coherence tomography (OCT) was performed in 79 eyes. We compared field changes of the central 10° on 10-2 and 24-2 tests and studied the correlation between the mean deviation (MD) measured by the two tests. Results: In seven glaucoma suspects, glaucoma diagnosis was missed by 24-2 test but was detected by 10-2 test and confirmed by OCT. In the eyes with early damage, there was no correlation between 10-2 and 24-2 tests regarding the MD of the central 10°. In moderate and severe stages, there was a significant correlation between the results of 24-2 and 10-2 tests. Conclusion: We concluded that 10-2 test could help confirm glaucoma diagnosis in glaucoma suspects missed by 24-2 test before resorting to the more expensive OCT. In early glaucoma, we noted that 10-2, as confirmed by OCT, was a beneficial addition to 24-2 test for precise measurement of the MD and detection of defects of the central 10° missed by 24-2 test, where more intense treatment should be considered to preserve the threatened central visual function. In moderate and severe cases, the role of 10-2 test was not as pivotal as in early cases, but still it was useful for assessment of residual central visual function in severe cases with absolute central 10° defects on 24-2 test for proper management.
Affiliation(s)
- Mansour Hassan
- Faculty of Medicine, Beni-Suef University, Beni-Suef, Egypt
47
Aagten-Murphy D, Bays PM. Independent working memory resources for egocentric and allocentric spatial information. PLoS Comput Biol 2019; 15:e1006563. [PMID: 30789899 PMCID: PMC6400418 DOI: 10.1371/journal.pcbi.1006563] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2018] [Revised: 03/05/2019] [Accepted: 10/15/2018] [Indexed: 12/25/2022] Open
Abstract
Visuospatial working memory enables us to maintain access to visual information for processing even when a stimulus is no longer present, due to occlusion, our own movements, or transience of the stimulus. Here we show that, when localizing remembered stimuli, the precision of spatial recall does not rely solely on memory for individual stimuli, but additionally depends on the relative distances between stimuli and visual landmarks in the surroundings. Across three separate experiments, we consistently observed a spatially selective improvement in the precision of recall for items located near a persistent landmark. While the results did not require that the landmark be visible throughout the memory delay period, it was essential that it was visible both during encoding and response. We present a simple model that can accurately capture human performance by considering relative (allocentric) spatial information as an independent localization estimate which degrades with distance and is optimally integrated with egocentric spatial information. Critically, allocentric information was encoded without cost to egocentric estimation, demonstrating independent storage of the two sources of information. Finally, when egocentric and allocentric estimates were put in conflict, the model successfully predicted the resulting localization errors. We suggest that the relative distance between stimuli represents an additional, independent spatial cue for memory recall. This cue information is likely to be critical for spatial localization in natural settings which contain an abundance of visual landmarks.
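The model described above is an instance of reliability-weighted (maximum-likelihood) cue combination: the allocentric estimate degrades with distance from the landmark and is fused with the egocentric estimate in proportion to each cue's precision. A minimal sketch follows; the linear variance growth and all numbers are illustrative assumptions, not the paper's fitted parameters:

```python
def combine_estimates(x_ego, var_ego, x_allo, var_allo):
    """Precision-weighted fusion of an egocentric and an allocentric
    location estimate; returns the combined mean and variance."""
    w_ego = 1.0 / var_ego
    w_allo = 1.0 / var_allo
    x = (w_ego * x_ego + w_allo * x_allo) / (w_ego + w_allo)
    var = 1.0 / (w_ego + w_allo)
    return x, var

def allocentric_variance(base_var, dist_to_landmark, slope=0.5):
    """Allocentric precision degrades with distance from the landmark
    (linear growth is an illustrative assumption)."""
    return base_var + slope * dist_to_landmark

# Item near a landmark: the allocentric cue is reliable, so the fused
# estimate is more precise than either cue alone.
x, var = combine_estimates(10.0, 4.0, 12.0, allocentric_variance(1.0, 2.0))
```

Note that the combined variance is always smaller than that of either cue, which is why recall was selectively better for items near a persistent landmark.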
Affiliation(s)
- David Aagten-Murphy
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Paul M. Bays
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
48
Linkage between retinal ganglion cell density and the nonuniform spatial integration across the visual field. Proc Natl Acad Sci U S A 2019; 116:3827-3836. [PMID: 30737290 PMCID: PMC6397585 DOI: 10.1073/pnas.1817076116] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The ability to integrate visual information over space is a fundamental component of human pattern vision. Regardless of whether it is for detecting luminance contrast or for recognizing objects in a cluttered scene, the position of the target in the visual field governs the size of a window within which visual information is integrated. Here we analyze the relationship between the topographic distribution of ganglion cell density and the nonuniform spatial integration across the visual field. The extent of spatial integration for luminance detection (Ricco's area) and object recognition (the crowding zone) is measured at various target locations. The number of retinal ganglion cells (RGCs) underlying Ricco's area or the crowding zone is estimated by computing the product of Ricco's area (or the crowding zone) and the RGC density at a given target location. We find a quantitative agreement between the behavioral data and the RGC density: the variation in the sampling density of RGCs across the human retina is closely matched to the variation in the extent of spatial integration required for either luminance detection or object recognition. Our empirical data, combined with the simulation results of computational models, suggest that a fixed number of RGCs subserves spatial integration of visual input, independent of visual-field location.
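The fixed-RGC-count estimate described in the abstract is just the product of an integration zone's area and the local ganglion-cell density. A toy illustration with hypothetical densities and areas (not the paper's measured values), chosen so that density falls with eccentricity while the zone grows, leaving the product roughly constant:

```python
def rgc_count(zone_area_deg2, rgc_density_per_deg2):
    # Estimated number of ganglion cells underlying an integration zone
    # (Ricco's area or a crowding zone) at a given visual-field location.
    return zone_area_deg2 * rgc_density_per_deg2

# Hypothetical numbers for illustration only:
parafoveal = rgc_count(zone_area_deg2=0.5, rgc_density_per_deg2=2000.0)
peripheral = rgc_count(zone_area_deg2=10.0, rgc_density_per_deg2=100.0)
```

Under these made-up inputs both locations yield the same cell count, which is the pattern the behavioral and anatomical data are reported to show.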
49
Wong YT, Feleppa T, Mohan A, Browne D, Szlawski J, Rosenfeld JV, Lowery A. CMOS stimulating chips capable of wirelessly driving 473 electrodes for a cortical vision prosthesis. J Neural Eng 2019; 16:026025. [PMID: 30690434 DOI: 10.1088/1741-2552/ab021b] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
OBJECTIVE Implantable neural stimulating and recording devices have the potential to restore capabilities such as vision or motor control to disabled patients, improving quality of life. Implants with a large number of stimulating electrodes typically rely on implanted batteries and/or subcutaneous wiring to meet their high power consumption and the high data throughput needed to address all electrodes with low latency. The use of batteries places severe limitations on the implant's size, usable duty cycle, and device longevity, while subcutaneous wiring increases the risk of infection and of mechanical damage due to device movement. APPROACH To overcome these limitations, we have designed and implemented a system that supports up to 473 implanted stimulating microelectrodes, all wirelessly powered and individually controlled by micropower application-specific integrated circuits (ASICs). MAIN RESULTS Each ASIC controls 43 electrodes and draws 3.18 mW of power when stimulating through 24 channels. We measured the linearity of the digital-to-analog converters (DACs) to be 0.21 LSB (integral non-linearity) and the variability in the timing of stimulation pulses across ASICs to be 172 ns. SIGNIFICANCE This work demonstrates the feasibility of a new low-power ASIC designed to be implanted in the visual cortex of humans. The fully implantable device will greatly reduce the risks of infection and of damage due to mechanical issues.
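The figures in the abstract fix the system arithmetic: 473 electrodes at 43 per ASIC implies 11 controller chips (an inference from the stated numbers, not a count quoted in the abstract), and 3.18 mW spread across 24 active channels gives the per-channel stimulation budget:

```python
TOTAL_ELECTRODES = 473
ELECTRODES_PER_ASIC = 43
ASIC_POWER_MW = 3.18      # per ASIC, stimulating through 24 channels
ACTIVE_CHANNELS = 24

# 473 / 43 divides exactly, implying 11 ASICs in the full system.
n_asics = TOTAL_ELECTRODES // ELECTRODES_PER_ASIC

# Per-channel power while stimulating: 3.18 mW / 24 channels.
power_per_channel_mw = ASIC_POWER_MW / ACTIVE_CHANNELS
```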
Affiliation(s)
- Yan T Wong
- Department of Electrical and Computer Systems Engineering, Monash University, Clayton, VIC 3800, Australia
- Department of Physiology, Monash University, Clayton, VIC 3800, Australia
50
Painter DR, Dwyer MF, Kamke MR, Mattingley JB. Stimulus-Driven Cortical Hyperexcitability in Individuals with Charles Bonnet Hallucinations. Curr Biol 2018; 28:3475-3480.e3. [DOI: 10.1016/j.cub.2018.08.058] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2018] [Revised: 08/10/2018] [Accepted: 08/29/2018] [Indexed: 01/23/2023]