1. Mantegna F, Olivetti E, Schwedhelm P, Baldauf D. Covariance-based decoding reveals a category-specific functional connectivity network for imagined visual objects. Neuroimage 2025; 311:121171. [PMID: 40139516] [DOI: 10.1016/j.neuroimage.2025.121171]
Abstract
The coordination of different brain regions is required for the visual imagery of complex objects (e.g., faces and places). Short-range connectivity within sensory areas is necessary to construct the mental image. Long-range connectivity between control and sensory areas is necessary to re-instantiate and maintain the mental image. While dynamic changes in functional connectivity are expected during visual imagery, it is unclear whether a category-specific network exists in which the strength and the spatial destination of the connections vary depending on the imagery target. In this magnetoencephalography study, we used a minimally constrained experimental paradigm wherein imagery categories were prompted using visual word cues only, and we decoded face versus place imagery based on the underlying functional connectivity patterns as estimated from the spatial covariance across brain regions. A subnetwork analysis further disentangled the contribution of different connections. The results show that face and place imagery can be decoded from both short-range and long-range connections. Overall, the results show that imagined object categories can be distinguished based on functional connectivity patterns observed in a category-specific network. Notably, the functional connectivity estimates rely on purely endogenous brain signals, suggesting that an external reference is not necessary to elicit such category-specific network dynamics.
Affiliation(s)
- Francesco Mantegna
- Department of Psychology, New York University, New York, NY 10003, USA; Department of Engineering Science, Oxford University, Oxford, Oxfordshire, United Kingdom; CIMeC - Center for Mind and Brain Sciences, Mattarello, TN 38100, Italy.
- Emanuele Olivetti
- NeuroInformatics Laboratory (NILab), Bruno Kessler Foundation (FBK), Mattarello, TN 38100, Italy; CIMeC - Center for Mind and Brain Sciences, Mattarello, TN 38100, Italy
- Philipp Schwedhelm
- Functional Imaging Laboratory, German Primate Center - Leibniz Institute for Primate Research, Goettingen, 37077, Germany; CIMeC - Center for Mind and Brain Sciences, Mattarello, TN 38100, Italy
- Daniel Baldauf
- CIMeC - Center for Mind and Brain Sciences, Mattarello, TN 38100, Italy
2. Liu J, Zhan M, Hajhajate D, Spagna A, Dehaene S, Cohen L, Bartolomeo P. Visual mental imagery in typical imagers and in aphantasia: A millimeter-scale 7-T fMRI study. Cortex 2025; 185:113-132. [PMID: 40031090] [DOI: 10.1016/j.cortex.2025.01.013]
Abstract
Most of us effortlessly describe visual objects, whether seen or remembered. Yet, around 4% of people report congenital aphantasia: they struggle to visualize objects despite being able to describe their visual appearance. What neural mechanisms create this disparity between subjective experience and objective performance? Aphantasia can provide novel insights into conscious processing and awareness. We used ultra-high field 7T fMRI to establish the neural circuits involved in visual mental imagery and perception, and to elucidate the neural mechanisms associated with the processing of internally generated visual information in the absence of imagery experience in congenital aphantasia. Ten typical imagers and 10 aphantasic individuals performed imagery and perceptual tasks in five domains: object shape, object color, written words, faces, and spatial relationships. In typical imagers, imagery tasks activated left-hemisphere frontoparietal areas, the relevant domain-preferring areas in the ventral temporal cortex partly overlapping with the perceptual domain-preferring areas, and a domain-general area in the left fusiform gyrus (the Fusiform Imagery Node). The results were valid for each individual participant. In aphantasic individuals, imagery activated similar visual areas, but there was reduced functional connectivity between the Fusiform Imagery Node and frontoparietal areas. Our results unveil the domain-general and domain-specific circuits of visual mental imagery, their functional disorganization in aphantasia, and support the general hypothesis that conscious visual experience - whether perceived or imagined - depends on the integrated activity of high-level visual cortex and frontoparietal networks.
Affiliation(s)
- Jianghao Liu
- Sorbonne Université, Inserm, CNRS, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, Paris, France; Dassault Systèmes, Vélizy-Villacoublay, France.
- Minye Zhan
- Sorbonne Université, Inserm, CNRS, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, Paris, France; Cognitive Neuroimaging Unit, Université Paris-Saclay, CEA, INSERM, CNRS ELR9003, NeuroSpin Center, Gif/Yvette, France
- Dounia Hajhajate
- Sorbonne Université, Inserm, CNRS, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, Paris, France; IRCCS SYNLAB SDN, Via E. Gianturco 113, Naples, Italy
- Alfredo Spagna
- Department of Psychology, Columbia University in the City of New York, NY, 10027, USA
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, Université Paris-Saclay, CEA, INSERM, CNRS ELR9003, NeuroSpin Center, Gif/Yvette, France; Collège de France, Université Paris-Sciences-Lettres (PSL), 11 Place Marcelin Berthelot, Paris, France
- Laurent Cohen
- Sorbonne Université, Inserm, CNRS, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, Paris, France; AP-HP, Hôpital de la Pitié Salpêtrière, Fédération de Neurologie, Paris, France
- Paolo Bartolomeo
- Sorbonne Université, Inserm, CNRS, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, Paris, France.
3. Zhao M, Xin Y, Deng H, Zuo Z, Wang X, Bi Y, Liu N. Object color knowledge representation occurs in the macaque brain despite the absence of a developed language system. PLoS Biol 2024; 22:e3002863. [PMID: 39466847] [PMCID: PMC11542842] [DOI: 10.1371/journal.pbio.3002863]
Abstract
Animals guide their behaviors through internal representations of the world in the brain. We aimed to understand how the macaque brain stores such general world knowledge, focusing on object color knowledge. Three functional magnetic resonance imaging (fMRI) experiments were conducted in macaque monkeys: viewing chromatic and achromatic gratings, viewing grayscale images of their familiar fruits and vegetables (e.g., a grayscale strawberry), and viewing true- and false-colored objects (e.g., a red strawberry and a green strawberry). We observed robust object knowledge representations in the color patches, especially the one located around TEO: activity patterns in these regions could classify grayscale pictures of objects based on their memory color, and response patterns could translate between chromatic grating viewing and grayscale object viewing (e.g., red grating to grayscale strawberry images), such that classifiers trained on chromatic grating viewing could successfully classify grayscale object images according to their memory colors. Our results provide direct positive evidence of object color memory in macaque monkeys. They indicate that perceptually grounded knowledge representation is a conserved memory mechanism and open a new avenue for studying this particular (semantic) memory representation in macaque models.
Affiliation(s)
- Minghui Zhao
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
- Yumeng Xin
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
- Haoyun Deng
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
- Zhentao Zuo
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
- Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
- Xiaoying Wang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Ning Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
4. Zhang B, Zhang R, Zhao J, Yang J, Xu S. The mechanism of human color vision and potential implanted devices for artificial color vision. Front Neurosci 2024; 18:1408087. [PMID: 38962178] [PMCID: PMC11221215] [DOI: 10.3389/fnins.2024.1408087]
Abstract
Vision plays a major role in perceiving external stimuli and information in our daily lives. The neural mechanism of color vision is complicated, involving the coordinated functions of a variety of cells, such as retinal cells and lateral geniculate nucleus cells, as well as multiple levels of the visual cortex. In this work, we review the history of experimental and theoretical studies on this issue, from the fundamental functions of the individual cells of the visual system to the coding in the transmission of neural signals and sophisticated brain processes at different levels. We discuss various hypotheses, models, and theories related to the color vision mechanism and present some suggestions for developing novel implanted devices that may help restore color vision in visually impaired people or introduce artificial color vision to those who need it.
Affiliation(s)
- Bingao Zhang
- Key Laboratory for the Physics and Chemistry of Nanodevices, Institute of Physical Electronics, Department of Electronics, Peking University, Beijing, China
- Rong Zhang
- Key Laboratory for the Physics and Chemistry of Nanodevices, Institute of Physical Electronics, Department of Electronics, Peking University, Beijing, China
- Jingjin Zhao
- Key Laboratory for the Physics and Chemistry of Nanodevices, Institute of Physical Electronics, Department of Electronics, Peking University, Beijing, China
- Jiarui Yang
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Shengyong Xu
- Key Laboratory for the Physics and Chemistry of Nanodevices, Institute of Physical Electronics, Department of Electronics, Peking University, Beijing, China
5. Hansen T, Conway BR. The color of fruits in photographs and still life paintings. J Vis 2024; 24:1. [PMID: 38691088] [PMCID: PMC11077907] [DOI: 10.1167/jov.24.5.1]
Abstract
Still life paintings comprise a wealth of data on visual perception. Prior work has shown that the color statistics of objects show a marked bias for warm colors. Here, we ask about the relative chromatic contrast of these object-associated colors compared with background colors in still life paintings. We reasoned that, owing to the memory color effect, whereby the color of familiar objects is perceived as more saturated, warm colors would be relatively more saturated than cool colors in still life paintings as compared with photographs. We analyzed color in 108 slides of still life paintings of fruit from the teaching slide collection of the Fogg University Art Museum and in 41 color-calibrated photographs of fruit from the McGill data set. The results show that the relatively higher chromatic contrast of warm colors was greater for paintings than for photographs, consistent with the hypothesis.
Affiliation(s)
- Thorsten Hansen
- Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
- Bevil R Conway
- Laboratory of Sensorimotor Research, National Institutes of Health, Bethesda, MD, USA
6. Spagna A, Heidenry Z, Miselevich M, Lambert C, Eisenstadt BE, Tremblay L, Liu Z, Liu J, Bartolomeo P. Visual mental imagery: Evidence for a heterarchical neural architecture. Phys Life Rev 2024; 48:113-131. [PMID: 38217888] [DOI: 10.1016/j.plrev.2023.12.012]
Abstract
Theories of Visual Mental Imagery (VMI) emphasize the processes of retrieval, modification, and recombination of sensory information from long-term memory. Yet, only a few studies have focused on the behavioral mechanisms and neural correlates supporting VMI of stimuli from different semantic domains. Therefore, we currently have a limited understanding of how the brain generates and maintains mental representations of colors, faces, shapes - to name a few. Such an undetermined scenario renders unclear the organizational structure of neural circuits supporting VMI, including the role of the early visual cortex. We aimed to fill this gap by reviewing the scientific literature on five semantic domains: visuospatial, face, color, shape, and letter imagery. Linking theory to evidence from over 60 different experimental designs, this review highlights three main points. First, there is no consistent activity in the early visual cortex across all VMI domains, contrary to the prediction of the dominant model. Second, there is consistent activity of the frontoparietal networks and the left hemisphere's fusiform gyrus during voluntary VMI, irrespective of the semantic domain investigated. We propose that these structures are part of a domain-general VMI sub-network. Third, domain-specific information engages specific regions of the ventral and dorsal cortical visual pathways. These regions partly overlap with those found in visual perception studies (e.g., the fusiform face area for face imagery; the lingual gyrus for color imagery). Altogether, the reviewed evidence suggests the existence of domain-general and domain-specific mechanisms of VMI selectively engaged by stimulus-specific properties (e.g., colors or faces). These mechanisms would be supported by an organizational structure mixing vertical and horizontal connections (a heterarchy) between sub-networks for specific stimulus domains. Such a heterarchical organization of VMI makes different predictions from current models of VMI as reversed perception. Our conclusions set the stage for future research, which should aim to characterize the spatiotemporal dynamics and interactions among the key regions of this architecture that give rise to visual mental images.
Affiliation(s)
- Alfredo Spagna
- Department of Psychology, Columbia University in the City of New York, NY, 10027, USA.
- Zoe Heidenry
- Department of Psychology, Columbia University in the City of New York, NY, 10027, USA
- Chloe Lambert
- Department of Psychology, Columbia University in the City of New York, NY, 10027, USA
- Laura Tremblay
- Department of Psychology, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California; Department of Neurology, VA Northern California Health Care System, Martinez, California
- Zixin Liu
- Department of Human Development, Teachers College, Columbia University, NY, 10027, USA
- Jianghao Liu
- Sorbonne Université, Inserm, CNRS, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, Paris, France; Dassault Systèmes, Vélizy-Villacoublay, France
- Paolo Bartolomeo
- Sorbonne Université, Inserm, CNRS, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, Paris, France
7. Takashima A, Carota F, Schoots V, Redmann A, Jehee J, Indefrey P. Tomatoes Are Red: The Perception of Achromatic Objects Elicits Retrieval of Associated Color Knowledge. J Cogn Neurosci 2024; 36:24-45. [PMID: 37847811] [DOI: 10.1162/jocn_a_02068]
Abstract
When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named or not. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented as black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri ("Human V4") correlated with a representational model encoding the red-green distinction weighted by the importance of the color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself has to be named.
Affiliation(s)
- Atsuko Takashima
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Francesca Carota
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Vincent Schoots
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Heinrich Heine University Düsseldorf, Germany
- Janneke Jehee
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Peter Indefrey
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Heinrich Heine University Düsseldorf, Germany
8. Taylor J, Xu Y. Comparing the Dominance of Color and Form Information across the Human Ventral Visual Pathway and Convolutional Neural Networks. J Cogn Neurosci 2023; 35:816-840. [PMID: 36877074] [PMCID: PMC11283826] [DOI: 10.1162/jocn_a_01979]
Abstract
Color and form information can be decoded in every region of the human ventral visual hierarchy, and at every layer of many convolutional neural networks (CNNs) trained to recognize objects, but how does the coding strength of these features vary over processing? Here, we characterize for these features both their absolute coding strength (how strongly each feature is represented independent of the other feature) and their relative coding strength (how strongly each feature is encoded relative to the other), which could constrain how well a feature can be read out by downstream regions across variation in the other feature. To quantify relative coding strength, we define a measure called the form dominance index that compares the relative influence of color and form on the representational geometry at each processing stage. We analyze brain and CNN responses to stimuli varying based on color and either a simple form feature, orientation, or a more complex form feature, curvature. We find that while the brain and CNNs largely differ in how the absolute coding strength of color and form varies over processing, comparing them in terms of their relative emphasis of these features reveals a striking similarity: for both the brain and for CNNs trained for object recognition (but not for untrained CNNs), orientation information is increasingly de-emphasized, and curvature information increasingly emphasized, relative to color information over the course of processing, with corresponding processing stages showing largely similar values of the form dominance index.
9. Aseyev N. Perception of color in primates: A conceptual color neurons hypothesis. Biosystems 2023; 225:104867. [PMID: 36792004] [DOI: 10.1016/j.biosystems.2023.104867]
Abstract
Perception of color by humans and other primates is a complex problem, studied by neurophysiology, psychophysiology, psycholinguistics, and even philosophy. Being mostly trichromats, simian primates have three types of opsin proteins, expressed in cone neurons in the eye, which allow for the sensing of color as the physical wavelength of light. Further, in the neural networks of the retina, the coding principle changes from three types of sensor proteins to two opponent channels: the activity of one type of neuron encodes the evolutionarily ancient blue-yellow axis of color stimuli, while a second, evolutionarily more recent channel encodes the red-green axis of color stimuli. Both color channels are distinctive in neural organization at all levels from the eye to the neocortex, where the perception of color (as philosophical qualia) is thought to emerge from the activity of certain neuron ensembles. Here, using data from neurophysiology as a starting point, we propose a hypothesis on how the perception of color may be encoded in the activity of certain neurons in the neocortex. These conceptual neurons, herein referred to as 'color neurons', code only the hue of the color of a visual stimulus, similar to the place cells and number neurons already described in primate brains. A case study with preliminary, but direct, evidence for the existence of conceptual color neurons in the human brain was published in 2008. We predict that upcoming studies in non-human primates will be more extensive and provide a more detailed description of conceptual color neurons.
Affiliation(s)
- Nikolay Aseyev
- Institute of Higher Nervous Activity and Neurophysiology, RAS, Butlerova 5A, Moscow 117485, Russian Federation.
10. Pennock IML, Racey C, Allen EJ, Wu Y, Naselaris T, Kay KN, Franklin A, Bosten JM. Color-biased regions in the ventral visual pathway are food selective. Curr Biol 2023; 33:134-146.e4. [PMID: 36574774] [PMCID: PMC9976629] [DOI: 10.1016/j.cub.2022.11.063]
Abstract
Color-biased regions have been found between face- and place-selective areas in the ventral visual pathway. To investigate the function of these color-biased regions in a pathway responsible for object recognition, we analyzed the Natural Scenes Dataset (NSD), a large 7T fMRI dataset from 8 participants who each viewed up to 30,000 trials of images of colored natural scenes over more than 30 scanning sessions. In a whole-brain analysis, we correlated the average color saturation of the images with voxel responses, revealing color-biased regions that diverge into two streams, beginning in V4 and extending medially and laterally relative to the fusiform face area in both hemispheres. We drew regions of interest (ROIs) for the two streams and found that the images that evoked the largest responses in each ROI had certain characteristics: they contained food and circular objects, had warmer hues, and were higher in color saturation. Further analyses showed that food images were the strongest predictor of activity in these regions, implying the existence of medial and lateral ventral food streams (VFSs). We found that color also contributed independently to voxel responses, suggesting that the medial and lateral VFSs use both color and form to represent food. Our findings illustrate how high-resolution datasets such as the NSD can be used to disentangle the multifaceted contributions of many visual features to the neural representations of natural scenes.
Affiliation(s)
- Ian M L Pennock
- School of Psychology, University of Sussex, Falmer BN1 9QH, UK.
- Chris Racey
- School of Psychology, University of Sussex, Falmer BN1 9QH, UK
- Emily J Allen
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA; Department of Psychology, University of Minnesota, Minneapolis, MN 55455, USA
- Yihan Wu
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Thomas Naselaris
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Kendrick N Kay
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Anna Franklin
- School of Psychology, University of Sussex, Falmer BN1 9QH, UK
- Jenny M Bosten
- School of Psychology, University of Sussex, Falmer BN1 9QH, UK.
11. Rapid Automatized Picture Naming in an Outpatient Concussion Center: Quantitative Eye Movements during the Mobile Universal Lexicon Evaluation System (MULES) Test. Clinical and Translational Neuroscience 2022. [DOI: 10.3390/ctn6030018]
Abstract
Number and picture rapid automatized naming (RAN) tests are useful sideline diagnostic tools. The main outcome measure of these RAN tests is the completion time, which is prolonged after a concussion, yet yields no information about eye movement behavior. We investigated eye movements during a digitized Mobile Universal Lexicon Evaluation System (MULES) test of rapid picture naming. A total of 23 participants with a history of concussion and 50 control participants performed MULES testing with simultaneous eye tracking. Test times were longer in participants with a concussion (32.4 s [95% CI 30.4, 35.8] vs. 26.9 s [95% CI 25.9, 28.0], t = 6.1). The participants with a concussion made more saccades per picture than the controls (3.6 [95% CI 3.3, 4.1] vs. 2.7 [95% CI 2.5, 3.0]), and this increase was correlated with longer MULES times (r = 0.46, p = 0.026). The inter-saccadic intervals (ISI) did not differ between the groups, nor did they correlate with the test times. Following a concussion, eye movement behavior differs during number versus picture RAN performance. Prior studies have shown that ISI prolongation is the key finding for number-based RAN tests, whereas this study shows a primary finding of an increased number of saccades per picture with a picture-based RAN test. Number-based and picture-based RAN tests may be complementary in concussion detection, as they may detect different injury effects or compensatory strategies.
12. Hermann KL, Singh SR, Rosenthal IA, Pantazis D, Conway BR. Temporal dynamics of the neural representation of hue and luminance polarity. Nat Commun 2022; 13:661. [PMID: 35115511] [PMCID: PMC8814185] [DOI: 10.1038/s41467-022-28249-0]
Abstract
Hue and luminance contrast are basic visual features. Here we use multivariate analyses of magnetoencephalography data to investigate the timing of the neural computations that extract them, and whether they depend on common neural circuits. We show that hue and luminance-contrast polarity can be decoded from MEG data and, with lower accuracy, both features can be decoded across changes in the other feature. These results are consistent with the existence of both common and separable neural mechanisms. The decoding time course is earlier and more temporally precise for luminance polarity than hue, a result that does not depend on task, suggesting that luminance contrast is an updating signal that separates visual events. Meanwhile, cross-temporal generalization is slightly greater for representations of hue compared to luminance polarity, providing a neural correlate of the preeminence of hue in perceptual grouping and memory. Finally, decoding of luminance polarity varies depending on the hues used to obtain training and testing data. The pattern of results is consistent with observations that luminance contrast is mediated by both L-M and S cone sub-cortical mechanisms.
Affiliation(s)
- Katherine L Hermann
- Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, MD, 20892, USA
- Department of Psychology, Stanford University, Stanford, CA, 94305, USA
- Shridhar R Singh
- Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, MD, 20892, USA
- Isabelle A Rosenthal
- Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, MD, 20892, USA
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, 91125, USA
| | - Dimitrios Pantazis
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Bevil R Conway
- Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, MD, 20892, USA.
- National Institute of Mental Health, Bethesda, MD, 20892, USA.
| |
13
Taylor J, Xu Y. Representation of Color, Form, and their Conjunction across the Human Ventral Visual Pathway. Neuroimage 2022; 251:118941. [PMID: 35122966] [PMCID: PMC9014861] [DOI: 10.1016/j.neuroimage.2022.118941]
Abstract
Despite decades of research, our understanding of the relationship
between color and form processing in the primate ventral visual pathway remains
incomplete. Using fMRI multivoxel pattern analysis, we examined coding of color
and form, using a simple form feature (orientation) and a mid-level form feature
(curvature), in human ventral visual processing regions. We found that both
color and form could be decoded from activity in early visual areas V1 to V4, as
well as in the posterior color-selective region and shape-selective regions in
ventral and lateral occipitotemporal cortex defined based on their univariate
selectivity to color or shape, respectively (the central color region only
showed color but not form decoding). Meanwhile, decoding biases towards one
feature or the other existed in the color- and shape-selective regions,
consistent with their univariate feature selectivity reported in past studies.
Additional extensive analyses show that while all these regions contain
independent (linearly additive) coding for both features, several early visual
regions also encode the conjunction of color and the simple, but not the
complex, form feature in a nonlinear, interactive manner. Taken together, the
results show that color and form are encoded in a biased, distributed, and largely
independent manner across ventral visual regions in the human brain.
Affiliation(s)
- JohnMark Taylor
- Visual Inference Laboratory, Zuckerman Institute, Columbia University.
- Yaoda Xu
- Department of Psychology, Yale University
14
Lowndes R, Molz B, Warriner L, Herbik A, de Best PB, Raz N, Gouws A, Ahmadi K, McLean RJ, Gottlob I, Kohl S, Choritz L, Maguire J, Kanowski M, Käsmann-Kellner B, Wieland I, Banin E, Levin N, Hoffmann MB, Morland AB, Baseler HA. Structural Differences Across Multiple Visual Cortical Regions in the Absence of Cone Function in Congenital Achromatopsia. Front Neurosci 2021; 15:718958. [PMID: 34720857] [PMCID: PMC8551799] [DOI: 10.3389/fnins.2021.718958]
Abstract
Most individuals with congenital achromatopsia (ACHM) carry mutations that affect the retinal phototransduction pathway of cone photoreceptors, fundamental to both high acuity vision and colour perception. As the central fovea is occupied solely by cones, achromats have an absence of retinal input to the visual cortex and a small central area of blindness. Additionally, those with complete ACHM have no colour perception, and colour processing regions of the ventral cortex also lack typical chromatic signals from the cones. This study examined the cortical morphology (grey matter volume, cortical thickness, and cortical surface area) of multiple visual cortical regions in ACHM (n = 15) compared to normally sighted controls (n = 42) to determine the cortical changes that are associated with the retinal characteristics of ACHM. Surface-based morphometry was applied to T1-weighted MRI in atlas-defined early, ventral and dorsal visual regions of interest. Reduced grey matter volume in V1, V2, V3, and V4 was found in ACHM compared to controls, driven by a reduction in cortical surface area as there was no significant reduction in cortical thickness. Cortical surface area (but not thickness) was reduced in a wide range of areas (V1, V2, V3, TO1, V4, and LO1). Reduction in early visual areas with large foveal representations (V1, V2, and V3) suggests that the lack of foveal input to the visual cortex was a major driving factor in morphological changes in ACHM. However, the significant reduction in ventral area V4 coupled with the lack of difference in dorsal areas V3a and V3b suggest that deprivation of chromatic signals to visual cortex in ACHM may also contribute to changes in cortical morphology. This research shows that the congenital lack of cone input to the visual cortex can lead to widespread structural changes across multiple visual areas.
Affiliation(s)
- Rebecca Lowndes
- Department of Psychology, University of York, York, United Kingdom
- York Neuroimaging Centre, Department of Psychology, University of York, York, United Kingdom
- Barbara Molz
- Department of Psychology, University of York, York, United Kingdom
- Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Lucy Warriner
- Department of Psychology, University of York, York, United Kingdom
- Anne Herbik
- Department of Ophthalmology, University Hospital, Otto von Guericke University, Magdeburg, Germany
- Pieter B. de Best
- MRI Unit, Department of Neurology, Hadassah Medical Center, Jerusalem, Israel
- Noa Raz
- MRI Unit, Department of Neurology, Hadassah Medical Center, Jerusalem, Israel
- Andre Gouws
- York Neuroimaging Centre, Department of Psychology, University of York, York, United Kingdom
- Khazar Ahmadi
- Department of Ophthalmology, University Hospital, Otto von Guericke University, Magdeburg, Germany
- Rebecca J. McLean
- University of Leicester Ulverscroft Eye Unit, University of Leicester, Leicester Royal Infirmary, Leicester, United Kingdom
- Irene Gottlob
- University of Leicester Ulverscroft Eye Unit, University of Leicester, Leicester Royal Infirmary, Leicester, United Kingdom
- Susanne Kohl
- Molecular Genetics Laboratory, Institute for Ophthalmic Research, Centre for Ophthalmology, University Clinics Tübingen, Tübingen, Germany
- Lars Choritz
- Department of Ophthalmology, University Hospital, Otto von Guericke University, Magdeburg, Germany
- John Maguire
- School of Optometry and Vision Sciences, University of Bradford, Bradford, United Kingdom
- Martin Kanowski
- Department of Neurology, University Hospital, Otto von Guericke University, Magdeburg, Germany
- Barbara Käsmann-Kellner
- Department of Ophthalmology, Saarland University Hospital and Medical Faculty of the Saarland University Hospital, Homburg, Germany
- Ilse Wieland
- Department of Molecular Genetics, Institute for Human Genetics, University Hospital, Otto von Guericke University, Magdeburg, Germany
- Eyal Banin
- Degenerative Diseases of the Retina Unit, Department of Ophthalmology, Hadassah Medical Center, Jerusalem, Israel
- Netta Levin
- MRI Unit, Department of Neurology, Hadassah Medical Center, Jerusalem, Israel
- Michael B. Hoffmann
- Department of Ophthalmology, University Hospital, Otto von Guericke University, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Magdeburg, Germany
- Antony B. Morland
- Department of Psychology, University of York, York, United Kingdom
- York Biomedical Research Institute, University of York, York, United Kingdom
- Heidi A. Baseler
- Department of Psychology, University of York, York, United Kingdom
- York Biomedical Research Institute, University of York, York, United Kingdom
- Hull York Medical School, University of York, York, United Kingdom
15
Abstract
Selectivity for many basic properties of visual stimuli, such as orientation, is thought to be organized at the scale of cortical columns, making it difficult or impossible to measure directly with noninvasive human neuroscience measurement. However, computational analyses of neuroimaging data have shown that selectivity for orientation can be recovered by considering the pattern of response across a region of cortex. This suggests that computational analyses can reveal representation encoded at a finer spatial scale than is implied by the spatial resolution limits of measurement techniques. This potentially opens up the possibility to study a much wider range of neural phenomena that are otherwise inaccessible through noninvasive measurement. However, as we review in this article, a large body of evidence suggests an alternative hypothesis to this superresolution account: that orientation information is available at the spatial scale of cortical maps and thus easily measurable at the spatial resolution of standard techniques. In fact, a population model shows that this orientation information need not even come from single-unit selectivity for orientation tuning, but instead can result from population selectivity for spatial frequency. Thus, a categorical error of interpretation can result whereby orientation selectivity can be confused with spatial frequency selectivity. This is similarly problematic for the interpretation of results from numerous studies of more complex representations and cognitive functions that have built upon the computational techniques used to reveal stimulus orientation. We suggest in this review that these interpretational ambiguities can be avoided by treating computational analyses as models of the neural processes that give rise to measurement. 
Building on the modeling tradition in vision science, and considering whether population models meet a set of core criteria, is important for creating the foundation for a cumulative and replicable approach to making valid inferences from human neuroscience measurements. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Justin L Gardner
- Department of Psychology, Stanford University, Stanford, California 94305, USA
- Elisha P Merriam
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland 20892, USA
16
Conway J, Moretti L, Nolan-Kenney R, Akhand O, Serrano L, Kurzweil A, Rucker JC, Galetta SL, Balcer LJ. Sleep-deprived residents and rapid picture naming performance using the Mobile Universal Lexicon Evaluation System (MULES) test. eNeurologicalSci 2021; 22:100323. [PMID: 33604461] [PMCID: PMC7876539] [DOI: 10.1016/j.ensci.2021.100323]
Abstract
Objective The Mobile Universal Lexicon Evaluation System (MULES) is a rapid picture naming task that captures extensive brain networks involving neurocognitive, afferent/efferent visual, and language pathways. Many of the factors captured by MULES may be abnormal in sleep-deprived residents. This study investigates the effect of sleep deprivation in post-call residents on MULES performance. Methods MULES, consisting of 54 color photographs, was administered to a cohort of neurology residents taking 24-hour in-hospital call (n = 18) and a group of similar-aged controls not taking call (n = 18). Differences in times between baseline and follow-up MULES scores were compared between the two groups. Results MULES time change in call residents was significantly worse (slower) from baseline (mean 1.2 s slower) compared to non-call controls (mean 11.2 s faster) (P < 0.001, Wilcoxon rank sum test). The change in MULES time from baseline was significantly correlated to the change in subjective level of sleepiness for call residents and to the amount of sleep obtained in the 24 h prior to follow-up testing for the entire cohort. For call residents, the duration of sleep obtained during call did not significantly correlate with change in MULES scores. There was no significant correlation between MULES change and sleep quality questionnaire score for the entire cohort. Conclusion The MULES is a novel test for effects of sleep deprivation on neurocognition and vision pathways. Sleep deprivation significantly worsens MULES performance. Subjective sleepiness may also affect MULES performance. MULES may serve as a useful performance assessment tool for sleep deprivation in residents.
Affiliation(s)
- Jenna Conway
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA
- Luke Moretti
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA
- Rachel Nolan-Kenney
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA; Department of Population Health, New York University Grossman School of Medicine, New York, NY, USA
- Omar Akhand
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA
- Liliana Serrano
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA
- Arielle Kurzweil
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA
- Janet C Rucker
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA; Department of Ophthalmology, New York University Grossman School of Medicine, New York, NY, USA
- Steven L Galetta
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA; Department of Ophthalmology, New York University Grossman School of Medicine, New York, NY, USA
- Laura J Balcer
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA; Department of Ophthalmology, New York University Grossman School of Medicine, New York, NY, USA; Department of Population Health, New York University Grossman School of Medicine, New York, NY, USA
17
Abstract
Color is a fundamental aspect of normal visual experience. This chapter provides an overview of the role of color in human behavior, a survey of current knowledge regarding the genetic, retinal, and neural mechanisms that enable color vision, and a review of inherited and acquired defects of color vision including a discussion of diagnostic tests.
Affiliation(s)
- Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, United States.
- Bevil R Conway
- Laboratory of Sensorimotor Research, National Eye Institute, National Institute of Mental Health, Bethesda, MD, United States
18
Neural representations of perceptual color experience in the human ventral visual pathway. Proc Natl Acad Sci U S A 2020; 117:13145-13150. [PMID: 32457156] [DOI: 10.1073/pnas.1911041117]
Abstract
Color is a perceptual construct that arises from neural processing in hierarchically organized cortical visual areas. Previous research, however, often failed to distinguish between neural responses driven by stimulus chromaticity versus perceptual color experience. An unsolved question is whether the neural responses at each stage of cortical processing represent a physical stimulus or a color we see. The present study dissociated the perceptual domain of color experience from the physical domain of chromatic stimulation at each stage of cortical processing by using a switch rivalry paradigm that caused the color percept to vary over time without changing the retinal stimulation. Using functional MRI (fMRI) and a model-based encoding approach, we found that neural representations in higher visual areas, such as V4 and VO1, corresponded to the perceived color, whereas responses in early visual areas V1 and V2 were modulated by the chromatic light stimulus rather than color perception. Our findings support a transition in the ascending human ventral visual pathway, from a representation of the chromatic stimulus at the retina in early visual areas to responses that correspond to perceptually experienced colors in higher visual areas.
19
Wang X, Men W, Gao J, Caramazza A, Bi Y. Two Forms of Knowledge Representations in the Human Brain. Neuron 2020; 107:383-393.e5. [PMID: 32386524] [DOI: 10.1016/j.neuron.2020.04.010]
Abstract
Sensory experience shapes what and how knowledge is stored in the brain: our knowledge about the color of roses depends in part on the activity of color-responsive neurons based on experiences of seeing roses. We compared the brain basis of color knowledge in congenitally (or early) blind individuals, whose color knowledge can only be obtained through language descriptions and/or cognitive inference, to that of sighted individuals whose color knowledge benefits from both sensory experience and language. We found that some regions support color knowledge only in the sighted, whereas a region in the left dorsal anterior temporal lobe supports object-color knowledge in both the blind and sighted groups, indicating the existence of a sensory-independent knowledge coding system in both groups. Thus, there are (at least) two forms of object knowledge representations in the human brain: sensory-derived and language- and cognition-derived knowledge, supported by different brain systems.
Affiliation(s)
- Xiaoying Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China
- Weiwei Men
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Jiahong Gao
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China; McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Alfonso Caramazza
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA; Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China
20
Goddard E, Mullen KT. fMRI representational similarity analysis reveals graded preferences for chromatic and achromatic stimulus contrast across human visual cortex. Neuroimage 2020; 215:116780. [PMID: 32276074] [DOI: 10.1016/j.neuroimage.2020.116780]
Abstract
Human visual cortex is partitioned into different functional areas that, from lower to higher, become increasingly selective and responsive to complex feature dimensions. Here we use a Representational Similarity Analysis (RSA) of fMRI-BOLD signals to make quantitative comparisons across LGN and multiple visual areas of the low-level stimulus information encoded in the patterns of voxel responses. Our stimulus set was picked to target the four functionally distinct subcortical channels that provide input to visual cortex from the LGN: two achromatic sinewave stimuli that favor the responses of the high-temporal magnocellular and high-spatial parvocellular pathways, respectively, and two chromatic stimuli isolating the L/M-cone opponent and S-cone opponent pathways, respectively. Each stimulus type had three spatial extents to sample both foveal and para-central visual field. With the RSA, we compare quantitatively the response specializations for individual stimuli and combinations of stimuli in each area and how these change across visual cortex. First, our results replicate the known response preferences for motion/flicker in the dorsal visual areas. In addition, we identify two distinct gradients along the ventral visual stream. In the early visual areas (V1-V3), the strongest differential representation is for the achromatic high spatial frequency stimuli, suitable for form vision, and a very weak differentiation of chromatic versus achromatic contrast. Emerging in ventral occipital areas (V4, VO1 and VO2), however, is an increasingly strong separation of the responses to chromatic versus achromatic contrast and a decline in the high spatial frequency representation. These gradients provide new insight into how visual information is transformed across the visual cortex.
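As a minimal sketch of the RSA logic described above (not the authors' analysis), one can build a representational dissimilarity matrix (RDM) per region from condition-by-voxel response patterns and then compare RDMs across regions with a rank correlation. All sizes, noise levels, and variable names below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical voxel patterns: 8 stimulus conditions x 100 voxels per "area".
# Both areas inherit the same latent stimulus structure through different
# random projections, mimicking two regions encoding related information.
n_cond, n_vox = 8, 100
shared = rng.normal(size=(n_cond, 5))  # latent stimulus structure
area1 = shared @ rng.normal(size=(5, n_vox)) + 0.5 * rng.normal(size=(n_cond, n_vox))
area2 = shared @ rng.normal(size=(5, n_vox)) + 0.5 * rng.normal(size=(n_cond, n_vox))

def rdm(patterns):
    # RDM: 1 - Pearson correlation between condition patterns.
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    # Off-diagonal upper triangle, the usual vector form of an RDM.
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

def spearman(a, b):
    # Spearman rank correlation: Pearson correlation of the ranks.
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return float(np.corrcoef(ra, rb)[0, 1])

sim = spearman(upper(rdm(area1)), upper(rdm(area2)))
print(f"RDM similarity between areas (Spearman): {sim:.2f}")
```

Because the two toy areas share latent structure, their RDMs correlate strongly; comparing such RDM similarities across a set of regions is the quantitative comparison the abstract describes.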
Affiliation(s)
- Erin Goddard
- McGill Vision Research, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC, H3G1A4, Canada
- Kathy T Mullen
- McGill Vision Research, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC, H3G1A4, Canada
21
Rapid picture naming in Parkinson's disease using the Mobile Universal Lexicon Evaluation System (MULES). J Neurol Sci 2020; 410:116680. [PMID: 31945624] [DOI: 10.1016/j.jns.2020.116680]
Abstract
OBJECTIVE The Mobile Universal Lexicon Evaluation System (MULES) is a test of rapid picture naming that captures extensive brain networks, including cognitive, language and afferent/efferent visual pathways. MULES performance is slower in concussion and multiple sclerosis, conditions in which vision dysfunction is common. Visual aspects captured by the MULES may be impaired in Parkinson's disease (PD) including color discrimination, object recognition, visual processing speed, and convergence. The purpose of this study was to compare MULES time scores for a cohort of PD patients with those for a control group of participants of similar age. We also sought to examine learning effects for the MULES by comparing scores for two consecutive trials within the patient and control groups. METHODS MULES consists of 54 colored pictures (fruits, animals, random objects). The test was administered in a cohort of PD patients and in a group of similar aged controls. Wilcoxon rank-sum tests were used to determine statistical significance for differences in MULES time scores between PD patients and controls. Spearman rank-correlation coefficients were calculated to examine the relation between MULES time scores and PD motor symptom severity (UPDRS). Learning effects were assessed using Wilcoxon rank-sum tests. RESULTS Among 51 patients with PD (median age 70 years, range 52-82) and 20 disease-free control participants (median age 67 years, range 51-90), MULES scores were significantly slower (worse performance) in PD patients (median 63.2 s, range 37.3-296.3) vs. controls (median 53.9 s, range 37.5-128.6, P = .03, Wilcoxon rank-sum test). Slower MULES times were associated with increased motor symptom severity as measured by the Unified Parkinson's Disease Rating Scale, Section III (rs = 0.37, P = .02). Learning effects were greater among patients with PD (median improvement of 14.8 s between two MULES trials) compared to controls (median 7.4 s, P = .004). 
CONCLUSION The MULES is a complex test of rapid picture naming that captures numerous brain pathways including an extensive visual network. MULES performance is slower in patients with PD and our study suggests an association with the degree of motor impairment. Future studies will determine the relation of MULES time scores to other modalities that test visual function and structure in PD.
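The group comparison above rests on the Wilcoxon rank-sum (Mann-Whitney) test. As an illustrative sketch only, with simulated completion times rather than the study's data, the statistic and its large-sample normal-approximation p-value can be computed as:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical MULES completion times (seconds): patients slower on average.
patients = rng.normal(loc=65, scale=10, size=51)
controls = rng.normal(loc=54, scale=8, size=20)

def rank_sum_test(x, y):
    """Wilcoxon rank-sum test, two-sided, via the normal approximation."""
    pooled = np.concatenate([x, y])
    ranks = pooled.argsort().argsort() + 1.0  # ranks 1..n (no ties here)
    r1 = ranks[: len(x)].sum()                # rank sum of the first sample
    n1, n2 = len(x), len(y)
    u = r1 - n1 * (n1 + 1) / 2                # Mann-Whitney U statistic
    mu = n1 * n2 / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sd
    p = 1 - math.erf(abs(z) / math.sqrt(2))   # two-sided tail probability
    return z, p

z, p = rank_sum_test(patients, controls)
print(f"z = {z:.2f}, p = {p:.4g}")
```

With an 11-second mean difference and these sample sizes, the simulated test comes out clearly significant; in practice one would use a library routine (e.g. a SciPy rank-sum function) rather than this hand-rolled version, which is shown only to make the computation explicit.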
22
23
Lalwani P, Brang D. Stochastic resonance model of synaesthesia. Philos Trans R Soc Lond B Biol Sci 2019; 374:20190029. [PMID: 31630652] [DOI: 10.1098/rstb.2019.0029]
Abstract
In synaesthesia, stimulation of one sensory modality evokes additional experiences in another modality (e.g. sounds evoking colours). Along with these cross-sensory experiences, there are several cognitive and perceptual differences between synaesthetes and non-synaesthetes. For example, synaesthetes demonstrate enhanced imagery, increased cortical excitability and greater perceptual sensitivity in the concurrent modality. Previous models suggest that synaesthesia results from increased connectivity between corresponding sensory regions or disinhibited feedback from higher cortical areas. While these models explain how one sense can evoke qualitative experiences in another, they fail to predict the broader phenotype of differences observed in synaesthetes. Here, we propose a novel model of synaesthesia based on the principles of stochastic resonance. Specifically, we hypothesize that synaesthetes have greater neural noise in sensory regions, which allows pre-existing multisensory pathways to elicit supra-threshold activation (i.e. synaesthetic experiences). The strengths of this model are (a) it predicts the broader cognitive and perceptual differences in synaesthetes, (b) it provides a unified framework linking developmental and induced synaesthesias, and (c) it explains why synaesthetic associations are inconsistent at onset but stabilize over time. We review research consistent with this model and propose future studies to test its limits. This article is part of a discussion meeting issue 'Bridging senses: novel insights from synaesthesia'.
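The core stochastic-resonance claim, that a subthreshold signal is transmitted best at an intermediate noise level, can be illustrated with a toy threshold unit. All parameters here are hypothetical and unrelated to the model's neural details:

```python
import numpy as np

rng = np.random.default_rng(3)

# Subthreshold sinusoidal drive: peak 0.8 never crosses the threshold of 1.0
# on its own, so any transmission must be noise-assisted.
t = np.linspace(0.0, 10.0, 5000)
signal = 0.8 * np.sin(2 * np.pi * t)
threshold = 1.0

def transmission(noise_sd, n_rep=20):
    """Correlation between the input and the mean threshold-crossing output."""
    out = np.zeros_like(t)
    for _ in range(n_rep):
        out += (signal + rng.normal(scale=noise_sd, size=t.size) > threshold)
    out /= n_rep
    if out.std() == 0:  # no crossings at all -> nothing transmitted
        return 0.0
    return float(np.corrcoef(signal, out)[0, 1])

low, mid, high = (transmission(sd) for sd in (0.05, 0.5, 5.0))
print(f"low noise: {low:.2f}, intermediate: {mid:.2f}, high noise: {high:.2f}")
```

Transmission is near zero with too little noise (the signal never crosses threshold), degrades with too much noise (crossings become random), and peaks in between, which is the inverted-U signature the model uses to link elevated neural noise to supra-threshold synaesthetic activation.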
Affiliation(s)
- Poortata Lalwani
- Department of Psychology, University of Michigan, 530 Church Street, Ann Arbor, MI 48109, USA
- David Brang
- Department of Psychology, University of Michigan, 530 Church Street, Ann Arbor, MI 48109, USA
24
Seay M, Akhand O, Galetta MS, Cobbs L, Hasanaj L, Amorapanth P, Rizzo JR, Nolan R, Serrano L, Rucker JC, Galetta SL, Balcer LJ. Mobile Universal Lexicon Evaluation System (MULES) in MS: Evaluation of a new visual test of rapid picture naming. J Neurol Sci 2018; 394:1-5. [PMID: 30193154] [DOI: 10.1016/j.jns.2018.08.019]
Abstract
OBJECTIVE The Mobile Universal Lexicon Evaluation System (MULES) is a test of rapid picture naming that is under investigation for concussion. MULES captures an extensive visual network, including pathways for eye movements, color perception, memory and object recognition. The purpose of this study was to introduce the MULES to visual assessment of patients with MS, and to examine associations with other tests of afferent and efferent visual function. METHODS We administered the MULES in addition to binocular measures of low-contrast letter acuity (LCLA), high-contrast visual acuity (VA) and the King-Devick (K-D) test of rapid number naming in an MS cohort and in a group of disease-free controls. RESULTS Among 24 patients with MS (median age 36 years, range 20-72, 64% female) and 22 disease-free controls (median age 34 years, range 19-59, 57% female), MULES test times were greater (worse) among the patients (60.0 vs. 40.0 s). Accounting for age, MS vs. control status was a predictor of MULES test times (P = .01, logistic regression). Faster testing times were noted among patients with MS who had greater (better) performance on binocular LCLA at 2.5% contrast (P < .001, linear regression, accounting for age), binocular high-contrast VA (P < .001), and K-D testing (P < .001). Both groups demonstrated approximately 10-s improvements in MULES test times between trials 1 and 2 (P < .0001, paired t-tests). CONCLUSION The MULES test, a complex task of rapid picture naming, involves an extensive visual network that captures eye movements, color perception and the characterization of objects. Color recognition, a key component of this novel assessment, is early in object processing and requires area V4 and the inferior temporal projections. MULES scores reflect performance of LCLA, a widely used measure of visual function in MS clinical trials. These results provide evidence that the MULES test can add efficient visual screening to the assessment of patients with MS.
Affiliation(s)
- Meagan Seay
- Department of Neurology, New York University School of Medicine, New York, NY, USA
- Omar Akhand
- Department of Neurology, New York University School of Medicine, New York, NY, USA
- Matthew S Galetta
- Department of Neurology, New York University School of Medicine, New York, NY, USA
- Lucy Cobbs
- Department of Neurology, New York University School of Medicine, New York, NY, USA
- Lisena Hasanaj
- Department of Neurology, New York University School of Medicine, New York, NY, USA
- Prin Amorapanth
- Department of Physical Medicine and Rehabilitation, New York University School of Medicine, New York, NY, USA
- John-Ross Rizzo
- Department of Neurology, New York University School of Medicine, New York, NY, USA; Department of Physical Medicine and Rehabilitation, New York University School of Medicine, New York, NY, USA
- Rachel Nolan
- Department of Neurology, New York University School of Medicine, New York, NY, USA
- Liliana Serrano
- Department of Neurology, New York University School of Medicine, New York, NY, USA
- Janet C Rucker
- Department of Neurology, New York University School of Medicine, New York, NY, USA; Department of Ophthalmology, New York University School of Medicine, New York, NY, USA
- Steven L Galetta
- Department of Neurology, New York University School of Medicine, New York, NY, USA; Department of Ophthalmology, New York University School of Medicine, New York, NY, USA
- Laura J Balcer
- Department of Neurology, New York University School of Medicine, New York, NY, USA; Department of Population Health, New York University School of Medicine, New York, NY, USA; Department of Ophthalmology, New York University School of Medicine, New York, NY, USA