26
Orekhova EV, Stroganova TA, Schneiderman JF, Lundström S, Riaz B, Sarovic D, Sysoeva OV, Brant G, Gillberg C, Hadjikhani N. Neural gain control measured through cortical gamma oscillations is associated with sensory sensitivity. Hum Brain Mapp 2019; 40:1583-1593. [PMID: 30549144] [DOI: 10.1002/hbm.24469]
Abstract
Gamma oscillations facilitate information processing by shaping the excitatory input/output of neuronal populations. Recent studies in humans and nonhuman primates have shown that strong excitatory drive to the visual cortex leads to suppression of induced gamma oscillations, which may reflect inhibitory-based gain control of network excitation. The efficiency of the gain control measured through gamma oscillations may in turn affect sensory sensitivity in everyday life. To test this prediction, we assessed the link between self-reported sensitivity and changes in magnetoencephalographic gamma oscillations as a function of motion velocity of high-contrast visual gratings. The induced gamma oscillations increased in frequency and decreased in power with increasing stimulation intensity. As expected, weaker suppression of the gamma response correlated with sensory hypersensitivity. The robustness of this result was confirmed by its replication in two samples: neurotypical subjects and people with autism, who had generally elevated sensory sensitivity. We conclude that intensity-related suppression of the gamma response is a promising biomarker of homeostatic control of the excitation-inhibition balance in the visual cortex.
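The velocity-related suppression described above can be illustrated with a toy calculation. This is a hypothetical sketch: the abstract does not state the exact suppression metric, so the least-squares-slope definition below is our assumption, and all numeric values are illustrative.

```python
def suppression_slope(velocities, gamma_power):
    """Least-squares slope of gamma power vs. stimulation velocity.

    A more negative slope means stronger velocity-related suppression of
    the gamma response (hypothetical index, not the authors' exact metric).
    """
    n = len(velocities)
    mx = sum(velocities) / n
    my = sum(gamma_power) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(velocities, gamma_power))
    var = sum((x - mx) ** 2 for x in velocities)
    return cov / var

# Toy data: relative gamma power falling as grating velocity rises.
velocities = [1.2, 3.6, 6.0]          # deg/s (illustrative values)
strong_suppression = [1.0, 0.6, 0.2]  # power drops steeply
weak_suppression = [1.0, 0.9, 0.8]    # power drops shallowly

print(suppression_slope(velocities, strong_suppression))  # more negative
print(suppression_slope(velocities, weak_suppression))    # closer to zero
```

Under the abstract's account, observers whose slope looks like the second (weaker-suppression) case would be the ones reporting sensory hypersensitivity.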
27
Song Y, Wang H. Motion-induced position mis-localization predicts the severity of Alzheimer's disease. J Neuropsychol 2019; 14:333-345. [PMID: 30859737] [DOI: 10.1111/jnp.12181]
Abstract
Patients with Alzheimer's disease (AD) often exhibit motion processing deficits. It is unclear whether the localization of moving objects - a perceptual process tightly linked to motion - is impaired or intact in AD. In this study, we used the phenomenon of illusory position shift induced by motion as a behavioural paradigm to probe how spatial representation differs between AD patients and healthy elderly controls. We measured the magnitude of motion-induced position shift in a group of AD participants (N = 24) and age-matched elderly observers (N = 24). We found that AD patients showed weakened position mis-localization, but only for motion stimuli of slow speeds. For fast motion, position mis-localization did not differ significantly between groups. Furthermore, we showed that the magnitude of position mis-localization can predict the severity of AD; that is, patients with more severe symptoms showed less pronounced position mis-localization. Our results suggest that AD pathology impacts not only motion processing per se, but also perceptual processes related to motion, such as the localization of moving objects.
28
La Scaleia B, Lacquaniti F, Zago M. Body orientation contributes to modelling the effects of gravity for target interception in humans. J Physiol 2019; 597:2021-2043. [PMID: 30644996] [DOI: 10.1113/jp277469]
Abstract
KEY POINTS It is known that interception of targets accelerated by gravity involves internal models coupled with visual signals. Non-visual signals related to head and body orientation relative to gravity may also contribute, although their role is poorly understood. In a novel experiment, we asked pitched observers to hit a virtual target approaching with an acceleration that was either coherent or incoherent with their pitch-tilt. Initially, the timing errors were large and independent of the coherence between target acceleration and observer's pitch. With practice, however, the timing errors became substantially smaller in the coherent conditions. The results show that information about head and body orientation can contribute to modelling the effects of gravity on a moving target. Orientation cues from vestibular and somatosensory signals might be integrated with visual signals in the vestibular cortex, where the internal model of gravity is assumed to be encoded.
ABSTRACT Interception of moving targets relies on visual signals and internal models. Less is known about the additional contribution of non-visual cues about head and body orientation relative to gravity. We took advantage of Galileo's law of motion along an incline to demonstrate the effects of vestibular and somatosensory cues about head and body orientation on interception timing. Participants were asked to hit a ball rolling in a gutter towards the eyes, resulting in image expansion. The scene was presented in a head-mounted display, without any visual information about gravity direction. In separate blocks of trials participants were pitched backwards by 20° or 60°, whereas ball acceleration was randomized across trials so as to be compatible with rolling down a slope of 20° or 60°.
Initially, the timing errors were large, independently of the coherence between ball acceleration and pitch angle, consistent with responses based exclusively on visual information because visual stimuli were identical at both tilts. At the end of the experiment, however, the timing errors were systematically smaller in the coherent conditions than the incoherent ones. Moreover, the responses were significantly (P = 0.007) earlier when participants were pitched by 60° than when they were pitched by 20°. Therefore, practice with the task led to incorporation of information about head and body orientation relative to gravity for response timing. Instead, posture did not affect response timing in a control experiment in which participants hit a static target in synchrony with the last of a predictable series of stationary audiovisual stimuli.
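Galileo's law referenced in this entry can be made concrete with a short calculation. The abstract only says the accelerations were compatible with rolling down a 20° or 60° slope; the solid-sphere rolling factor of 5/7 below is our assumption (a frictionless slide would give a = g·sin θ instead).

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def rolling_acceleration(theta_deg, g=G):
    """Acceleration of a solid sphere rolling without slipping down an incline.

    a = g*sin(theta) / (1 + I/(m*r^2)) with I = (2/5)*m*r^2 for a solid
    sphere, giving a = (5/7)*g*sin(theta).  The rolling-sphere model is
    our assumption, not stated in the abstract.
    """
    return (5.0 / 7.0) * g * math.sin(math.radians(theta_deg))

print(round(rolling_acceleration(20), 2))  # ~2.40 m/s^2
print(round(rolling_acceleration(60), 2))  # ~6.07 m/s^2
```

The 60° slope yields roughly 2.5 times the acceleration of the 20° slope, which is the physical regularity that coherent posture cues could help the observers exploit for response timing.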
29
Lange-Malecki B, Treue S, Rothenberger A, Albrecht B. Cognitive Control Over Visual Motion Processing - Are Children With ADHD Especially Compromised? A Pilot Study of Flanker Task Event-Related Potentials. Front Hum Neurosci 2018; 12:491. [PMID: 30568588] [PMCID: PMC6290085] [DOI: 10.3389/fnhum.2018.00491]
Abstract
Performance deficits and diminished brain activity during cognitive control and error processing are frequently reported in attention deficit/hyperactivity disorder (ADHD), indicating a “top-down” deficit in executive attention. So far, these findings are almost exclusively based on the processing of static visual forms, neglecting the importance of visual motion processing in everyday life as well as important attentional and neuroanatomical differences between processing static forms and visual motion. For the current study, we contrasted performance and electrophysiological parameters associated with cognitive control from two flanker tasks using static stimuli and moving random dot patterns. Behavioral data and event-related potentials were recorded from 16 boys with ADHD (combined type) and 26 controls (aged 8–15 years). The ADHD group showed lower accuracy, especially for moving stimuli, and prolonged response times for both stimulus types. Analyses of electrophysiological parameters of cognitive control revealed trends for diminished N2-enhancements and smaller error negativities (medium effect sizes), and significantly lower error positivities (large effect sizes) compared with controls, similarly for both static and moving stimuli. Taken together, the study supports evidence that motion processing is not fully developed in childhood and that the cognitive control deficit in ADHD is of higher order and independent of stimulus type.
30
Chaplin TA, Rosa MGP, Lui LL. Auditory and Visual Motion Processing and Integration in the Primate Cerebral Cortex. Front Neural Circuits 2018; 12:93. [PMID: 30416431] [PMCID: PMC6212655] [DOI: 10.3389/fncir.2018.00093]
Abstract
The ability of animals to detect motion is critical for survival, and errors or even delays in motion perception may prove costly. In the natural world, moving objects in the visual field often produce concurrent sounds. Thus, it can be highly advantageous to detect motion from sensory signals of either modality, and to integrate them to produce more reliable motion perception. A great deal of progress has been made in understanding how visual motion perception is governed by the activity of single neurons in the primate cerebral cortex, but far less progress has been made in understanding both auditory motion and audiovisual motion integration. Here, we review the key cortical regions for motion processing, focussing on translational motion. We compare the representations of space and motion in the visual and auditory systems, and examine how single neurons in these two sensory systems encode the direction of motion. We also discuss the way in which humans integrate audio and visual motion cues, and the regions of the cortex that may mediate this process.
31
Yu X, Gu Y. Probing Sensory Readout via Combined Choice-Correlation Measures and Microstimulation Perturbation. Neuron 2018; 100:715-727.e5. [PMID: 30244884] [DOI: 10.1016/j.neuron.2018.08.034]
Abstract
It is controversial whether covariation between neuronal activity and perceptual choice (i.e., choice correlation) reflects the functional readout of sensory signals. Here, we combined choice-correlation measures and electrical microstimulation on a site-to-site basis in the medial superior temporal area (MST), middle temporal area (MT), and ventral intraparietal area (VIP) while macaques discriminated between motion directions in both fine and coarse tasks. Microstimulation generated comparable effects between tasks but heterogeneous effects across and within brain regions. Within the MST and MT, microstimulation significantly biased an animal's choice toward the sensory preference, rather than the choice-related signals, of the stimulated units. This was particularly evident for sites with conflicting preferences of sensory and choice-related signals. In the VIP, microstimulation failed to produce significant effects in either task despite the strong choice correlations present in this area. Our results suggest that sensory readout may not be inferred from choice-related signals during perceptual decision-making tasks.
32
Handa T, Mikami A. Neuronal correlates of motion-defined shape perception in primate dorsal and ventral streams. Eur J Neurosci 2018; 48:3171-3185. [PMID: 30118167] [DOI: 10.1111/ejn.14121]
Abstract
Human and non-human primates can readily perceive the shape of objects using visual motion. Classically, shape and motion are considered to be separately processed via ventral and dorsal cortical pathways, respectively. However, many lines of anatomical and physiological evidence indicate that these two pathways are likely interconnected at some stage. For motion-defined shape perception, the two pathways should interact, because the ventral pathway must utilize motion, which the dorsal pathway processes, to extract the shape signal. However, it is unknown how interactions between cortical pathways are involved in the neural mechanisms underlying motion-defined shape perception. We review evidence from psychophysical, lesion, neuroimaging and physiological research on motion-defined shape perception and then discuss the effects of behavioral demands on neural activity in ventral and dorsal cortical areas. Further, we discuss the functions of candidate areas at two levels: early and higher-order cortical areas. At the early level, the extrastriate area V4 and the middle temporal (MT) area, which are reciprocally connected, are plausible sites for extracting the shape and/or constituent parts of shape from motion cues, because their neural dynamics differ from those during luminance-defined shape perception. On the other hand, among higher-order visual areas, the anterior superior temporal sulcus likely contributes to cue-invariant shape recognition rather than cue-dependent shape processing. We suggest that the sharing of information about motion and shape between early visual areas in the dorsal and ventral pathways depends on visual cues and behavioral requirements, indicating an interplay between the pathways.
33
Visually Evoked Response Differences to Contrast and Motion in Children with Autism Spectrum Disorder. Brain Sci 2018; 8:brainsci8090160. [PMID: 30149500] [PMCID: PMC6162529] [DOI: 10.3390/brainsci8090160]
Abstract
High-density electroencephalography (EEG) was used to examine the utility of the P1 event-related potential (ERP) as a marker of visual motion sensitivity to luminance-defined, low-spatial-frequency drifting gratings in 16 children with autism and 16 neurotypical children. Children with autism displayed enhanced sensitivity to large, high-contrast, low-spatial-frequency stimuli, as indexed by significantly shorter P1 response latencies to large vs. small gratings. The current study also found that children with autism had larger amplitude responses to large gratings irrespective of contrast. A linear regression established that P1 adaptive mean amplitude for large, high-contrast sinusoidal gratings significantly predicted hyperresponsiveness item mean scores on the Sensory Experiences Questionnaire for children with autism, but not for neurotypical children. We conclude that children with autism have differences in the mechanisms that underlie low-level visual processing, potentially related to altered visual spatial suppression or contrast gain control.
34
Chen Q, Wei W. Stimulus-dependent engagement of neural mechanisms for reliable motion detection in the mouse retina. J Neurophysiol 2018; 120:1153-1161. [PMID: 29897862] [DOI: 10.1152/jn.00716.2017]
Abstract
Direction selectivity is a fundamental computation in the visual system and is first computed by the direction-selective circuit in the mammalian retina. Although landmark discoveries on the neural basis of direction selectivity have been made in the rabbit, many technological advances designed for the mouse have emerged, making this organism a favored model for investigating the direction-selective circuit at the molecular, synaptic, and network levels. Studies using diverse motion stimuli in the mouse retina demonstrate that retinal direction selectivity is implemented by multilayered mechanisms. This review begins with a set of central mechanisms that are engaged under a wide range of visual conditions and then focuses on additional layers of mechanisms that are dynamically recruited under different visual stimulus conditions. Together, recent findings allude to an emerging theme: robust motion detection in the natural environment requires flexible neural mechanisms.
35
Chang YCC, Khan S, Taulu S, Kuperberg G, Brown EN, Hämäläinen MS, Temereanca S. Left-Lateralized Contributions of Saccades to Cortical Activity During a One-Back Word Recognition Task. Front Neural Circuits 2018; 12:38. [PMID: 29867372] [PMCID: PMC5964218] [DOI: 10.3389/fncir.2018.00038]
Abstract
Saccadic eye movements are an inherent component of natural reading, yet their contribution to information processing at subsequent fixation remains elusive. Here we use anatomically-constrained magnetoencephalography (MEG) to examine cortical activity following saccades as healthy human subjects engaged in a one-back word recognition task. This activity was compared with activity following external visual stimulation that mimicked saccades. A combination of procedures was employed to eliminate saccadic ocular artifacts from the MEG signal. Both saccades and saccade-like external visual stimulation produced early-latency responses beginning ~70 ms after onset in occipital cortex and spreading through the ventral and dorsal visual streams to temporal, parietal and frontal cortices. Robust differential activity following the onset of saccades vs. similar external visual stimulation emerged during 150-350 ms in a left-lateralized cortical network. This network included: (i) left lateral occipitotemporal (LOT) and nearby inferotemporal (IT) cortex; (ii) left posterior Sylvian fissure (PSF) and nearby multimodal cortex; and (iii) medial parietooccipital (PO), posterior cingulate and retrosplenial cortices. Moreover, this left-lateralized network colocalized with word repetition priming effects. Together, results suggest that central saccadic mechanisms influence a left-lateralized language network in occipitotemporal and temporal cortex above and beyond saccadic influences at preceding stages of information processing during visual word recognition.
36
Huk AC, Katz LN, Yates JL. The Role of the Lateral Intraparietal Area in (the Study of) Decision Making. Annu Rev Neurosci 2018; 40:349-372. [PMID: 28772104] [DOI: 10.1146/annurev-neuro-072116-031508]
Abstract
Over the past two decades, neurophysiological responses in the lateral intraparietal area (LIP) have received extensive study for insight into decision making. In a parallel manner, inferred cognitive processes have enriched interpretations of LIP activity. Because of this bidirectional interplay between physiology and cognition, LIP has served as fertile ground for developing quantitative models that link neural activity with decision making. These models stand as some of the most important frameworks for linking brain and mind, and they are now mature enough to be evaluated in finer detail and integrated with other lines of investigation of LIP function. Here, we focus on the relationship between LIP responses and known sensory and motor events in perceptual decision-making tasks, as assessed by correlative and causal methods. The resulting sensorimotor-focused approach offers an account of LIP activity as a multiplexed amalgam of sensory, cognitive, and motor-related activity, with a complex and often indirect relationship to decision processes. Our data-driven focus on multiplexing (and de-multiplexing) of various response components can complement decision-focused models and provides more detailed insight into how neural signals might relate to cognitive processes such as decision making.
37
Raharijaona T, Mawonou R, Nguyen TV, Colonnier F, Boyron M, Diperi J, Viollet S. Local Positioning System Using Flickering Infrared LEDs. Sensors 2017; 17:s17112518. [PMID: 29099743] [PMCID: PMC5713101] [DOI: 10.3390/s17112518]
Abstract
A minimalistic optical sensing device for indoor localization is proposed to estimate the relative position between the sensor and active markers using amplitude-modulated infrared light. The innovative insect-based sensor can measure azimuth and elevation angles with respect to two small and cheap active infrared light emitting diodes (LEDs) flickering at two different frequencies. In comparison to a previous lensless visual sensor that we proposed for proximal localization (less than 30 cm), we implemented: (i) a minimalistic sensor in terms of small size (10 cm3), light weight (6 g) and low power consumption (0.4 W); (ii) an Arduino-compatible demodulator for fast analog signal processing requiring low computational resources; and (iii) an indoor positioning system for a mobile robotic application. Our results confirmed that the proposed sensor was able to estimate position at a distance of 2 m with an accuracy as small as 2 cm at a sampling frequency of 100 Hz. Our sensor is also suitable for implementation in a position feedback loop for indoor robotic applications in GPS-denied environments.
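The frequency-multiplexing idea behind this sensor can be sketched in a few lines: two carriers flickering at distinct frequencies are separated from a single detector signal by synchronous (lock-in) demodulation. Everything below (frequencies, sample rate, amplitudes) is illustrative and not the authors' actual hardware parameters.

```python
import math

FS = 10_000            # sample rate, Hz (illustrative)
F1, F2 = 123.0, 321.0  # LED flicker frequencies, Hz (illustrative)
N = FS                 # one second of samples

# Synthetic photodetector signal: the sum of two flickering carriers.
A1, A2 = 0.8, 0.3      # "ground truth" amplitudes to recover
signal = [A1 * math.sin(2 * math.pi * F1 * n / FS)
          + A2 * math.sin(2 * math.pi * F2 * n / FS)
          for n in range(N)]

def lock_in_amplitude(sig, freq, fs):
    """Recover one carrier's amplitude by mixing the signal with quadrature
    references at that frequency and averaging (the low-pass step of a
    lock-in amplifier); the other carrier averages out."""
    n = len(sig)
    i = sum(s * math.sin(2 * math.pi * freq * k / fs) for k, s in enumerate(sig)) / n
    q = sum(s * math.cos(2 * math.pi * freq * k / fs) for k, s in enumerate(sig)) / n
    return 2.0 * math.hypot(i, q)

print(round(lock_in_amplitude(signal, F1, FS), 3))  # ~0.8
print(round(lock_in_amplitude(signal, F2, FS), 3))  # ~0.3
```

In the actual sensor, the per-LED amplitudes recovered this way would then be mapped to azimuth and elevation angles; that mapping depends on the optics and is not modeled here.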
38
Page WK, Duffy CJ. Path perturbation detection tasks reduce MSTd neuronal self-movement heading responses. J Neurophysiol 2017; 119:124-133. [PMID: 29046430] [DOI: 10.1152/jn.00958.2016]
Abstract
We presented optic flow and real movement heading stimuli while recording MSTd neuronal activity. Monkeys were alternately engaged in three tasks: visual detection of optic flow heading perturbations, vestibular detection of real movement heading perturbations, and auditory detection of brief tones. Push-button RTs were fastest for tones and slower for visual and vestibular heading perturbations, suggesting that the tone detection task was easier. Neuronal heading selectivity was strongest during the tone detection task and weaker during the visual and vestibular heading perturbation detection tasks, even though heading cues were presented only in the visual and vestibular modalities. We conclude that focusing on the self-movement transients of path perturbations distracted the monkeys from their heading and reduced neuronal responsiveness to heading direction.
NEW & NOTEWORTHY Heading analysis is critical for steering and navigation. We recorded the activity of monkey cortical heading neurons during naturalistic self-movement. When the monkeys were required to respond to transient changes in their path, neuronal responses to heading direction were diminished. This suggests that the need to respond to momentary path perturbations reduces the ability to process heading direction.
39
Dopamine Activation Preserves Visual Motion Perception Despite Noise Interference of Human V5/MT. J Neurosci 2017; 36:9303-12. [PMID: 27605607] [DOI: 10.1523/jneurosci.4452-15.2016]
Abstract
When processing sensory signals, the brain must account for noise, both noise in the stimulus and that arising from within its own neuronal circuitry. Dopamine receptor activation is known to enhance both visual cortical signal-to-noise ratio (SNR) and visual perceptual performance; however, it is unknown whether these two dopamine-mediated phenomena are linked. To assess this, we used single-pulse transcranial magnetic stimulation (TMS) applied to visual cortical area V5/MT to reduce the SNR focally and thus disrupt visual motion discrimination performance to visual targets located in the same retinotopic space. The hypothesis that dopamine receptor activation enhances perceptual performance by improving cortical SNR predicts that dopamine activation should antagonize TMS disruption of visual perception. We assessed this hypothesis via a double-blinded, placebo-controlled study with the dopamine receptor agonists cabergoline (a D2 agonist) and pergolide (a D1/D2 agonist) administered in separate sessions (separated by 2 weeks) in 12 healthy volunteers in a Williams balanced-order design. TMS degraded visual motion perception when the evoked phosphene and the visual stimulus overlapped in time and space in the placebo and cabergoline conditions, but not in the pergolide condition. This suggests that dopamine D1 or combined D1 and D2 receptor activation enhances cortical SNR to boost perceptual performance. That local visual cortical excitability was unchanged across drug conditions suggests the involvement of long-range intracortical interactions in this D1 effect. Because increased internal noise (and thus lower SNR) can impair visual perceptual learning, improving visual cortical SNR via D1/D2 agonist therapy may be useful in boosting rehabilitation programs involving visual perceptual training.
SIGNIFICANCE STATEMENT In this study, we address the issue of whether dopamine activation improves visual perception despite increased sensory noise in the visual cortex. We show specifically that dopamine D1 (or combined D1/D2) receptor activation enhances the cortical signal-to-noise ratio to boost perceptual performance. Together with the previously reported effects of dopamine upon brain plasticity and learning (Wolf et al., 2003; Hansen and Manahan-Vaughan, 2014), our results suggest that combining rehabilitation with dopamine agonists could enhance both the saliency of the training signal and the long-term effects on brain plasticity to boost rehabilitation regimens for brain injury.
40
Liu LD, Pack CC. The Contribution of Area MT to Visual Motion Perception Depends on Training. Neuron 2017; 95:436-446.e3. [PMID: 28689980] [DOI: 10.1016/j.neuron.2017.06.024]
Abstract
Perceptual decisions require the transformation of raw sensory inputs into cortical representations suitable for stimulus discrimination. One of the best-known examples of this transformation involves the middle temporal area (MT) of the primate visual cortex. Area MT provides a robust representation of stimulus motion, and previous work has shown that it contributes causally to performance on motion discrimination tasks. Here we report that the strength of this contribution can be highly plastic: depending on the recent training history, pharmacological inactivation of MT can severely impair motion discrimination, or it can have little detectable influence. Further analysis of neural and behavioral data suggests that training moves the readout of motion information between MT and lower-level cortical areas. These results show that the contribution of individual brain regions to conscious perception can shift flexibly depending on sensory experience.
41
Goddard E, Solomon SG, Carlson TA. Dynamic population codes of multiplexed stimulus features in primate area MT. J Neurophysiol 2017; 118:203-218. [PMID: 28381492] [DOI: 10.1152/jn.00954.2016]
Abstract
The middle-temporal area (MT) of primate visual cortex is critical in the analysis of visual motion. Single-unit studies suggest that the response dynamics of neurons within area MT depend on stimulus features, but how these dynamics emerge at the population level, and how feature representations interact, is not clear. Here, we used multivariate classification analysis to study how stimulus features are represented in the spiking activity of populations of neurons in area MT of marmoset monkey. Using representational similarity analysis we distinguished the emerging representations of moving grating and dot field stimuli. We show that representations of stimulus orientation, spatial frequency, and speed are evident near the onset of the population response, while the representation of stimulus direction is slower to emerge and sustained throughout the stimulus-evoked response. We further found a spatiotemporal asymmetry in the emergence of direction representations. Representations for high spatial frequencies and low temporal frequencies are initially orientation dependent, while those for high temporal frequencies and low spatial frequencies are more sensitive to motion direction. Our analyses reveal a complex interplay of feature representations in area MT population response that may explain the stimulus-dependent dynamics of motion vision.
NEW & NOTEWORTHY Simultaneous multielectrode recordings can measure population-level codes that previously were only inferred from single-electrode recordings. However, many multielectrode recordings are analyzed using univariate single-electrode analysis approaches, which fail to fully utilize the population-level information. Here, we overcome these limitations by applying multivariate pattern classification analysis and representational similarity analysis to large-scale recordings from middle-temporal area (MT) in marmoset monkeys. Our analyses reveal a dynamic interplay of feature representations in area MT population response.
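A minimal version of the multivariate decoding idea described here can be sketched with a nearest-class-mean readout applied to simulated, direction-tuned spike counts. The cosine tuning model and every parameter below are illustrative stand-ins; the study itself used trained classifiers and representational similarity analysis on recorded marmoset data.

```python
import math
import random

random.seed(0)
DIRECTIONS = [0, 90, 180, 270]  # stimulus directions, degrees (illustrative)
N_NEURONS = 40
PREFS = [360.0 * i / N_NEURONS for i in range(N_NEURONS)]  # preferred directions

def population_response(direction):
    """Noisy spike counts from cosine-tuned units (toy population model)."""
    return [max(0.0, 10.0 + 8.0 * math.cos(math.radians(direction - p))
                + random.gauss(0, 2.0)) for p in PREFS]

# "Training": per-direction class means over repeated trials.
means = {d: [sum(col) / 20.0 for col in
             zip(*[population_response(d) for _ in range(20)])]
         for d in DIRECTIONS}

def decode(trial):
    """Nearest-class-mean readout, a simple stand-in for a trained classifier."""
    return min(DIRECTIONS,
               key=lambda d: sum((x - m) ** 2 for x, m in zip(trial, means[d])))

# Held-out trials: decoding should land far above the 25% chance level.
test_trials = [(d, population_response(d)) for d in DIRECTIONS for _ in range(25)]
accuracy = sum(decode(t) == d for d, t in test_trials) / len(test_trials)
print(accuracy)
```

The population-level point of the abstract is that such a decoder can recover direction from the joint pattern across neurons even when no single unit is individually decisive.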
42
Duarte JV, Costa GN, Martins R, Castelo-Branco M. Pivotal role of hMT+ in long-range disambiguation of interhemispheric bistable surface motion. Hum Brain Mapp 2017; 38:4882-4897. [PMID: 28660667] [DOI: 10.1002/hbm.23701]
Abstract
It remains an open question whether long-range disambiguation of ambiguous surface motion is achieved in early visual cortex or in higher-level regions, a question that concerns object/surface segmentation and integration mechanisms. We used a bistable moving stimulus that can be perceived either as a single pattern spanning both visual hemi-fields and moving coherently downward, or as two widely segregated, nonoverlapping component objects (one in each visual hemi-field) moving separately inward. This paradigm requires long-range integration across the vertical meridian, leading to interhemispheric binding. Our fMRI study (n = 30) revealed a close relation between activity in hMT+ and perceptual switches involving interhemispheric segregation/integration of motion signals, crucially under nonlocal conditions where components do not overlap and belong to distinct hemispheres. Higher signal changes were found in hMT+ in response to spatially segregated component (incoherent) percepts than to pattern (coherent) percepts. This did not occur in early visual cortex, unlike apparent motion, which does not entail surface segmentation. We also identified a role for top-down mechanisms in state transitions. Deconvolution analysis of switch-related changes revealed prefrontal, insula, and cingulate areas, with the right superior parietal lobule (SPL) being particularly involved. We observed that directed influences could emerge from either left or right hMT+ during bistable motion integration/segregation. The SPL also exhibited significant directed functional connectivity with hMT+ during perceptual state maintenance (Granger causality analysis). Our results suggest that long-range interhemispheric binding of ambiguous motion representations mainly reflects bottom-up processes from hMT+ during perceptual state maintenance. In contrast, state transitions may be influenced by high-level regions such as the SPL.
Collapse
|
43
|
Abstract
The perceived speed of a ring of equally spaced dots moving around a circular path appears faster as the number of dots increases (Ho & Anstis, 2013, Best Illusion of the Year contest). We measured this "spinner" effect with radial sinusoidal gratings, using a 2AFC procedure in which participants selected the faster of two briefly presented gratings of different spatial frequencies (SFs) rotating at various angular speeds. Compared with the reference stimulus of 4 c/rev (0.64 c/rad), participants consistently overestimated the angular speed of test stimuli with higher radial SFs and underestimated that of test stimuli with lower radial SFs. The spinner effect increased in magnitude but saturated rapidly as the test radial SF increased. Similar effects were observed with translating linear sinusoidal gratings of different SFs. Our results support the idea that human speed perception is biased by temporal frequency, which physically increases with SF when speed is held constant. Hence, the more dots or lines, the greater the perceived speed when they move coherently within a defined area.
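The temporal-frequency account rests on a simple physical relation: at a fixed angular speed, the rate of contrast reversals seen at any point scales with the grating's radial spatial frequency. A minimal sketch (our illustration, not code from the study; function name and values are hypothetical):

```python
# Temporal frequency of a rotating radial grating: at constant angular
# speed, a denser grating (higher radial SF) produces a higher temporal
# frequency, which the abstract proposes biases perceived speed upward.

def temporal_frequency(radial_sf_c_per_rev: float,
                       angular_speed_rev_per_s: float) -> float:
    """Contrast cycles per second seen at a fixed point on the ring."""
    return radial_sf_c_per_rev * angular_speed_rev_per_s

# Two gratings rotating at the same angular speed (0.5 rev/s):
tf_reference = temporal_frequency(4, 0.5)    # 4 c/rev reference -> 2.0 Hz
tf_dense = temporal_frequency(16, 0.5)       # denser test grating -> 8.0 Hz

# A temporal-frequency-biased observer would judge the denser grating as
# rotating faster, even though both physical speeds are identical.
assert tf_dense > tf_reference
```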
Collapse
|
44
|
Venezia JH, Vaden KI, Rong F, Maddox D, Saberi K, Hickok G. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus. Front Hum Neurosci 2017; 11:174. [PMID: 28439236 PMCID: PMC5383672 DOI: 10.3389/fnhum.2017.00174] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2016] [Accepted: 03/24/2017] [Indexed: 11/30/2022] Open
Abstract
The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli, in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual, and audiovisual stimuli produced the largest BOLD effects in anterior, posterior, and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as to speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory, moving from posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.
Collapse
|
45
|
Gaglianese A, Harvey BM, Vansteensel MJ, Dumoulin SO, Ramsey NF, Petridou N. Separate spatial and temporal frequency tuning to visual motion in human MT+ measured with ECoG. Hum Brain Mapp 2016; 38:293-307. [PMID: 27647579 DOI: 10.1002/hbm.23361] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2016] [Revised: 08/10/2016] [Accepted: 08/21/2016] [Indexed: 11/11/2022] Open
Abstract
The human middle temporal complex (hMT+) has a crucial biological relevance for the processing and detection of the direction and speed of motion in visual stimuli. Here, we characterized how neuronal populations in hMT+ encode the speed of moving visual stimuli. We evaluated human intracranial electrocorticography (ECoG) responses elicited by square-wave dartboard moving stimuli with different spatial and temporal frequencies to investigate whether hMT+ neuronal populations encode the stimulus speed directly, or whether they separate motion into its spatial and temporal components. We extracted two components from the ECoG responses: (1) the power in the high-frequency band (HFB: 65-95 Hz) as a measure of the neuronal population spiking activity and (2) a specific spectral component that followed the frequency of the stimulus's contrast reversals (SCR responses). Our results revealed that HFB neuronal population responses to visual motion stimuli exhibit distinct and independent selectivity for the spatial and temporal frequencies of the visual stimuli rather than direct speed tuning. The SCR responses did not encode the speed or the spatiotemporal frequency of the visual stimuli. We conclude that the neuronal populations measured in hMT+ are not directly tuned to stimulus speed, but instead encode speed through separate and independent spatial and temporal frequency tuning.
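The distinction the study draws can be made concrete with a toy model (our illustrative assumption, not the authors' analysis; tuning shapes and preferred values are hypothetical): a separable response factorizes into independent spatial-frequency and temporal-frequency tuning curves, whereas a truly speed-tuned response depends only on the ratio tf / sf.

```python
import math

# Toy contrast between separable SF/TF tuning and direct speed tuning.
# Speed of a drifting grating = temporal frequency / spatial frequency.

def log_gauss(x: float, mu: float, sigma: float = 1.0) -> float:
    """Log-Gaussian tuning curve with peak response 1.0 at x == mu."""
    return math.exp(-0.5 * ((math.log2(x) - math.log2(mu)) / sigma) ** 2)

def separable_response(sf: float, tf: float,
                       sf_pref: float = 1.0, tf_pref: float = 8.0) -> float:
    """Independent SF and TF tuning multiplied together."""
    return log_gauss(sf, sf_pref) * log_gauss(tf, tf_pref)

def speed_tuned_response(sf: float, tf: float,
                         speed_pref: float = 8.0) -> float:
    """Response that depends only on speed (tf / sf)."""
    return log_gauss(tf / sf, speed_pref)

# Halving SF and TF together leaves speed unchanged: a speed-tuned cell
# responds identically, while a separable cell's response drops.
assert abs(speed_tuned_response(1.0, 8.0) - speed_tuned_response(0.5, 4.0)) < 1e-9
assert separable_response(1.0, 8.0) > separable_response(0.5, 4.0)
```

The study's conclusion corresponds to the first, separable case: hMT+ population responses shifted when SF and TF were varied independently, rather than remaining invariant along constant-speed contours.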
Collapse
|
46
|
Forsberg LE, Bonde LH, Harvey MA, Roland PE. The Second Spiking Threshold: Dynamics of Laminar Network Spiking in the Visual Cortex. Front Syst Neurosci 2016; 10:65. [PMID: 27582693 PMCID: PMC4987378 DOI: 10.3389/fnsys.2016.00065] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2016] [Accepted: 07/12/2016] [Indexed: 11/30/2022] Open
Abstract
Most neurons have a threshold separating the silent non-spiking state from the state of producing temporal sequences of spikes. But neurons in vivo also have a second threshold, found recently in granular-layer neurons of the primary visual cortex, separating spontaneous ongoing spiking from visually evoked spiking driven by sharp transients. Here we examine whether this second threshold exists outside the granular layer, and we examine the details of transitions between spiking states in ferrets exposed to moving objects. We found the second threshold, separating spiking states evoked by stationary and moving visual stimuli from the spontaneous ongoing spiking state, in all layers and zones of areas 17 and 18, indicating that the second threshold is a property of the network. Spontaneous and evoked spiking can thus be easily distinguished. In addition, the trajectories of spontaneous ongoing states were slow and frequently changed direction. In single trials, sharp as well as smooth and slow transients transform the trajectories so that they become outward directed and fast, crossing the threshold into the evoked state. Although the speeds at which the evoked states evolve differ, the same domain of the state space is explored, indicating uniformity of the evoked states. All evoked states return to the spontaneous ongoing spiking state, as in a typical mono-stable dynamical system. In single trials, neither the original spiking rates nor the temporal evolution in state space could distinguish simple visual scenes.
Collapse
|
47
|
Abstract
Cortical areas, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the ventral intraparietal area (VIP), have been shown to integrate visual and vestibular self-motion signals. Area V6 is interconnected with areas MSTd and VIP, allowing for the possibility that V6 also integrates visual and vestibular self-motion cues. An alternative hypothesis in the literature is that V6 does not use these sensory signals to compute heading but instead discounts self-motion signals to represent object motion. However, the responses of V6 neurons to visual and vestibular self-motion cues have never been studied, thus leaving the functional roles of V6 unclear. We used a virtual reality system to examine the 3D heading tuning of macaque V6 neurons in response to optic flow and inertial motion stimuli. We found that the majority of V6 neurons are selective for heading defined by optic flow. However, unlike areas MSTd and VIP, V6 neurons are almost universally unresponsive to inertial motion in the absence of optic flow. We also explored the spatial reference frames of heading signals in V6 by measuring heading tuning for different eye positions, and we found that the visual heading tuning of most V6 cells was eye-centered. Similar to areas MSTd and VIP, the population of V6 neurons was best able to discriminate small variations in heading around forward and backward headings. Our findings support the idea that V6 is involved primarily in processing visual motion signals and does not appear to play a role in visual-vestibular integration for self-motion perception. SIGNIFICANCE STATEMENT To understand how we successfully navigate our world, it is important to understand which parts of the brain process cues used to perceive our direction of self-motion (i.e., heading). 
Cortical area V6 has been implicated in heading computations based on human neuroimaging data, but direct measurements of heading selectivity in individual V6 neurons have been lacking. We provide the first demonstration that V6 neurons carry 3D visual heading signals, which are represented in an eye-centered reference frame. In contrast, we found almost no evidence for vestibular heading signals in V6, indicating that V6 is unlikely to contribute to multisensory integration of heading signals, unlike other cortical areas. These findings provide important constraints on the roles of V6 in self-motion perception.
Collapse
|
48
|
Abstract
It is well established that ongoing cognitive functions affect the trajectories of limb movements mediated by corticospinal circuits, suggesting an interaction between cognition and motor action. Although there are also many demonstrations that decision formation is reflected in the ongoing neural activity in oculomotor brain circuits, it is not known whether the decision-related activity in those oculomotor structures interacts with eye movements that are decision irrelevant. Here we tested for an interaction between decisions and instructed saccades unrelated to the perceptual decision. Observers performed a direction-discrimination decision-making task, but made decision-irrelevant saccades before registering their motion decision with a button press. Probing the oculomotor circuits with these decision-irrelevant saccades during decision making revealed that saccade reaction times and peak velocities were influenced in proportion to motion strength, and depended on the directional congruence between decisions about visual motion and decision-irrelevant saccades. These interactions disappeared when observers passively viewed the motion stimulus but still made the same instructed saccades, and when manual reaction times were measured instead of saccade reaction times, confirming that these interactions result from decision formation as opposed to visual stimulation, and are specific to the oculomotor system. Our results demonstrate that oculomotor function can be affected by decision formation, even when decisions are communicated without eye movements, and that this interaction has a directionally specific component. These results not only imply a continuous and interactive mixture of motor and decision signals in oculomotor structures, but also suggest nonmotor recruitment of oculomotor machinery in decision making.
Collapse
|
49
|
Schwiedrzik CM, Bernstein B, Melloni L. Motion along the mental number line reveals shared representations for numerosity and space. eLife 2016; 5. [PMID: 26771249 PMCID: PMC4764558 DOI: 10.7554/elife.10806] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2015] [Accepted: 01/14/2016] [Indexed: 12/04/2022] Open
Abstract
Perception of number and space are tightly intertwined. It has been proposed that this is due to 'cortical recycling', where numerosity processing takes over circuits originally processing space. Do such 'recycled' circuits retain their original functionality? Here, we investigate interactions between numerosity and motion direction, two functions that both localize to parietal cortex. We describe a new phenomenon in which visual motion direction adapts nonsymbolic numerosity perception, giving rise to a repulsive aftereffect: motion to the left adapts small numbers, leading to overestimation of numerosity, while motion to the right adapts large numbers, resulting in underestimation. The reference frame of this effect is spatiotopic. Together with the tuning properties of the effect, this suggests that motion direction-numerosity cross-adaptation may occur in a homolog of area LIP. 'Cortical recycling' thus expands but does not obliterate the functions originally performed by the recycled circuit, allowing for shared computations across domains. DOI: http://dx.doi.org/10.7554/eLife.10806.001
Our sense of number is thought to have emerged from the circuits of cortical neurons in the brain that originally represent space, a process known as 'cortical recycling'. Accordingly, our perception of space and number are tightly intertwined: for example, people think about numbers on a mental number line, where smaller numbers are mapped to the left and larger numbers to the right. Also, damage to a brain region called the parietal cortex disrupts both space and number processing. If number processing recycles the neurons that encode space, what form does this appropriation take? Recycling could preserve the original behavior of the neurons (processing space), thus enriching the neurons' functional repertoire with a new capacity (processing number). Alternatively, the newly developed role could replace the original one, such that space and number cohabitate the same brain area but use separate neurons. To disentangle these hypotheses, Schwiedrzik et al. used a technique called 'perceptual adaptation'. Here, continuously showing someone a particular feature eventually exhausts the neurons that respond to that feature. Neurons that respond to the opposite feature are less exhausted and come to dominate perception. Consequently, people perceive the opposite of what they have adapted to. For example, after continuously seeing dots moving to the right, people perceive stationary dots as moving to the left. Similarly, after being exposed to large numbers of dots, they underestimate how many dots they see. If the same neurons process numbers and space, then adapting to movement in a particular direction should influence number perception. During Schwiedrzik et al.'s experiments, volunteers watched moving dots on a computer screen. After seeing dots move to the right, they underestimated the number of dots that then appeared on the screen. This is likely because larger numbers are mentally mapped to the right, and seeing rightward motion for a long time exhausted these neurons. This means that neurons responding to smaller numbers (mentally mapped to the left) were more active when the new dots were presented, leading the volunteers to underestimate how many dots they saw. Adapting to leftward motion led to the opposite effect, with volunteers overestimating the number of dots. Thus, motion can literally move us up and down the number line. These results indicate that the same neurons encode both space and numbers. Cortical recycling does not erase the neurons' original behavior: instead, neurons may carry out the same computations when processing numbers or space. This would allow the brain to add new functionality without sacrificing any of the computational resources for processing space.
DOI: http://dx.doi.org/10.7554/eLife.10806.002
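The direction of the reported aftereffect can be captured in a toy opponent-channel sketch (our illustrative assumption, not the authors' model; the function, direction coding, and gain value are hypothetical): adaptation fatigues the channel associated with one end of the mental number line, tilting the readout toward the opposite end.

```python
# Toy model of the motion-numerosity repulsive aftereffect: rightward
# motion fatigues the large-number channel (underestimation); leftward
# motion fatigues the small-number channel (overestimation).

def perceived_numerosity(true_n: float, adapt_direction: int,
                         gain: float = 0.15) -> float:
    """adapt_direction: +1 after rightward adaptation, -1 after leftward,
    0 without adaptation. gain is a hypothetical adaptation strength."""
    return true_n * (1.0 - gain * adapt_direction)

assert perceived_numerosity(20, +1) < 20  # rightward motion -> underestimate
assert perceived_numerosity(20, -1) > 20  # leftward motion -> overestimate
assert perceived_numerosity(20, 0) == 20  # no adaptation -> veridical
```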
Collapse
|
50
|
Abstract
When an object moves in the visual field, its motion evokes a streak of activity on the retina, and these incoming retinal signals lead to robust oculomotor commands: corrections are observed when the trajectory of the interceptive saccade is perturbed by microstimulation in the superior colliculus. The present study complements a previous perturbation study by investigating, in the head-restrained monkey, the generation of saccades toward a transient moving target (100-200 ms). We tested whether the saccades land on the average of antecedent target positions or beyond the location where the target disappeared. Using target motions with different speed profiles, we also examined the sensitivity of the process that converts time-varying retinal signals into saccadic oculomotor commands. The results show that, for identical overall target displacements on the visual display, saccades toward a faster target land beyond the endpoint of saccades toward a slower-moving target. The rate of change in speed matters in the visuomotor transformation: in response to identical overall target displacements and durations, saccades have smaller amplitude when they are made toward an accelerating target than toward a decelerating one. Moreover, motion-related signals have different weights depending on their timing relative to target onset: early signals are more influential in the specification of saccade amplitude than later signals. We discuss the "predictive" properties of the visuo-saccadic system and the nature of the location where the saccades land, after providing some critical comments on the "hic-et-nunc" hypothesis (Fleuriet and Goffart, 2012). SIGNIFICANCE STATEMENT Complementing the work of Fleuriet and Goffart (2012), this study contributes to the more general scientific effort to understand how ongoing action is dynamically and adaptively adjusted to the current spatiotemporal properties of its goal. 
Using the saccadic eye movement as a probe, we provide results that are critical for investigating and understanding the neural basis of motion extrapolation and prediction.
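The early-over-late weighting described above can be illustrated with a minimal sketch (our speculative illustration, not the authors' model; the decay scheme and sample positions are hypothetical): if the landing position is a weighted average of sampled target positions with earlier samples weighted more heavily, an accelerating target (displacement concentrated late) yields a smaller amplitude than a decelerating one with the same overall displacement.

```python
# Toy early-weighted averaging of target position samples (in degrees).

def landing_position(positions: list, decay: float = 0.5) -> float:
    """Weighted mean of target positions; later samples are exponentially
    down-weighted, so early motion signals dominate the estimate."""
    weights = [(1.0 - decay) ** i for i in range(len(positions))]
    return sum(w * p for w, p in zip(weights, positions)) / sum(weights)

# Identical overall displacement (0 -> 6 deg) and duration, but different
# speed profiles:
accelerating = [0.0, 1.0, 3.0, 6.0]  # most displacement occurs late
decelerating = [0.0, 3.0, 5.0, 6.0]  # most displacement occurs early

# Early-weighted averaging predicts a smaller amplitude for the
# accelerating target, matching the reported asymmetry.
assert landing_position(accelerating) < landing_position(decelerating)
```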
Collapse
|