1. Crossmodal plasticity following short-term monocular deprivation. Neuroimage 2023; 274:120141. PMID: 37120043. DOI: 10.1016/j.neuroimage.2023.120141.
Abstract
A brief period of monocular deprivation (MD) induces short-term plasticity of the adult visual system. Whether MD elicits neural changes beyond visual processing remains unclear. Here, we assessed the specific impact of MD on neural correlates of multisensory processes. Neural oscillations associated with visual and audio-visual processing were measured for both the deprived and the non-deprived eye. Results revealed that MD changed neural activity associated with visual and multisensory processes in an eye-specific manner. Selectively for the deprived eye, alpha synchronization was reduced within the first 150 ms of visual processing. Conversely, gamma activity was enhanced in response to audio-visual events only for the non-deprived eye within 100-300 ms after stimulus onset. The analysis of gamma responses to unisensory auditory events revealed that MD elicited a crossmodal upweighting for the non-deprived eye. Distributed source modeling suggested that the right parietal cortex played a major role in the neural effects induced by MD. Finally, visual and audio-visual processing alterations emerged for the induced component of the neural oscillations, indicating a prominent role of feedback connectivity. Results reveal the causal impact of MD on both unisensory (visual and auditory) and multisensory (audio-visual) processes and their frequency-specific profiles. These findings support a model in which MD increases excitability to visual events for the deprived eye and to audio-visual and auditory input for the non-deprived eye.
2. Effects of invisible lip movements on phonetic perception. Sci Rep 2023; 13:6478. PMID: 37081084. PMCID: PMC10119180. DOI: 10.1038/s41598-023-33791-y.
Abstract
We investigated whether 'invisible' visual information, i.e., visual information that is not consciously perceived, could affect auditory speech perception. Repeated exposure to McGurk stimuli (auditory /ba/ with visual [ga]) temporarily changes the perception of the auditory /ba/ into a 'da' or 'ga'. This altered auditory percept persists even after the presentation of the McGurk stimuli when the auditory stimulus is presented alone (McGurk aftereffect). We exploited this aftereffect and presented the auditory /ba/ either alone (No Face) or with a masked face articulating visual [ba] (Congruent Invisible) or visual [ga] (Incongruent Invisible). Thus, we measured the extent to which the invisible faces could undo or prolong the McGurk aftereffects. In a further control condition, the incongruent faces remained unmasked and thus visible, resulting in four conditions in total. Visibility was defined by the participants' subjective dichotomous reports ('visible' or 'invisible'). The results showed that the Congruent Invisible condition reduced the McGurk aftereffects compared with the other conditions, while the Incongruent Invisible condition did not differ from the No Face condition. These results suggest that 'invisible' visual information that is not consciously perceived can affect phonetic perception, but only when visual information is congruent with auditory information.
3. Switching between visuomotor mappings: Learning absolute mappings or relative shifts. J Vis 2013; 13(2):26. DOI: 10.1167/13.2.26.
4. Back to the Future: Recalibration of visuomotor simultaneity perception to delayed and advanced visual feedback. J Vis 2012. DOI: 10.1167/12.9.135.
5.
6.
7. Texture and haptic cues in slant discrimination: Measuring the effect of texture type on cue combination. J Vis 2010. DOI: 10.1167/3.12.26.
8. The Kalman Filter as a model of visuo-motor adaptation behavior. J Vis 2010. DOI: 10.1167/6.6.930.
9. Do humans generate a representation of their pointing variability? J Vis 2010. DOI: 10.1167/6.6.924.
10. Recruitment of an invisible depth cue. J Vis 2010. DOI: 10.1167/9.8.34.
11. The effect of walking on perceived visual speed depends on visual speed. J Vis 2010. DOI: 10.1167/8.6.1146.
12. Integration of shape information from vision and touch: Optimal perception and neural correlates. J Vis 2010. DOI: 10.1167/6.6.179.
13.
14. Simple stimulus metrics vs. Gestalt in high-level aftereffects. J Vis 2010. DOI: 10.1167/5.8.250.
15.
16. Variance predicts visual-haptic adaptation in shape perception. J Vis 2010. DOI: 10.1167/2.7.670.
17. When does haptics rule in visual-haptic perception? J Vis 2010. DOI: 10.1167/1.3.482.
18. Discriminating the odd: Boundaries of visual-haptic integration. J Vis 2010. DOI: 10.1167/2.7.402.
19.
Abstract
In this study, we investigate the influence of visual feedback on haptic exploration. A haptic search task was designed in which subjects had to haptically explore a virtual display using a force-feedback device and to determine whether a target was present among distractor items. Although the target was recognizable only haptically, visual feedback of finger position or possible target positions could be given. Our results show that subjects could use visual feedback on possible target positions even in the absence of feedback on finger position. When there was no feedback on possible target locations, subjects scanned the whole display systematically. When feedback on finger position was present, subjects could make well-directed movements back to areas of interest. This was not the case without feedback on finger position, indicating that showing finger position helps to form a spatial representation of the display. In addition, we show that response time models of visual serial search do not generally apply to haptic serial search. Consequently, in teleoperation systems, for instance, it is helpful to show the position of the probe even if visual information on the scene is poor.
20. Looking in the mirror does not prevent multimodal integration. J Vis 2005. DOI: 10.1167/5.8.750.
21. Localization, not perturbation, affects visuomotor recalibration. J Vis 2005. DOI: 10.1167/5.8.871.
22. Re-learning the light source prior. J Vis 2004. DOI: 10.1167/4.8.294.
23. The quality of feedback does not affect the rate of visuomotor adaptation. J Vis 2004. DOI: 10.1167/4.8.286.
24. What is an inter-sensory object? Optimal combination of vision and touch depends on their spatial coincidence. J Vis 2004. DOI: 10.1167/4.8.140.
25.
26.
27.
Abstract
Humans use multiple sources of sensory information to estimate environmental properties. For example, the eyes and hands both provide relevant information about an object's shape. The eyes estimate shape using binocular disparity, perspective projection, etc. The hands supply haptic shape information by means of tactile and proprioceptive cues. Combining information across cues can improve estimation of object properties but may come at a cost: loss of single-cue information. We report that single-cue information is indeed lost when cues from within the same sensory modality (disparity and texture gradients in vision) are combined, but not when different modalities (vision and haptics) are combined.
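The combination scheme this abstract refers to is the standard minimum-variance (maximum-likelihood) cue-integration model: each cue is weighted by its reliability (inverse variance), so the combined estimate is more precise than either cue alone. A minimal sketch, assuming Gaussian, unbiased single-cue estimates (the function name and example values are illustrative, not from the paper):

```python
# Minimum-variance combination of two cues (e.g., visual and haptic
# estimates of an object's shape). Each cue i yields an unbiased
# estimate s_i with variance var_i; the optimal linear combination
# weights each cue by its inverse variance.

def combine_cues(s_vis, var_vis, s_hap, var_hap):
    """Return the combined estimate and its variance."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_hap)
    w_hap = 1 - w_vis
    s_comb = w_vis * s_vis + w_hap * s_hap
    # Combined variance is the harmonic combination: always at most
    # the smaller of the two single-cue variances.
    var_comb = (var_vis * var_hap) / (var_vis + var_hap)
    return s_comb, var_comb

# Example: vision twice as reliable as haptics, so it gets twice the weight.
s, v = combine_cues(s_vis=10.0, var_vis=1.0, s_hap=13.0, var_hap=2.0)
print(s, v)  # 11.0 0.666...
```

Note that the combined variance is always below the better single cue's variance, which is the precision benefit the abstract describes; the cost it reports (loss of single-cue information) concerns access to s_vis and s_hap individually after fusion, not the fused estimate itself.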
28.
Abstract
On the whole, people recognize objects best when they see the objects from a familiar view and worst when they see the objects from views that were previously occluded from sight. Unexpectedly, we found haptic object recognition to be viewpoint-specific as well, even though hand movements were unrestricted. This viewpoint dependence was due to the hands preferring the back "view" of the objects. Furthermore, when the sensory modalities (visual vs. haptic) differed between learning an object and recognizing it, recognition performance was best when the objects were rotated back-to-front between learning and recognition. Our data indicate that the visual system recognizes the front view of objects best, whereas the hand recognizes objects best from the back.
29.
Abstract
The visual system uses several signals to deduce the three-dimensional structure of the environment, including binocular disparity, texture gradients, shading and motion parallax. Although each of these sources of information is independently insufficient to yield reliable three-dimensional structure from everyday scenes, the visual system combines them by weighting the available information; altering the weights would therefore change the perceived structure. We report that haptic feedback (active touch) increases the weight of a consistent surface-slant signal relative to inconsistent signals. Thus, appearance of a subsequently viewed surface is changed: the surface appears slanted in the direction specified by the haptically reinforced signal.
30. Can Learning One Grasp Facilitate Novel Grasps? Perception 1997. DOI: 10.1068/v970228.
Abstract
We investigated whether knowledge acquired during repetitive grasping can be used to grasp a similar object differing in position or size. We conducted two experiments using a mirror to project a computer-generated image to the location of an object to be grasped. Subjects saw the image until initiation of the grasp but were unable to see either their hand or the real object. The training phase consisted of repetitive grasps to a single cube in a fixed position displaying a corresponding image. In the test phase we used the same cube in different positions but displayed only a small position-marker (experiment 1). In experiment 2 subjects grasped differently sized cubes in the trained position. To indicate size changes we displayed appropriately sized cubes at a different location. In the subsequent control phase of each experiment subjects saw fully rendered cubes in appropriate positions and sizes instead of the position-marker or size cue. Performance in the test and control phases was similar for all measured grasp parameters, including maximum preshape aperture, maximum speed, and grasp duration. In experiment 2, in which the size of the cubes changed, variability in grasp duration (±110 ms vs ±40 ms) and maximum preshape aperture (±10 mm vs ±4 mm) was greater in the test phase than in the control phase, indicating increased uncertainty in grasping. Had subjects learned a single motor routine, they would not have been able to grasp objects differing in position or size so well. Together with our previous results (Ernst et al, 1997, paper presented at ARVO) these findings indicate that subjects can make use of stored representations of an object's position and size to produce an appropriate grasp under open-loop conditions.