1
Wu C, Cui J, Xu X, Song D. The influence of virtual environment on thermal perception: physical reaction and subjective thermal perception on outdoor scenarios in virtual reality. Int J Biometeorol 2023. [PMID: 37414908] [DOI: 10.1007/s00484-023-02495-3] [Received: 10/16/2022] [Revised: 04/20/2023] [Accepted: 05/14/2023] [Indexed: 07/08/2023]
Abstract
Positive thermal perception can affect users' climate-controlling behavior, indirectly reducing a building's operational carbon emissions. Studies show that some visual elements, such as window size and light color, can influence thermal perception. Until recently, however, there has been little interest in the interaction between thermal perception and outdoor visual scenarios or natural elements such as water and trees, and little quantitative evidence has linked visual natural elements to thermal comfort. This experiment explores and quantifies the extent to which outdoor visual scenarios affect thermal perception. The experiment used a double-blind design. All tests were done in a stable laboratory environment to eliminate temperature changes, and scenarios were shown through a virtual reality (VR) headset. Forty-three participants were randomly divided into three groups that separately watched VR outdoor scenarios with natural elements, VR indoor scenarios, or a control scenario of the real laboratory, then completed a subjective questionnaire evaluating their thermal, environmental, and overall perceptions while their physiological data (heart rate, blood pressure, pulse) were recorded in real time. Results show that visual scenarios can significantly influence thermal perception (Cohen's d between groups > 0.8). Significant positive correlations were found between the key thermal perception index, thermal comfort, and visual perception indexes including visual comfort, pleasantness, and relaxation (all p ≤ 0.01). Outdoor scenarios, with better visual perception, scored higher on average in thermal comfort (M ± SD = 1.0 ± 0.7) than the indoor groups (M ± SD = 0.3 ± 1.0) while the physical environment remained unchanged. This connection between thermal and environmental perception can be used in building design. Visual exposure to pleasing outdoor environments can increase positive thermal perception and thus reduce building energy consumption. Designing positive visual environments with outdoor natural elements is not only a requirement for health but also a feasible path toward a sustainable net-zero future.
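The between-group effect size reported here (Cohen's d > 0.8) follows the standard pooled-standard-deviation formula. A minimal Python sketch, using made-up ratings rather than the study's data:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups: difference of means divided by
    the pooled (sample) standard deviation."""
    na, nb = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical thermal-comfort ratings for two viewing groups (illustrative only)
outdoor = [1.5, 0.8, 1.2, 0.4, 1.1]
indoor = [0.5, -0.4, 0.9, 0.2, 0.1]
print(cohens_d(outdoor, indoor))
```

By the usual convention, d ≈ 0.2 is a small effect, 0.5 medium, and 0.8 large, so the reported between-group d > 0.8 counts as a large effect of visual scenario on thermal perception.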
Affiliation(s)
- Chunya Wu
- Tongji University College of Architecture and Urban Planning, Shanghai, China.
- Jinyuan Cui
- Tongji University College of Architecture and Urban Planning, Shanghai, China
- Xiaowan Xu
- Tongji University College of Architecture and Urban Planning, Shanghai, China
- Dexuan Song
- Tongji University College of Architecture and Urban Planning, Shanghai, China
2
Brock K, Vine SJ, Ross JM, Trevarthen M, Harris DJ. Movement kinematic and postural control differences when performing a visuomotor skill in real and virtual environments. Exp Brain Res 2023. [PMID: 37222777] [DOI: 10.1007/s00221-023-06639-0] [Received: 01/05/2023] [Accepted: 05/15/2023] [Indexed: 05/25/2023]
Abstract
Immersive technologies, like virtual and mixed reality, pose a novel challenge for our sensorimotor systems as they deliver simulated sensory inputs that may not match those of the natural environment. These include reduced fields of view, missing or inaccurate haptic information, and distortions of 3D space; differences that may impact the control of motor actions. For instance, reach-to-grasp movements without end-point haptic feedback are characterised by slower and more exaggerated movements. A general uncertainty about sensory input may also induce a more conscious form of movement control. We tested whether a more complex skill like golf putting was also characterised by more consciously controlled movement. In a repeated-measures design, kinematics of the putter swing and postural control were compared between (i) real-world putting, (ii) VR putting, and (iii) VR putting with haptic feedback from a real ball (i.e., mixed reality). Differences in putter swing were observed both between the real world and VR, and between VR conditions with and without haptic information. Further, clear differences in postural control emerged between real and virtual putting, with both VR conditions characterised by larger postural movements, which were more regular and less complex, suggesting a more conscious form of balance control. Conversely, participants actually reported less conscious awareness of their movements in VR. These findings highlight how fundamental movement differences may exist between virtual and natural environments, which may pose challenges for transfer of learning within applications to motor rehabilitation and sport.
Affiliation(s)
- K Brock
- School of Public Health and Sport Sciences, Faculty of Health and Life Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
- S J Vine
- School of Public Health and Sport Sciences, Faculty of Health and Life Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
- J M Ross
- School of Public Health and Sport Sciences, Faculty of Health and Life Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
- M Trevarthen
- School of Public Health and Sport Sciences, Faculty of Health and Life Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
- D J Harris
- School of Public Health and Sport Sciences, Faculty of Health and Life Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK.
3
Inhibition of intentional binding by an additional sound presentation. Exp Brain Res 2023; 241:301-311. [PMID: 36510035] [DOI: 10.1007/s00221-022-06516-2] [Received: 02/25/2022] [Accepted: 11/24/2022] [Indexed: 12/14/2022]
Abstract
When a voluntary action is followed by an effect after a short delay, the interval between the action and its effect is perceived as shorter than it actually is. This phenomenon is known as intentional binding (IB). We investigated how the presentation of an additional effect influences IB between the action and the target effect, and how the timing of that additional effect matters. One sound (the target sound) was always presented 250 ms after the button was pressed, and another sound (the additional sound) was presented either simultaneously with the button press (Experiment 1) or at one of various timings before and after the target sound (Experiment 2). The results showed that IB between the action and the target sound was significantly inhibited only when the additional sound was presented prior to the target sound. This suggests that an earlier effect has an advantage over a later one in being bound to the action.
4
Gansel KS. Neural synchrony in cortical networks: mechanisms and implications for neural information processing and coding. Front Integr Neurosci 2022; 16:900715. [PMID: 36262373] [PMCID: PMC9574343] [DOI: 10.3389/fnint.2022.900715] [Received: 03/21/2022] [Accepted: 09/13/2022] [Indexed: 11/13/2022]
Abstract
Synchronization of neuronal discharges on the millisecond scale has long been recognized as a prevalent and functionally important attribute of neural activity. In this article, I review classical concepts and corresponding evidence of the mechanisms that govern the synchronization of distributed discharges in cortical networks and relate those mechanisms to their possible roles in coding and cognitive functions. To accommodate the need for a selective, directed synchronization of cells, I propose that synchronous firing of distributed neurons is a natural consequence of spike-timing-dependent plasticity (STDP) that associates cells repetitively receiving temporally coherent input: the “synchrony through synaptic plasticity” hypothesis. Neurons that are excited by a repeated sequence of synaptic inputs may learn to selectively respond to the onset of this sequence through synaptic plasticity. Multiple neurons receiving coherent input could thus actively synchronize their firing by learning to selectively respond at corresponding temporal positions. The hypothesis makes several predictions: first, the position of the cells in the network, as well as the source of their input signals, would be irrelevant as long as their input signals arrive simultaneously; second, repeating discharge patterns should get compressed until all or some part of the signals are synchronized; and third, this compression should be accompanied by a sparsening of signals. In this way, selective groups of cells could emerge that would respond to some recurring event with synchronous firing. Such a learned response pattern could further be modulated by synchronous network oscillations that provide a dynamic, flexible context for the synaptic integration of distributed signals. I conclude by suggesting experimental approaches to further test this new hypothesis.
5
Using Immersive Virtual Reality to Examine How Visual and Tactile Cues Drive the Material-Weight Illusion. Atten Percept Psychophys 2021; 84:509-518. [PMID: 34862589] [PMCID: PMC8641965] [DOI: 10.3758/s13414-021-02414-x] [Accepted: 11/13/2021] [Indexed: 11/08/2022]
Abstract
The material-weight illusion (MWI) demonstrates how our past experience with material and weight can create expectations that influence the perceived heaviness of an object. Here we used mixed reality to place touch and vision in conflict, to investigate whether the modality through which materials are presented to a lifter could influence the top-down perceptual processes driving the MWI. University students lifted equally weighted polystyrene, cork and granite cubes whilst viewing computer-generated images of the cubes in virtual reality (VR). This allowed the visual and tactile material cues to be altered, whilst all other object properties were kept constant. Representation of the objects' material in VR was manipulated to create four sensory conditions: visual-tactile matched, visual-tactile mismatched, visual differences only and tactile differences only. A robust MWI was induced across all sensory conditions, whereby the polystyrene object felt heavier than the granite object. The strength of the MWI differed across conditions, with tactile material cues having a stronger influence on perceived heaviness than visual material cues. We discuss how these results suggest a mechanism whereby multisensory integration directly impacts how top-down processes shape perception.
6
Visuomotor impairments in complex regional pain syndrome during pointing tasks. Pain 2021; 162:811-822. [PMID: 32890256] [DOI: 10.1097/j.pain.0000000000002068] [Received: 04/30/2020] [Accepted: 08/28/2020] [Indexed: 11/26/2022]
Abstract
Complex regional pain syndrome (CRPS) is thought to be characterized by cognitive deficits affecting patients' ability to represent, perceive, and use their affected limb as well as its surrounding space. This has been tested, among others, by straight-ahead tasks probing one's egocentric representation, but such experiments have led to inconsistent results. Because spatial cognitive abilities encompass various processes, we completed such evaluations by varying the sensory inputs used to perform the task. CRPS and matched control participants were asked to assess their own body midline either visually (ie, by means of a moving visual cue) or manually (ie, by straight-ahead pointing with one of their upper limbs) and to reach and point to visual targets at different spatial locations. Although the 2 former tasks only required a single sensory input to be performed (ie, either visual or proprioceptive), the latter task was based on the ability to coordinate perception of the position of one's own limb with visuospatial perception. However, in this latter task, limb position could only be estimated by proprioception, as vision of the limb was prevented. Whereas in the 2 former tasks CRPS participants' performance was not different from that of controls, they made significantly more deviation errors during the visuospatial task, regardless of the limb used to point or the direction of pointing. Results suggest that CRPS patients are not specifically characterized by difficulties in representing their body but, more particularly, in integrating somatic information (ie, proprioception) during visually guided movements of the limb.
7
Invitto S, Montinaro R, Ciccarese V, Venturella I, Fronda G, Balconi M. Smell and 3D Haptic Representation: A Common Pathway to Understand Brain Dynamics in a Cross-Modal Task. A Pilot OERP and fNIRS Study. Front Behav Neurosci 2019; 13:226. [PMID: 31616263] [PMCID: PMC6775200] [DOI: 10.3389/fnbeh.2019.00226] [Received: 05/16/2019] [Accepted: 09/11/2019] [Indexed: 11/13/2022]
Abstract
Cross-modal perception allows olfactory information to integrate with other sensory modalities. Olfactory representations are processed by multisensory cortical pathways, where the aspects related to the haptic sensations are integrated. This complex reality allows the development of an integrated perception, where olfactory aspects compete with haptic and/or trigeminal activations. It is assumed that this integration involves both perceptive electrophysiological and metabolic/hemodynamic aspects, but there are no studies evaluating these activations in parallel. The aim of this study was to investigate brain dynamics during a cross-modal olfactory and haptic attention task, preceded by an exploratory session. The assessment of cross-modal dynamics was conducted through simultaneous electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) recording, evaluating both electrophysiological and hemodynamic activities. The study consisted of two experimental sessions and was conducted with a sample of ten healthy subjects (mean age 25 ± 5.2 years). In Session 1, the subjects were trained to manipulate 3D haptic models (HC) and to smell different scents (SC). In Session 2, the subjects were tested during an attentive olfactory task, in order to investigate the olfactory event-related potentials (OERP) N1 and late positive component (LPC), and EEG rhythms associated with fNIRS components (oxy-Hb and deoxy-Hb). The main results of this study highlighted, in Session 1, a higher fNIRS oxy-Hb response during SC and a positive correlation with the delta rhythm in the central and parietal EEG regions of interest. In Session 2, the N1 OERP showed a greater amplitude in SC. A negative correlation was found in HC for the parietal deoxy-Hb with frontal and central N1, and for the frontal oxy-Hb with N1 in the frontal, central and parietal regions of interest (ROIs). A negative correlation was also found between parietal LPC amplitude and central deoxy-Hb. The data suggest that cross-modal valence modifies the attentional olfactory response and that the dorsal cortical/metabolic pathways are involved in these responses. This can be considered an important starting point for understanding integrated cognition, as the subject could perceive in an ecological context.
Affiliation(s)
- Sara Invitto
- Human Anatomy and Neuroscience Laboratory, Department of Biological and Environmental Sciences and Technologies, University of Salento, Lecce, Italy; Laboratory of Interdisciplinary Research Applied to Medicine, University of Salento-Vito Fazzi Hospital, Lecce, Italy
- Roberta Montinaro
- Human Anatomy and Neuroscience Laboratory, Department of Biological and Environmental Sciences and Technologies, University of Salento, Lecce, Italy
- Irene Venturella
- Research Unit in Affective and Social Neuroscience, Department of Psychology, Catholic University of Milan, Milan, Italy
- Giulia Fronda
- Research Unit in Affective and Social Neuroscience, Department of Psychology, Catholic University of Milan, Milan, Italy
- Michela Balconi
- Research Unit in Affective and Social Neuroscience, Department of Psychology, Catholic University of Milan, Milan, Italy
8
Levin K. The Dance of Attention: Toward an Aesthetic Dimension of Attention-Deficit. Integr Psychol Behav Sci 2018; 52:129-151. [PMID: 29305762] [DOI: 10.1007/s12124-017-9413-7] [Indexed: 11/29/2022]
Abstract
What role does the aesthetics of bodily movement play in the understanding of attention among children diagnosed with attention-deficit/hyperactivity disorder (ADHD)? This article animates a phenomenological approach to attention and embodiment with a special focus on the relation between aesthetic or expressive bodily movement and behavioral awareness in children diagnosed with ADHD. Beyond this, however, it is argued that the aesthetic aspect of movement calls for an expansion of the phenomenological perspective. In this context Gilles Deleuze's notion of aesthetics as a "science of the sensible" is activated and discussed in relation to the phenomenological concept of perception. Empirically, the article takes its point of departure in a qualitative study conducted with a group of children with attention-deficit who practice the Afro-Brazilian martial art capoeira. Combining ethnographic and phenomenological methods, it is demonstrated that capoeira can be considered a form of aesthetic movement that offers a transition of attention-deficit into a productive force of expression that changes the notions of sensation and movement in ADHD.
Affiliation(s)
- Kasper Levin
- Faculty of Social Sciences, Department of Psychology, University of Copenhagen, Øster Farimagsgade 2A, 1353, København K, Denmark.
9
Tugac N, Gonzalez D, Noguchi K, Niechwiej-Szwedo E. The role of somatosensory input in target localization during binocular and monocular viewing while performing a high precision reaching and placement task. Exp Eye Res 2018; 183:76-83. [PMID: 30125540] [DOI: 10.1016/j.exer.2018.08.013] [Received: 03/16/2018] [Revised: 08/15/2018] [Accepted: 08/16/2018] [Indexed: 11/25/2022]
Abstract
Binocular vision provides the most accurate and precise depth information; however, many people have impairments in binocular visual function. It is possible that other sensory inputs could be used to obtain reliable depth information when binocular vision is not available. However, it is currently unknown whether depth information from another modality improves target localization in depth during action execution. Therefore, the goal of this study was to assess whether somatosensory input improves target localization during the performance of a precision placement task. Visually normal young adults (n = 15) performed a bead threading task during binocular and monocular viewing in two experimental conditions where needle location was specified by 1) vision only, or 2) vision and somatosensory input, which was provided by the non-dominant limb. Performance on the task was assessed using spatial and temporal kinematic measures. In accordance with the hypothesis, results showed that the interval spent placing the bead on the needle was significantly shorter during monocular viewing when somatosensory input was available in comparison to the vision-only condition. In contrast, results showed no evidence that somatosensory input about the needle location affects trajectory control. These findings demonstrate that the central nervous system relies predominantly on visual input during reach execution; however, somatosensory input can be used to facilitate the performance of the precision placement task.
Affiliation(s)
- Naime Tugac
- Department of Kinesiology, University of Waterloo, Waterloo, Canada
- David Gonzalez
- Department of Kinesiology, University of Waterloo, Waterloo, Canada
- Kimihiro Noguchi
- Department of Mathematics, Western Washington University, Bellingham, USA
10
Rodríguez-Martínez GA, Castillo-Parra H. Bistable perception: neural bases and usefulness in psychological research. Int J Psychol Res (Medellin) 2018; 11:63-76. [PMID: 32612780] [PMCID: PMC7110285] [DOI: 10.21500/20112084.3375] [Indexed: 11/06/2022]
Abstract
Owing to their physical characteristics, bistable images can be perceived in two different ways, with the alternative percepts associated with top-down and bottom-up modulating processes. Based on an extensive literature review, the present article aims to gather the conceptual models and the foundations of perceptual bistability. This theoretical article compiles not only notions that are intertwined with the understanding of this perceptual phenomenon, but also the diverse classifications and uses of bistable images in psychological research, along with a detailed explanation of the neural correlates that are involved in perceptual reversibility. We conclude that the use of bistable images as a paradigmatic resource in psychological research might be extensive. In addition, due to their characteristics, visual bistable stimuli have the potential to be implemented as a resource in experimental tasks that seek to understand diverse concerns linked essentially to attention, sensory, perceptual and memory processes.
Affiliation(s)
- Guillermo Andrés Rodríguez-Martínez
- Escuela de Publicidad, Universidad de Bogotá Jorge Tadeo Lozano, Bogotá, Colombia; Facultad de Psicología, Universidad de San Buenaventura de Medellín, Medellín, Colombia
- Henry Castillo-Parra
- Facultad de Psicología, Universidad de San Buenaventura de Medellín, Medellín, Colombia
11
Mikula L, Gaveau V, Pisella L, Khan AZ, Blohm G. Learned rather than online relative weighting of visual-proprioceptive sensory cues. J Neurophysiol 2018; 119:1981-1992. [PMID: 29465322] [DOI: 10.1152/jn.00338.2017] [Indexed: 01/09/2023]
Abstract
When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined by proprioceptive information as well as visual information, if available. Bayes-optimal integration posits that we utilize all information available, with greater weighting on the sense that is more reliable, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights and not on effector-dependent reliability. While the former hypothesis predicts different proprioceptive weights for left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, we would expect the same integration weights if the latter hypothesis was true. We found that the proprioceptive weights for the left and right hands were extremely consistent regardless of differences in sensory variability for the two hands as measured in two separate complementary tasks. Thus we propose that proprioceptive weights during reaching are learned across both hands, with high interindividual range but independent of each hand's specific proprioceptive variability. NEW & NOTEWORTHY How visual and proprioceptive information about the hand are integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
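The Bayes-optimal integration the authors test against has a closed form: each cue is weighted by its inverse variance, normalized to sum to one. A minimal sketch under assumed, illustrative variances (the variable names and numbers below are hypothetical, not the study's data):

```python
def cue_weights(var_vision, var_proprio):
    """Minimum-variance (Bayes-optimal) weights: each cue is weighted by the
    inverse of its variance, normalized so the two weights sum to 1."""
    inv_v, inv_p = 1.0 / var_vision, 1.0 / var_proprio
    total = inv_v + inv_p
    return inv_v / total, inv_p / total

def integrate(est_vision, est_proprio, var_vision, var_proprio):
    """Fused position estimate; its variance is lower than either cue's alone."""
    w_v, w_p = cue_weights(var_vision, var_proprio)
    fused = w_v * est_vision + w_p * est_proprio
    fused_var = 1.0 / (1.0 / var_vision + 1.0 / var_proprio)
    return fused, fused_var

# Hypothetical hand-position estimates (cm): vision is assumed more reliable
# (lower variance), so it dominates the fused estimate.
print(integrate(est_vision=10.0, est_proprio=12.0, var_vision=1.0, var_proprio=4.0))
```

The study's finding, identical proprioceptive weights for both hands despite measurably different per-hand variability, argues against these weights being computed online from each effector's reliability, and for learned, modality-specific weights instead.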
Affiliation(s)
- Laura Mikula
- Centre de Recherche en Neurosciences de Lyon, ImpAct Team, INSERM U1028, CNRS UMR 5292, Lyon 1 University, Bron Cedex, France; School of Optometry, University of Montreal, Montreal, Quebec, Canada
- Valérie Gaveau
- Centre de Recherche en Neurosciences de Lyon, ImpAct Team, INSERM U1028, CNRS UMR 5292, Lyon 1 University, Bron Cedex, France
- Laure Pisella
- Centre de Recherche en Neurosciences de Lyon, ImpAct Team, INSERM U1028, CNRS UMR 5292, Lyon 1 University, Bron Cedex, France
- Aarlenne Z Khan
- School of Optometry, University of Montreal, Montreal, Quebec, Canada
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
12
Xiao X, Dupuis-Roy N, Jiang J, Du X, Zhang M, Zhang Q. The Neural Basis of Taste-visual Modal Conflict Control in Appetitive and Aversive Gustatory Context. Neuroscience 2017; 372:154-160. [PMID: 29294344] [DOI: 10.1016/j.neuroscience.2017.12.042] [Received: 03/05/2017] [Revised: 12/13/2017] [Accepted: 12/23/2017] [Indexed: 11/30/2022]
Abstract
The functional magnetic resonance imaging (fMRI) technique was used to investigate brain activations related to conflict control in a taste-visual cross-modal pairing task. On each trial, participants had to decide whether the taste of a gustatory stimulus matched or did not match the expected taste of the food item depicted in an image. There were four conditions: Negative match (NM; sour gustatory stimulus and image of sour food), negative mismatch (NMM; sour gustatory stimulus and image of sweet food), positive match (PM; sweet gustatory stimulus and image of sweet food), positive mismatch (PMM; sweet gustatory stimulus and image of sour food). Blood oxygenation level-dependent (BOLD) contrasts between the NMM and the NM conditions revealed an increased activity in the middle frontal gyrus (MFG) (BA 6), the lingual gyrus (LG) (BA 18), and the postcentral gyrus. Furthermore, the NMM minus NM BOLD differences observed in the MFG were correlated with the NMM minus NM differences in response time. These activations were specifically associated with conflict control during the aversive gustatory stimulation. BOLD contrasts between the PMM and the PM condition revealed no significant positive activation, which supported the hypothesis that the human brain is especially sensitive to aversive stimuli. Altogether, these results suggest that the MFG is associated with the taste-visual cross-modal conflict control. A possible role of the LG as an information conflict detector at an early perceptual stage is further discussed, along with a possible involvement of the postcentral gyrus in the processing of the taste-visual cross-modal sensory contrast.
Affiliation(s)
- Xiao Xiao
- School of Public Health and Management, Chongqing Medical University, Chongqing 400016, China; Research Center for Medicine and Social Development, Chongqing Medical University, Chongqing 400016, China; Innovation Center for Social Risk Governance in Health, Chongqing Medical University, Chongqing 400016, China.
- Nicolas Dupuis-Roy
- Département de Psychologie, Université de Montréal, Montréal, Québec, Canada
- Jun Jiang
- Department of Basic Psychology, School of Psychology, Third Military Medical University, Chongqing, China
- Xue Du
- School of Education (The Key Laboratory of Psychological Diagnosis and Education Technology for Children with Special Needs), Chongqing Normal University, Chongqing, China
- Mingmin Zhang
- School of Public Health and Management, Chongqing Medical University, Chongqing 400016, China
- Qinglin Zhang
- Faculty of Psychological Science, Southwest University, Chongqing 400715, China
13
Hoffmann-Hensel SM, Freiherr J. Intramodal Olfactory Priming of Positive and Negative Odors in Humans Using Respiration-Triggered Olfactory Stimulation (RETROS). Chem Senses 2016; 41:567-78. [PMID: 27170666] [DOI: 10.1093/chemse/bjw060] [Indexed: 11/13/2022]
Abstract
Priming describes the principle of modified stimulus perception that occurs due to a previously presented stimulus. Although we have begun to understand the mechanisms of crossmodal priming, the concept of intramodal olfactory priming remains relatively unexplored. Therefore, we applied positive and negative odors using respiration-triggered olfactory stimulation (RETROS), enabling us to record the skin conductance response (SCR) and breathing data without a crossmodal cueing error and to measure reaction times (RTs) for olfactory tasks. RT, SCR, and breathing data revealed that negative odors were perceived as significantly more arousing than positive ones. In a second experiment, 2 odors were applied during consecutive respirations. Here, we observed intramodal olfactory priming effects: a negative odor preceded by a positive odor was rated as more pleasant than when the same odor was preceded by a negative odor. Additionally, a longer identification RT was found for the second compared with the first odor. We interpret this as increased "perceptual load" due to incomplete first-odor processing while the second odor was presented. Furthermore, intramodal priming can be considered a possible reason for the increase of identification RT. The use of RETROS led to these novel insights into olfactory processing beyond crossmodal interaction by providing a noncued unimodal olfactory test, and therefore RETROS can be used in the experimental design of future olfactory studies.
Affiliation(s)
- Sonja Maria Hoffmann-Hensel
- Diagnostic and Interventional Neuroradiology, RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Jessica Freiherr
- Diagnostic and Interventional Neuroradiology, RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Fraunhofer Institute for Process Engineering and Packaging IVV, Giggenhauserstr. 35, 85354 Freising, Germany
14. Maurage P, Campanella S. Experimental and clinical usefulness of crossmodal paradigms in psychiatry: an illustration from emotional processing in alcohol-dependence. Front Hum Neurosci 2013;7:394. [PMID: 23898250] [PMCID: PMC3722513] [DOI: 10.3389/fnhum.2013.00394]
Abstract
Crossmodal processing (i.e., the construction of a unified representation stemming from inputs from distinct sensory modalities) constitutes a crucial ability in humans' everyday life. It has been extensively explored at cognitive and cerebral levels during the last decade among healthy controls. Paradoxically, however, while difficulties in performing this integrative process have been suggested in a large range of psychopathological states (e.g., schizophrenia and autism), crossmodal paradigms have only rarely been used in the exploration of psychiatric populations. The main aim of the present paper is thus to underline the experimental and clinical usefulness of exploring crossmodal processes in psychiatry. We illustrate this proposal by means of recent data obtained in the crossmodal exploration of emotional alterations in alcohol-dependence. Indeed, emotional decoding impairments might have a role in the development and maintenance of alcohol-dependence, and have been extensively investigated by means of experiments using separate visual or auditory stimulations. Besides these unimodal explorations, we have recently conducted several studies using audio-visual crossmodal paradigms, which have allowed us to improve the ecological validity of the unimodal experimental designs and to offer new insights on the emotional alterations among alcohol-dependent individuals. We show how these preliminary results can be extended to develop a coherent and ambitious research program using crossmodal designs in various psychiatric populations and sensory modalities. We end the paper by underlining the various potential clinical applications and the fundamental implications raised by this emerging project.
Affiliation(s)
- Pierre Maurage
- Laboratory for Experimental Psychopathology, Faculty of Psychology, Institute of Psychology, Université Catholique de Louvain Louvain-la-Neuve, Belgium
15. The neural network sustaining crossmodal integration is impaired in alcohol-dependence: an fMRI study. Cortex 2012;49:1610-26. [PMID: 22658706] [DOI: 10.1016/j.cortex.2012.04.012]
Abstract
Introduction: Crossmodality (i.e., the integration of stimulations coming from different sensory modalities) is a crucial ability in everyday life and has been extensively explored in healthy adults. Still, it has not yet received much attention in psychiatry, and particularly in alcohol-dependence. The present study investigates the cerebral correlates of crossmodal integration deficits in alcohol-dependence to assess whether these deficits are due to the mere accumulation of unimodal impairments or rather to specific alterations in crossmodal areas.
Methods: Twenty-eight subjects [14 alcohol-dependent subjects (ADS), 14 paired controls] were scanned using fMRI while performing a categorization task on faces (F), voices (V) and face-voice pairs (FV). A subtraction contrast [FV-(F+V)] and a conjunction analysis [(FV-F) ∩ (FV-V)] isolated the brain areas specifically involved in crossmodal face-voice integration. The functional connectivity between unimodal and crossmodal areas was explored using psycho-physiological interactions (PPI).
Results: ADS presented only moderate alterations during unimodal processing. More centrally, in the subtraction contrast and conjunction analysis, they did not show any specific crossmodal brain activation, while controls presented activations in specific crossmodal areas (inferior occipital gyrus, middle frontal gyrus, superior parietal lobule). Moreover, PPI analyses showed reduced connectivity between unimodal and crossmodal areas in alcohol-dependence.
Conclusions: This first fMRI exploration of crossmodal processing in alcohol-dependence showed a specific face-voice integration deficit indexed by reduced activation of crossmodal areas and reduced connectivity in the crossmodal integration network. Using crossmodal paradigms is thus crucial to correctly evaluate the deficits presented by ADS in real-life situations.
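The two crossmodal criteria described in this abstract can be sketched numerically. The following is an illustrative sketch on synthetic voxel-wise beta maps, not the authors' analysis pipeline; all array names and values are hypothetical:

```python
import numpy as np

# Hypothetical per-condition activation estimates (beta maps) for
# face (F), voice (V), and face-voice (FV) conditions on a toy grid.
rng = np.random.default_rng(0)
shape = (4, 4, 4)                        # toy 4x4x4 "brain"
beta_F = rng.normal(1.0, 0.2, shape)
beta_V = rng.normal(1.0, 0.2, shape)
beta_FV = rng.normal(2.5, 0.2, shape)    # constructed to be superadditive

# Subtraction contrast [FV - (F + V)]: positive where the bimodal
# response exceeds the sum of the two unimodal responses.
subtraction = beta_FV - (beta_F + beta_V)

# Conjunction [(FV - F) AND (FV - V)]: voxels where the bimodal
# response exceeds each unimodal response separately.
conjunction = (beta_FV > beta_F) & (beta_FV > beta_V)

print(subtraction.mean() > 0)
print(int(conjunction.sum()), "of", conjunction.size, "voxels pass the conjunction")
```

In a real analysis these maps would come from a fitted general linear model and the comparisons would be thresholded statistically; the sketch only shows how the subtraction and conjunction criteria differ in form.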
16. Anderson SE, Chiu E, Huette S, Spivey MJ. On the temporal dynamics of language-mediated vision and vision-mediated language. Acta Psychol (Amst) 2011;137:181-9. [PMID: 20961519] [DOI: 10.1016/j.actpsy.2010.09.008]
Abstract
Recent converging evidence suggests that language and vision interact immediately in non-trivial ways, although the exact nature of this interaction is still unclear. Not only does linguistic information influence visual perception in real-time, but visual information also influences language comprehension in real-time. For example, in visual search tasks, incremental spoken delivery of the target features (e.g., "Is there a red vertical?") can increase the efficiency of conjunction search because only one feature is heard at a time. Moreover, in spoken word recognition tasks, the visual presence of an object whose name is similar to the word being spoken (e.g., a candle present when instructed to "pick up the candy") can alter the process of comprehension. Dense sampling methods, such as eye-tracking and reach-tracking, richly illustrate the nature of this interaction, providing a semi-continuous measure of the temporal dynamics of individual behavioral responses. We review a variety of studies that demonstrate how these methods are particularly promising in further elucidating the dynamic competition that takes place between underlying linguistic and visual representations in multimodal contexts, and we conclude with a discussion of the consequences that these findings have for theories of embodied cognition.
17. Parma V, Ghirardello D, Tirindelli R, Castiello U. Grasping a fruit. Hands do what flavour says. Appetite 2010;56:249-54. [PMID: 21182884] [DOI: 10.1016/j.appet.2010.12.013]
Abstract
Previous research on multisensory integration during goal-directed natural actions reported that visual, proprioceptive, auditory and orthonasal olfactory stimulation can influence motor control. In this study, we used kinematics to investigate the integration between vision and flavour perception during reach-to-grasp movements. Participants were requested to drink a sip of a flavoured solution and then grasp an object presented in central vision. The results indicate that when the objects evoked by the flavour and by the visual target were of a similar size (i.e., large or small) and evoked the same kind of hand shaping in order to be grasped (i.e., congruent condition), facilitation effects emerged. Conversely, when the object evoked by the flavour and the visual target were of different sizes and evoked different kinds of hand shaping in order to be grasped (i.e., incongruent condition), interference effects emerged. Interference effects, however, were only evident for the combination involving a large visual target and a 'small' flavour. When comparing hand kinematics between the congruent and a 'no flavour' condition (i.e., water), facilitation effects emerged in favour of the former condition. Taken together, these results indicate the contribution of complex chemosensory stimuli to the planning and execution of visually guided reach-to-grasp movements, and contribute to the current debate regarding the multisensory nature of the sensorimotor transformations underlying motor performance.
Affiliation(s)
- Valentina Parma
- Department of General Psychology, University of Padua, Via Venezia, 8, 35100 Padova, Italy
18. Eldridge M, Saltzman E, Lahav A. Seeing what you hear: Visual feedback improves pitch recognition. 2010. [DOI: 10.1080/09541440903316136]
19. Tanaka H. Generalization in motor adaptation: A computational perspective on recent developments. Jpn Psychol Res 2010. [DOI: 10.1111/j.1468-5884.2010.00430.x]
20. Winges SA, Eonta SE, Soechting JF. Does temporal asynchrony affect multimodal curvature detection? Exp Brain Res 2010;203:1-9. [PMID: 20213147] [DOI: 10.1007/s00221-010-2200-z]
Abstract
Multiple sensory modalities gather information about our surroundings to plan appropriate movements based on the properties of the environment and the objects within it. This study was designed to examine the sensitivity of visual and haptic information alone and together for detecting curvature. When both visual and haptic information were present, temporal delays in signal onset were used to determine the effect of asynchronous sensory information on the interference of vision on the haptic estimate of curvature. Even under the largest temporal delays where visual and haptic information were clearly disparate, the presentation of visual information influenced the haptic perception of curvature. The uncertainty associated with the unimodal vision condition was smaller than that in the unimodal haptic condition, regardless of whether the haptic information was procured actively or under robot assistance for curvature detection. When both visual and haptic information were available, the uncertainty was not reduced; it was equal to that of the unimodal haptic condition. The weighting of the visual and haptic information was highly variable across subjects with some subjects making judgments based largely on haptic information, while others tended to rely on visual information equally or to a larger extent than the haptic information.
Affiliation(s)
- Sara A Winges
- Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA.
21. Miller LJ, Nielsen DM, Schoen SA, Brett-Green BA. Perspectives on sensory processing disorder: a call for translational research. Front Integr Neurosci 2009;3:22. [PMID: 19826493] [PMCID: PMC2759332] [DOI: 10.3389/neuro.07.022.2009]
Abstract
This article explores the convergence of two fields, which have similar theoretical origins: a clinical field originally known as sensory integration and a branch of neuroscience that conducts research in an area also called sensory integration. Clinically, the term was used to identify a pattern of dysfunction in children and adults, as well as a related theory, assessment, and treatment method for children who have atypical responses to ordinary sensory stimulation. Currently the term for the disorder is sensory processing disorder (SPD). In neuroscience, the term sensory integration refers to converging information in the brain from one or more sensory domains. A recent subspecialty in neuroscience labeled multisensory integration (MSI) refers to the neural process that occurs when sensory input from two or more different sensory modalities converge. Understanding the specific meanings of the term sensory integration intended by the clinical and neuroscience fields and the term MSI in neuroscience is critical. A translational research approach would improve exploration of crucial research questions in both the basic science and clinical science. Refinement of the conceptual model of the disorder and the related treatment approach would help prioritize which specific hypotheses should be studied in both the clinical and neuroscience fields. The issue is how we can facilitate a translational approach between researchers in the two fields. Multidisciplinary, collaborative studies would increase knowledge of brain function and could make a significant contribution to alleviating the impairments of individuals with SPD and their families.
Affiliation(s)
- Lucy J Miller
- Sensory Processing Disorder Foundation Greenwood Village, CO, USA
22. Quian Quiroga R, Kraskov A, Koch C, Fried I. Explicit encoding of multimodal percepts by single neurons in the human brain. Curr Biol 2009;19:1308-13. [PMID: 19631538] [DOI: 10.1016/j.cub.2009.06.060]
Abstract
Different pictures of Marilyn Monroe can evoke the same percept, even if greatly modified as in Andy Warhol's famous portraits. But how does the brain recognize highly variable pictures as the same percept? Various studies have provided insights into how visual information is processed along the "ventral pathway," via both single-cell recordings in monkeys and functional imaging in humans. Interestingly, in humans, the same "concept" of Marilyn Monroe can be evoked with other stimulus modalities, for instance by hearing or reading her name. Brain imaging studies have identified cortical areas selective to voices and visual word forms. However, how visual, text, and sound information can elicit a unique percept is still largely unknown. By using presentations of pictures and of spoken and written names, we show that (1) single neurons in the human medial temporal lobe (MTL) respond selectively to representations of the same individual across different sensory modalities; (2) the degree of multimodal invariance increases along the hierarchical structure within the MTL; and (3) such neuronal representations can be generated within less than a day or two. These results demonstrate that single neurons can encode percepts in an explicit, selective, and invariant manner, even if evoked by different sensory modalities.
23. Versace R, Labeye É, Badard G, Rose M. The contents of long-term memory and the emergence of knowledge. 2009. [DOI: 10.1080/09541440801951844]
24. Tubaldi F, Ansuini C, Tirindelli R, Castiello U. The grasping side of odours. PLoS One 2008;3:e1795. [PMID: 18350137] [PMCID: PMC2266792] [DOI: 10.1371/journal.pone.0001795]
Abstract
Background: Research on multisensory integration during natural tasks such as reach-to-grasp is still in its infancy. Crossmodal links between vision, proprioception and audition have been identified, but how olfaction contributes to plan and control reach-to-grasp movements has not been decisively shown. We used kinematics to explicitly test the influence of olfactory stimuli on reach-to-grasp movements.
Methodology/Principal Findings: Subjects were requested to reach towards and grasp a small or a large visual target (i.e., precision grip, involving the opposition of index finger and thumb for a small size target and a power grip, involving the flexion of all digits around the object for a large target) in the absence or in the presence of an odour evoking either a small or a large object that if grasped would require a precision grip and a whole hand grasp, respectively. When the type of grasp evoked by the odour did not coincide with that for the visual target, interference effects were evident on the kinematics of hand shaping and the level of synergies amongst fingers decreased. When the visual target and the object evoked by the odour required the same type of grasp, facilitation emerged and the intrinsic relations amongst individual fingers were maintained.
Conclusions/Significance: This study demonstrates that olfactory information contains highly detailed information able to elicit the planning for a reach-to-grasp movement suited to interact with the evoked object. The findings offer a substantial contribution to the current debate about the multisensory nature of the sensorimotor transformations underlying grasping.
Affiliation(s)
- Federico Tubaldi
- Department of General Psychology, University of Padua, Padua, Italy
- Caterina Ansuini
- Department of General Psychology, University of Padua, Padua, Italy
- Umberto Castiello
- Department of General Psychology, University of Padua, Padua, Italy
- Department of Psychology, Royal Holloway, University of London, Egham, United Kingdom
25. Anderson B. Neglect as a disorder of prior probability. Neuropsychologia 2008;46:1566-9. [DOI: 10.1016/j.neuropsychologia.2007.12.006]
26. Dan H, Okamoto M, Wada Y, Dan I, Kohyama K. First bite for hardness judgment as haptic exploratory procedure. Physiol Behav 2007;92:601-10. [PMID: 17555776] [DOI: 10.1016/j.physbeh.2007.05.006]
Abstract
This study examines whether the modulation of biting behavior while subjects are engaged in food texture judgment can be explained as an intra-oral exploratory procedure optimized for recognizing a specified sensory attribute. Subjects were asked to compare two cheese samples for "the force required to penetrate the sample with the molar teeth" (the definition for "hardness" used in this study). Based on this definition, we hypothesize that the subjects targeted the first peak of the bite time-force profile (i.e. the intra-oral phenomenon of the initial fracture) as an essential property for judgment. We observed significant elongation of the first peak in the judgmental biting, compared to the biting without judgment, for all subjects. Shortening of the second peak (teeth-to-teeth contact) duration and decrease of the second peak force were also observed for all subjects. These active biting modulations suggested that the first peak was targeted for judgment, whereas the second peak was not targeted. The sample with greater maximum force or time-integral of the bite force at the first peak was also judged as requiring greater force; these agreements were statistically significant. This result confirmed that the parameters related to the first peak were targeted as judgmental cues. We concluded that the biting behavior in hardness judgment functions as the exploratory procedure and was optimized for encoding the target sensory properties.
Affiliation(s)
- Haruka Dan
- National Food Research Institute, 2-1-12 Kannondai, Tsukuba 305-8642, Japan.
27. van Atteveldt NM, Formisano E, Goebel R, Blomert L. Top–down task effects overrule automatic multisensory responses to letter–sound pairs in auditory association cortex. Neuroimage 2007;36:1345-60. [PMID: 17513133] [DOI: 10.1016/j.neuroimage.2007.03.065]
Abstract
In alphabetic scripts, letters and speech sounds are the basic elements of correspondence between spoken and written language. In two previous fMRI studies, we showed that the response to speech sounds in the auditory association cortex was enhanced by congruent letters and suppressed by incongruent letters. Interestingly, temporal synchrony was critical for this congruency effect to occur. We interpreted these results as a neural correlate of letter-sound integration, driven by the learned congruency of letter-sound pairs. The present event-related fMRI study was designed to address two questions that could not directly be addressed in the previous studies, due to their passive nature and blocked design. Specifically: (1) to examine whether the enhancement/suppression of auditory cortex are truly multisensory integration effects or can be explained by different attention levels during congruent/incongruent blocks, and (2) to examine the effect of top-down task demands on the neural integration of letter-sound pairs. Firstly, we replicated the previous results with random stimulus presentation, which rules out an explanation of the congruency effect in auditory cortex solely in terms of attention. Secondly, we showed that the effects of congruency and temporal asynchrony in the auditory association cortex were absent during active matching. This indicates that multisensory responses in the auditory association cortex heavily depend on task demands. Without task instructions, the auditory cortex is modulated to favor the processing of congruent and synchronous information. This modulation is overruled during explicit matching when all audiovisual stimuli are equally relevant, independent of congruency and temporal relation.
Affiliation(s)
- Nienke M van Atteveldt
- University of Maastricht, Faculty of Psychology, Department of Cognitive Neuroscience, 6200 MD Maastricht, The Netherlands.
28. Hauser PC, Dye MWG, Boutla M, Green CS, Bavelier D. Deafness and visual enumeration: not all aspects of attention are modified by deafness. Brain Res 2007;1153:178-87. [PMID: 17467671] [PMCID: PMC1934506] [DOI: 10.1016/j.brainres.2007.03.065]
Abstract
Previous studies have demonstrated that early deafness causes enhancements in peripheral visual attention. Here, we ask if this cross-modal plasticity of visual attention is accompanied by an increase in the number of objects that can be grasped at once. In a first experiment using an enumeration task, Deaf adult native signers and hearing non-signers performed comparably, suggesting that deafness does not enhance the number of objects one can attend to simultaneously. In a second experiment using the Multiple Object Tracking task, Deaf adult native signers and hearing non-signers also performed comparably when required to monitor several, distinct, moving targets among moving distractors. The results of these experiments suggest that deafness does not significantly alter the ability to allocate attention to several objects at once. Thus, early deafness does not enhance all facets of visual attention, but rather its effects are quite specific.
Affiliation(s)
- Peter C Hauser
- Department of Research and Teacher Education, National Technical Institute of the Deaf, Rochester Institute of Technology, Rochester, NY 14623-5604, USA.