1
Peiso JR, Palmer SE, Shevell SK. Perceptual Resolution of Ambiguity: Can Tuned, Divisive Normalization Account for both Interocular Similarity Grouping and Difference Enhancement? bioRxiv 2024:2024.04.01.587646. [PMID: 38617235] [PMCID: PMC11014560] [DOI: 10.1101/2024.04.01.587646]
Abstract
Our visual system usually provides a unique and functional representation of the external world. At times, however, the visual system has more than one compelling interpretation of the same retinal stimulus; in this case, neural populations compete for perceptual dominance to resolve ambiguity. Spatial and temporal context can guide perceptual experience. Recent evidence shows that ambiguous retinal stimuli are sometimes resolved by enhancing either similarity or differences among multiple percepts. Divisive normalization is a canonical neural computation that enables context-dependent sensory processing by attenuating a neuron's response in proportion to the pooled activity of other neurons. Experiments here show that divisive normalization can account for perceptual representations of either similarity enhancement (so-called grouping) or difference enhancement, offering a unified framework for opposite perceptual outcomes.
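The divisive-normalization computation named in this abstract has a standard form, R_i = d_i^n / (sigma^n + sum_j w_ij d_j^n), where the pooling weights w_ij determine how "tuned" the suppression is. A minimal numerical sketch (the two-unit setup and all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def divisive_normalization(drive, weights, sigma=1.0, n=2.0):
    """Canonical divisive normalization: each unit's driven response is
    divided by a semisaturation constant plus a weighted pool of the
    population's activity (the weights set how tuned the pool is)."""
    drive = np.asarray(drive, dtype=float)
    pool = weights @ (drive ** n)   # normalization pool, tuned via weights
    return (drive ** n) / (sigma ** n + pool)

# Two competing units, e.g. responses driven by the two eyes' stimuli
drive = np.array([1.0, 0.8])
untuned = np.full((2, 2), 0.5)      # each unit is suppressed by both units
tuned = np.eye(2) * 0.5             # each unit is suppressed only by itself
r_untuned = divisive_normalization(drive, untuned)
r_tuned = divisive_normalization(drive, tuned)
# Cross-unit (untuned) pooling suppresses both units more strongly than
# self-only (tuned) pooling does.
```

Changing the structure of the pooling weights changes how strongly the competing representations suppress one another, which is the kind of context-dependent resolution the abstract describes.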
Affiliation(s)
- Jaelyn R Peiso
- University of Chicago, Department of Psychology, Chicago, IL
- Stephanie E Palmer
- University of Chicago, Department of Organismal Biology & Anatomy, Department of Physics, Chicago, IL
2
Cai B, Tang X, Wang A, Zhang M. Semantically congruent bimodal presentation modulates cognitive control over attentional guidance by working memory. Mem Cognit 2024:10.3758/s13421-024-01521-y. [PMID: 38308161] [DOI: 10.3758/s13421-024-01521-y] [Accepted: 01/13/2024]
Abstract
Although previous studies have well established that audiovisual enhancement benefits working memory and selective attention, there remains an open question about its influence on attentional guidance by working memory. To address this issue, the present study adopted a dual-task paradigm combining a working memory task and a visual search task, in which the content of working memory was presented in audiovisual or visual modalities. Given the importance of search speed in memory-driven attentional suppression, we divided participants into two groups based on their reaction time (RT) in neutral trials and examined whether audiovisual enhancement in attentional suppression was modulated by search speed. The results showed that the slow search group exhibited a robust memory-driven attentional suppression effect, and the suppression effect started earlier and was greater in magnitude in the audiovisual condition than in the visual-only condition. In the fast search group, however, the suppression effect occurred only in trials with longer RTs in the visual-only condition, and its temporal dynamics were selectively improved in the audiovisual condition. Furthermore, audiovisual enhancement of memory-driven attention evolved over time. These findings suggest that semantically congruent bimodal presentation can progressively facilitate the strength and temporal dynamics of memory-driven attentional suppression, and that search speed plays an important role in this process. This may be due to a synergistic effect between the multisensory working memory representation and the top-down suppression mechanism. The present study demonstrates the flexible role of audiovisual enhancement in cognitive control over memory-driven attention.
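The grouping step described above, dividing participants by their neutral-trial RT, can be sketched in a few lines. A hypothetical reconstruction (a median split is assumed here; the abstract does not specify the cutoff rule, and the participant data are invented):

```python
import statistics

def split_by_search_speed(neutral_rts):
    """Split participants into fast and slow search groups via a median
    split on mean neutral-trial RT (illustrative reconstruction of the
    grouping described in the abstract, not the authors' code)."""
    means = {pid: statistics.mean(rts) for pid, rts in neutral_rts.items()}
    cutoff = statistics.median(means.values())
    fast = [pid for pid, m in means.items() if m <= cutoff]
    slow = [pid for pid, m in means.items() if m > cutoff]
    return fast, slow

# Hypothetical per-participant neutral-trial RTs in milliseconds
rts = {"P1": [520, 540], "P2": [700, 720], "P3": [480, 500], "P4": [810, 790]}
fast, slow = split_by_search_speed(rts)
```

With these invented values, P1 and P3 fall in the fast group and P2 and P4 in the slow group; the subsequent suppression analyses are then run separately per group.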
Affiliation(s)
- Biye Cai
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
3
Zhao S, Zhou Y, Ma F, Xie J, Feng C, Feng W. The dissociation of semantically congruent and incongruent cross-modal effects on the visual attentional blink. Front Neurosci 2023; 17:1295010. [PMID: 38161792] [PMCID: PMC10755906] [DOI: 10.3389/fnins.2023.1295010] [Received: 09/15/2023] [Accepted: 11/29/2023]
Abstract
Introduction: Recent studies have found that the sound-induced alleviation of visual attentional blink, a well-known phenomenon exemplifying the beneficial influence of multisensory integration on time-based attention, was larger when that sound was semantically congruent relative to incongruent with the second visual target (T2). Although such an audiovisual congruency effect has been attributed mainly to the semantic conflict carried by the incongruent sound restraining that sound from facilitating T2 processing, it is still unclear whether the integrated semantic information carried by the congruent sound benefits T2 processing.
Methods: To dissociate the congruence-induced benefit and incongruence-induced reduction in the alleviation of visual attentional blink at the behavioral and neural levels, the present study combined behavioral measures and event-related potential (ERP) recordings in a visual attentional blink task wherein the T2-accompanying sound, when delivered, could be semantically neutral in addition to congruent or incongruent with respect to T2.
Results: The behavioral data clearly showed that compared to the neutral sound, the congruent sound improved T2 discrimination during the blink to a higher degree while the incongruent sound improved it to a lesser degree. The T2-locked ERP data revealed that the early occipital cross-modal N195 component (192-228 ms after T2 onset) was uniquely larger in the congruent-sound condition than in the neutral-sound and incongruent-sound conditions, whereas the late parietal cross-modal N440 component (400-500 ms) was prominent only in the incongruent-sound condition.
Discussion: These findings provide strong evidence that the modulating effect of audiovisual semantic congruency on the sound-induced alleviation of visual attentional blink contains not only a late incongruence-induced cost but also an early congruence-induced benefit, thereby demonstrating for the first time an unequivocal congruent-sound-induced benefit in alleviating the limitation of time-based visual attention.
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Yuxin Zhou
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Fangfang Ma
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Jimei Xie
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
4
Song T, Xu L, Peng Z, Wang L, Dai C, Xu M, Shao Y, Wang Y, Li S. Total sleep deprivation impairs visual selective attention and triggers a compensatory effect: evidence from event-related potentials. Cogn Neurodyn 2023; 17:621-631. [PMID: 37265652] [PMCID: PMC10229502] [DOI: 10.1007/s11571-022-09861-8] [Received: 02/01/2022] [Revised: 07/10/2022] [Accepted: 07/21/2022]
Abstract
Many studies have demonstrated the impairment of sustained attention due to total sleep deprivation (TSD). However, it remains unclear whether and how TSD affects the processing of visual selective attention. In the current study, 24 volunteers performed a visual search task before and after 36 h of TSD while spontaneous electroencephalography was recorded. Paired-sample t-tests of behavioral performance revealed that, compared with baseline, participants showed lower accuracy and higher response-time variability in the visual search task after TSD. Analysis of the event-related potentials (ERPs) showed that the mean amplitude of the N2-posterior-contralateral (N2pc) difference wave after TSD was less negative than that at baseline, and the mean amplitude of P3 after TSD was more positive than that at baseline. Our findings suggest that TSD significantly attenuates attentional direction/orientation processing and triggers a compensatory effect in the parietal brain that partially offsets the impairment. These findings provide new evidence and improve our understanding of the effects of sleep loss.
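The N2pc measure reported here is conventionally computed as a contralateral-minus-ipsilateral difference wave at posterior electrodes, summarized as the mean amplitude in a post-target window. A sketch on synthetic data (the waveform shape, window, and attenuation factor are illustrative, not the study's values):

```python
import numpy as np

def n2pc(contra, ipsi, times, window=(0.18, 0.28)):
    """N2pc difference wave: contralateral minus ipsilateral ERP at
    posterior electrodes (e.g., PO7/PO8), summarized as the mean
    amplitude inside a post-target time window (seconds)."""
    diff = np.asarray(contra) - np.asarray(ipsi)
    mask = (times >= window[0]) & (times <= window[1])
    return diff, diff[mask].mean()

# Synthetic 1 kHz epoch from -0.1 to 0.5 s with a negative posterior
# deflection peaking near 230 ms (amplitudes in microvolts)
times = np.arange(-0.1, 0.5, 0.001)
ipsi = np.zeros_like(times)
contra = -2.0 * np.exp(-((times - 0.23) ** 2) / (2 * 0.03 ** 2))
_, baseline_n2pc = n2pc(contra, ipsi, times)

# A "less negative" N2pc after TSD corresponds to an attenuated
# contralateral response (the 0.5 factor is purely illustrative)
_, tsd_n2pc = n2pc(0.5 * contra, ipsi, times)
```

In this toy epoch the post-TSD mean amplitude is still negative but closer to zero than baseline, mirroring the attenuation pattern the abstract reports.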
Affiliation(s)
- Tao Song
- School of Psychology, Beijing Sport University, Beijing, China
- Lin Xu
- School of Psychology, Beijing Sport University, Beijing, China
- Ziyi Peng
- School of Psychology, Beijing Sport University, Beijing, China
- Letong Wang
- School of Psychology, Beijing Sport University, Beijing, China
- Cimin Dai
- School of Psychology, Beijing Sport University, Beijing, China
- Mengmeng Xu
- School of Psychology, Beijing Sport University, Beijing, China
- Yongcong Shao
- School of Psychology, Beijing Sport University, Beijing, China
- Yi Wang
- Department of Physical Education, Renmin University of China, Beijing, China
- School of Life Science and Technology, Harbin Institute of Technology, Harbin, China
- Shijun Li
- Department of Radiology, First Medical Center, Chinese PLA General Hospital, Beijing, China
5
Simon-Martinez C, Antoniou MP, Bouthour W, Bavelier D, Levi D, Backus BT, Dornbos B, Blaha JJ, Kropp M, Müller H, Murray M, Thumann G, Steffen H, Matusz PJ. Stereoptic serious games as a visual rehabilitation tool for individuals with a residual amblyopia (AMBER trial): a protocol for a crossover randomized controlled trial. BMC Ophthalmol 2023; 23:220. [PMID: 37198558] [DOI: 10.1186/s12886-023-02944-y] [Received: 01/18/2023] [Accepted: 04/25/2023]
Abstract
BACKGROUND: Amblyopia is the most common developmental vision disorder in children. The initial treatment consists of refractive correction; when this is insufficient, occlusion therapy may further improve visual acuity. However, the challenges and compliance issues associated with occlusion therapy may result in treatment failure and residual amblyopia. Virtual reality (VR) games developed to improve visual function have shown positive preliminary results. The aim of this study is to determine the efficacy of these games in improving vision, attention, and motor skills in patients with residual amblyopia, and to identify brain-related changes. We hypothesize that VR-based training with the suggested ingredients (3D cues and rich feedback), combined with increasing difficulty levels and the use of various games in a home-based environment, is crucial for the efficacy of vision recovery and may be particularly effective in children.
METHODS: The AMBER study is a randomized, crossover, controlled trial designed to assess the effect of binocular stimulation (VR-based stereoptic serious games) on vision, selective attention, and motor control skills in individuals with residual amblyopia (n = 30, 6-35 years of age), compared to refractive correction. Additionally, they will be compared to a control group of age-matched healthy individuals (n = 30) to account for the unique benefit of VR-based serious games. All participants will play serious games 30 min per day, 5 days per week, for 8 weeks. The games are delivered with the Vivid Vision Home software. The amblyopic cohort will receive both treatments in a randomized order according to the type of amblyopia, while the control group will receive only the VR-based stereoscopic serious games. The primary outcome is visual acuity in the amblyopic eye. Secondary outcomes include stereoacuity, functional vision, cortical visual responses, selective attention, and motor control. The outcomes will be measured before and after each treatment, with an 8-week follow-up.
DISCUSSION: The VR-based games used in this study have been conceived to deliver binocular visual stimulation tailored to the individual visual needs of the patient, which will potentially result in improved basic and functional vision skills as well as visual attention and motor control skills.
TRIAL REGISTRATION: This protocol is registered on ClinicalTrials.gov (identifier: NCT05114252) and in the Swiss National Clinical Trials Portal (identifier: SNCTP000005024).
Affiliation(s)
- Cristina Simon-Martinez
- University of Applied Sciences Western Switzerland (HES-SO) Valais-Wallis, Rue de Technopole 3, 3960, Sierre, Switzerland
- Department of Ophthalmology, University Hospitals of Geneva, Geneva, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Sion, Switzerland
- Experimental Ophthalmology, University of Geneva, Geneva, Switzerland
- Maria-Paraskevi Antoniou
- University of Applied Sciences Western Switzerland (HES-SO) Valais-Wallis, Rue de Technopole 3, 3960, Sierre, Switzerland
- Department of Ophthalmology, University Hospitals of Geneva, Geneva, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Sion, Switzerland
- Experimental Ophthalmology, University of Geneva, Geneva, Switzerland
- Walid Bouthour
- Department of Ophthalmology, University Hospitals of Geneva, Geneva, Switzerland
- Experimental Ophthalmology, University of Geneva, Geneva, Switzerland
- Daphne Bavelier
- Faculty of Psychology and Education Sciences, University of Geneva, Geneva, Switzerland
- Dennis Levi
- Herbert Wertheim School of Optometry & Vision Science, Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA, USA
- Benjamin T Backus
- Vivid Vision, Inc, 424 Treat Ave., Ste B, San Francisco, CA, 94110, USA
- Brian Dornbos
- Vivid Vision, Inc, 424 Treat Ave., Ste B, San Francisco, CA, 94110, USA
- James J Blaha
- Vivid Vision, Inc, 424 Treat Ave., Ste B, San Francisco, CA, 94110, USA
- Martina Kropp
- Department of Ophthalmology, University Hospitals of Geneva, Geneva, Switzerland
- Experimental Ophthalmology, University of Geneva, Geneva, Switzerland
- Henning Müller
- University of Applied Sciences Western Switzerland (HES-SO) Valais-Wallis, Rue de Technopole 3, 3960, Sierre, Switzerland
- Micah Murray
- The Sense Innovation and Research Center, Lausanne and Sion, Sion, Switzerland
- Institute of Health Sciences, School of Health Sciences, HES-SO Valais-Wallis, Sion, Switzerland
- Laboratory for Investigative Neurophysiology, Department of Radiology, Lausanne University Hospital, University of Lausanne (CHUV-UNIL), Lausanne, Switzerland
- Gabriele Thumann
- Department of Ophthalmology, University Hospitals of Geneva, Geneva, Switzerland
- Experimental Ophthalmology, University of Geneva, Geneva, Switzerland
- Heimo Steffen
- Department of Ophthalmology, University Hospitals of Geneva, Geneva, Switzerland
- Experimental Ophthalmology, University of Geneva, Geneva, Switzerland
- Pawel J Matusz
- University of Applied Sciences Western Switzerland (HES-SO) Valais-Wallis, Rue de Technopole 3, 3960, Sierre, Switzerland
- Department of Ophthalmology, University Hospitals of Geneva, Geneva, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Sion, Switzerland
- Experimental Ophthalmology, University of Geneva, Geneva, Switzerland
- Institute of Health Sciences, School of Health Sciences, HES-SO Valais-Wallis, Sion, Switzerland
- Laboratory for Investigative Neurophysiology, Department of Radiology, Lausanne University Hospital, University of Lausanne (CHUV-UNIL), Lausanne, Switzerland
- Department of Hearing & Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
6
Nambiar K, Bhargava P. An Exploration of the Effects of Cross-Modal Tasks on Selective Attention. Behav Sci (Basel) 2023; 13:bs13010051. [PMID: 36661623] [PMCID: PMC9854760] [DOI: 10.3390/bs13010051] [Received: 10/18/2022] [Revised: 12/23/2022] [Accepted: 12/31/2022]
Abstract
Successful performance of a task relies on selectively attending to the target while ignoring distractions. Studies on perceptual load theory (PLT), conducted with independent tasks in the visual and auditory modalities, have shown that if a task is low-load, both the target and distractors are processed; if the task is high-load, distractors are not processed. The current study expands these findings by considering the effects of cross-modality (target and distractor from separate modalities) and congruency (similarity of target and distractor) on selective attention, using a word-identification task. The analysis covered response time, accuracy rates, distractor congruency, and subjective reports of load. In contrast to past studies on PLT, the results of the current study show that cross-modal distractor congruency had a significant effect on selective attention, whereas load had no effect. This study demonstrates that subjective measurement of load is important when studying perceptual load and selective attention.
7
Exploring the effectiveness of auditory, visual, and audio-visual sensory cues in a multiple object tracking environment. Atten Percept Psychophys 2022; 84:1611-1624. [PMID: 35610410] [PMCID: PMC9232473] [DOI: 10.3758/s13414-022-02492-5] [Accepted: 04/14/2022]
Abstract
Maintaining object correspondence among multiple moving objects is an essential task of the perceptual system in many everyday activities. A substantial body of research has confirmed that observers are able to track multiple target objects amongst identical distractors based only on their spatiotemporal information. However, naturalistic tasks typically involve the integration of information from more than one modality, and there is limited research investigating whether auditory and audio-visual cues improve tracking. In two experiments, we asked participants to track either five target objects, or three versus five target objects, amongst indistinguishable distractor objects for 14 s. During the tracking interval, the target objects occasionally bounced against the boundary of a centralised orange circle. A visual cue, an auditory cue, neither, or both coincided with these collisions. Following the motion interval, participants were asked to indicate all target objects. Across both experiments and both set sizes, our results indicated that visual and auditory cues increased tracking accuracy, although visual cues were more effective than auditory cues. Audio-visual cues, however, did not increase tracking performance beyond the level of purely visual cues in either the high- or low-load condition. We discuss the theoretical implications of our findings for multiple object tracking as well as for the principles of multisensory integration.