1. Shen L, Lu X, Wang Y, Jiang Y. Audiovisual correspondence facilitates the visual search for biological motion. Psychon Bull Rev 2023; 30:2272-2281. PMID: 37231177; PMCID: PMC10728268; DOI: 10.3758/s13423-023-02308-z.
Abstract
Hearing synchronous sounds may facilitate the visual search for the concurrently changed visual targets. Evidence for this audiovisual attentional facilitation effect mainly comes from studies using artificial stimuli with relatively simple temporal dynamics, indicating a stimulus-driven mechanism whereby synchronous audiovisual cues create a salient object to capture attention. Here, we investigated the crossmodal attentional facilitation effect on biological motion (BM), a natural, biologically significant stimulus with complex and unique dynamic profiles. We found that listening to temporally congruent sounds, compared with incongruent sounds, enhanced the visual search for BM targets. More intriguingly, such a facilitation effect requires the presence of distinctive local motion cues (especially the accelerations in feet movement) independent of the global BM configuration, suggesting a crossmodal mechanism triggered by specific biological features to enhance the salience of BM signals. These findings provide novel insights into how audiovisual integration boosts attention to biologically relevant motion stimuli and extend the function of a proposed life detection system driven by local kinematics of BM to multisensory life motion perception.
Affiliation(s)
- Li Shen
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Chinese Institute for Brain Research, Beijing, 102206, China
- Xiqian Lu
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Chinese Institute for Brain Research, Beijing, 102206, China
- Ying Wang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Chinese Institute for Brain Research, Beijing, 102206, China
- Yi Jiang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Chinese Institute for Brain Research, Beijing, 102206, China
2. Meyerhoff HS, Gehrer NA, Frings C. The Beep-Speed Illusion Cannot Be Explained With a Simple Selection Bias. Exp Psychol 2023; 70:249-256. PMID: 38105748; DOI: 10.1027/1618-3169/a000594.
Abstract
An object appears to move at a higher speed than another, equally fast object when brief nonspatial tones coincide with its changes in motion direction. We refer to this phenomenon as the beep-speed illusion (Meyerhoff et al., 2022, Cognition, 219, 104978). The origin of this illusion is unclear; however, attentional explanations and potential biases in response behavior appear to be plausible candidates. In this report, we test a simple bias explanation that emerges from the way the dependent variable is assessed. Because participants have to indicate the faster of the two objects, they might always indicate the audio-visually synchronized object in situations of perceptual uncertainty. Such a response strategy could explain the observed shift in perceived speed. We therefore probed the magnitude of the beep-speed illusion when participants indicated either the object that appeared to move faster or the object that appeared to move slower. If a simple selection bias explained the beep-speed illusion, the response pattern should invert under the instruction to indicate the slower object. However, contrary to this bias hypothesis, the illusion emerged indistinguishably under both instructions. Therefore, simple selection biases cannot explain the beep-speed illusion.
Affiliation(s)
- Hauke S Meyerhoff
- Department of Psychology, University of Erfurt, Germany
- Cybermedia Lab, Leibniz-Institut für Wissensmedien, Tübingen, Germany
- Nina A Gehrer
- Department of Psychology, University of Tübingen, Germany
3. Exploring the effectiveness of auditory, visual, and audio-visual sensory cues in a multiple object tracking environment. Atten Percept Psychophys 2022; 84:1611-1624. PMID: 35610410; PMCID: PMC9232473; DOI: 10.3758/s13414-022-02492-5.
Abstract
Maintaining object correspondence among multiple moving objects is an essential task of the perceptual system in many everyday life activities. A substantial body of research has confirmed that observers are able to track multiple target objects amongst identical distractors based only on their spatiotemporal information. However, naturalistic tasks typically involve the integration of information from more than one modality, and there is limited research investigating whether auditory and audio-visual cues improve tracking. In two experiments, we asked participants to track either five target objects or three versus five target objects amongst similarly indistinguishable distractor objects for 14 s. During the tracking interval, the target objects occasionally bounced against the boundary of a centralised orange circle. A visual cue, an auditory cue, neither, or both coincided with these collisions. Following the motion interval, participants were asked to indicate all target objects. Across both experiments and both set sizes, visual and auditory cues increased tracking accuracy, although visual cues were more effective than auditory cues. Audio-visual cues, however, did not increase tracking performance beyond the level of purely visual cues in either the high- or the low-load condition. We discuss the theoretical implications of our findings for multiple object tracking as well as for the principles of multisensory integration.
4. Meyerhoff HS, Gehrer NA, Merz S, Frings C. The beep-speed illusion: Non-spatial tones increase perceived speed of visual objects in a forced-choice paradigm. Cognition 2021; 219:104978. PMID: 34864524; DOI: 10.1016/j.cognition.2021.104978.
Abstract
We introduce a new audio-visual illusion revealing the interplay between audio-visual integration and selective visual attention. This illusion involves two simultaneously moving objects that occasionally change their motion trajectory, but only the direction changes of one object are accompanied by spatially uninformative tones. By measuring the point of subjective equality in a forced-choice paradigm, we observed a selective increase in the perceived speed of the audio-visually synchronized object. This illusory increase persisted when eye movements were prevented. Temporally matched color changes of the synchronized object also increased its perceived speed. Yet, color changes of a surrounding frame instead of tones had no effect on perceived speed, ruling out simple alertness explanations. Thus, in contrast to coinciding tones, visual coincidences elicit illusory increases in perceived speed only when the coincidence provides spatial information. Taken together, our pattern of results suggests that audio-visual synchrony attracts visual attention toward the coinciding visual object, leading to an increase in perceived speed and shedding new light on the interplay between attention and multisensory feature integration. We discuss potential limitations, such as the choice of paradigm, and outline prospective research questions to further investigate the effect of audio-visual integration on perceived object speed.
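The point of subjective equality (PSE) used here is the stimulus level at which the comparison object is judged faster on 50% of trials. As an illustrative sketch only (not the authors' analysis code), a cumulative-normal psychometric function can be fitted to forced-choice data and the PSE read off as the fitted mean; the response proportions and grid ranges below are invented:

```python
from statistics import NormalDist

# Hypothetical 2AFC data: speed ratio of the comparison object and the
# proportion of trials on which it was judged "faster" than the standard.
speed_ratio = [0.8, 0.9, 1.0, 1.1, 1.2]
p_faster = [0.05, 0.20, 0.55, 0.85, 0.95]

def fit_pse(x, p):
    """Least-squares grid fit of a cumulative-normal psychometric function.

    Returns (mu, sigma); mu is the PSE, the stimulus level at which the
    fitted curve crosses 50% "faster" responses.
    """
    best_err, best_mu, best_sigma = float("inf"), None, None
    for mu in [0.80 + 0.005 * i for i in range(81)]:          # 0.80 .. 1.20
        for sigma in [0.05 + 0.005 * j for j in range(51)]:   # 0.05 .. 0.30
            cdf = NormalDist(mu, sigma).cdf
            err = sum((cdf(xi) - pi) ** 2 for xi, pi in zip(x, p))
            if err < best_err:
                best_err, best_mu, best_sigma = err, mu, sigma
    return best_mu, best_sigma

pse, sigma = fit_pse(speed_ratio, p_faster)
```

A PSE below 1.0 for the unsynchronized comparison would mean the synchronized object is perceived as faster at physically equal speeds, which is how a shift in perceived speed is quantified in such paradigms.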
5. Nazaré CJ, Oliveira AM. Effects of Audiovisual Presentations on Visual Localization Errors: One or Several Multisensory Mechanisms? Multisens Res 2021; 34:1-35. PMID: 33882452; DOI: 10.1163/22134808-bja10048.
Abstract
The present study examines the extent to which temporal and spatial properties of sound modulate visual motion processing in spatial localization tasks. Participants were asked to locate the place at which a moving visual target unexpectedly vanished. Across different tasks, accompanying sounds were factorially varied within subjects as to their onset and offset times and/or positions relative to visual motion. Sound onset had no effect on the localization error. Sound offset was shown to modulate the perceived visual offset location, both for temporal and spatial disparities. This modulation did not conform to attraction toward the timing or location of the sounds but, demonstrably in the case of temporal disparities, to bimodal enhancement instead. Favorable indications of a contextual effect of audiovisual presentations on interspersed visual-only trials were also found. The short sound-leading offset asynchrony had benefits equivalent to audiovisual offset synchrony, suggestive of the involvement of early-level mechanisms, constrained by a temporal window, under these conditions. Yet, we tentatively hypothesize that the results as a whole, and how they compare with previous studies, require the contribution of additional mechanisms, including learned detection of auditory-visual associations and cross-sensory spread of endogenous attention.
Affiliation(s)
- Cristina Jordão Nazaré
- Instituto Politécnico de Coimbra, ESTESC - Coimbra Health School, Audiologia, Coimbra, Portugal
6. Meyerhoff HS, Gehrer NA. Visuo-perceptual capabilities predict sensitivity for coinciding auditory and visual transients in multi-element displays. PLoS One 2017; 12:e0183723. PMID: 28902903; PMCID: PMC5597177; DOI: 10.1371/journal.pone.0183723.
Abstract
In order to obtain a coherent representation of the outside world, auditory and visual information are integrated during human information processing. There is remarkable variance among observers in the capability to integrate auditory and visual information. Here, we propose that visuo-perceptual capabilities predict detection performance for audiovisually coinciding transients in multi-element displays due to severe capacity limitations in audiovisual integration. In the reported experiment, we employed an individual differences approach to investigate this hypothesis. To this end, we measured performance in a useful-field-of-view task that captures detection performance for briefly presented stimuli across a large perceptual field. Furthermore, we measured sensitivity for visual direction changes that coincide with tones within the same participants. Our results show that individual differences in visuo-perceptual capabilities predicted sensitivity for the presence of audiovisually synchronous events among competing visual stimuli. To ensure that this correlation does not stem from superordinate factors, we also tested performance in an unrelated working memory task. Performance in this task was independent of sensitivity for the presence of audiovisually synchronous events. Our findings strengthen the proposed link between visuo-perceptual capabilities and audiovisual integration. The results also suggest that basic visuo-perceptual capabilities provide the basis for the subsequent integration of auditory and visual information.
Affiliation(s)
- Nina A. Gehrer
- Department of Psychology, University of Tübingen, Tübingen, Germany
7. Nielsen T. Microdream neurophenomenology. Neurosci Conscious 2017; 2017:nix001. PMID: 30042836; PMCID: PMC6007184; DOI: 10.1093/nc/nix001.
Abstract
Nightly transitions into sleep are usually uneventful and transpire in the blink of an eye. But in the laboratory these transitions afford a unique view of how experience is transformed from the perceptually grounded consciousness of wakefulness to the hallucinatory simulations of dreaming. The present review considers imagery in the sleep-onset transition, "microdreams" in particular, as an alternative object of study to dreaming as traditionally studied in the sleep lab. A focus on microdream phenomenology has thus far proven fruitful in preliminary efforts to (i) develop a classification for dreaming's core phenomenology (the "oneiragogic spectrum"), (ii) establish a structure for assessing dreaming's multiple memory inputs ("multi-temporal memory sources"), (iii) further Silberer's project for classifying sleep-onset images in relation to waking cognition by revealing two new imagery types ("autosensory imagery," "exosensory imagery"), and (iv) embed a potential understanding of microdreaming processes in a larger explanatory framework ("multisensory integration approach"). Such efforts may help resolve outstanding questions about dream neurophysiology and dreaming's role in memory consolidation during sleep, but may also advance discovery in the neuroscience of consciousness more broadly.
Affiliation(s)
- Tore Nielsen
- Dream & Nightmare Laboratory, Center for Advanced Research in Sleep Medicine, Hôpital du Sacré-Coeur de Montréal and Department of Psychiatry, University of Montreal, Canada
8. Li Q, Yang H, Sun F, Wu J. Spatiotemporal Relationships among Audiovisual Stimuli Modulate Auditory Facilitation of Visual Target Discrimination. Perception 2017; 44:232-242. PMID: 26562250; DOI: 10.1068/p7846.
Abstract
Sensory information is multimodal; through audiovisual interaction, task-irrelevant auditory stimuli tend to speed up response times and increase visual perception accuracy. However, the mechanisms underlying these performance enhancements have remained unclear. We hypothesize that task-irrelevant auditory stimuli might provide reliable temporal and spatial cues for visual target discrimination and behavioral response enhancement. Using signal detection theory, the present study investigated the effects of spatiotemporal relationships on auditory facilitation of visual target discrimination. Three experiments were conducted in which an auditory stimulus maintained reliable temporal and/or spatial relationships with visual target stimuli. Results showed that perceptual sensitivity (d') to visual target stimuli was enhanced only when a task-irrelevant auditory stimulus maintained reliable spatiotemporal relationships with a visual target stimulus. When only reliable spatial or only reliable temporal information was available, perceptual sensitivity was not enhanced. These results suggest that reliable spatiotemporal relationships between visual and auditory signals are required for audiovisual integration during a visual discrimination task, most likely due to a spread of attention. They also indicate that auditory facilitation of visual target discrimination arises from late-stage cognitive processes rather than early-stage sensory processes.
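The sensitivity measure d' used in signal detection theory separates discriminability from response bias: it is the difference between the z-transformed hit and false-alarm rates. A minimal sketch with invented counts (the log-linear correction shown is one common convention, not necessarily the one used in this study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).

    Adding 0.5 to each cell (log-linear correction) keeps the rates away
    from 0 and 1, where the z-transform would be infinite.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: 45/50 hits, 10/50 false alarms
d = d_prime(45, 5, 10, 40)  # ≈ 2.06
```

A d' near zero means the observer cannot discriminate targets from non-targets; larger values mean higher sensitivity, independently of any tendency to prefer one response.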
Affiliation(s)
- Qi Li
- Brain Informatics Laboratory
- Huamin Yang
- School of Computer Science and Technology, Changchun University of Science and Technology, 7089 Weixing Road, Changchun 130022, China
- Jinglong Wu
- Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
9. Macaluso E, Noppeney U, Talsma D, Vercillo T, Hartcher-O’Brien J, Adam R. The Curious Incident of Attention in Multisensory Integration: Bottom-up vs. Top-down. Multisens Res 2016. DOI: 10.1163/22134808-00002528.
Abstract
The role attention plays in our experience of a coherent, multisensory world is still controversial. On the one hand, a subset of inputs may be selected for detailed processing and multisensory integration in a top-down manner, i.e., guidance of multisensory integration by attention. On the other hand, stimuli may be integrated in a bottom-up fashion according to low-level properties such as spatial coincidence, thereby capturing attention. Moreover, attention itself is multifaceted and can be described via both top-down and bottom-up mechanisms. Thus, the interaction between attention and multisensory integration is complex and situation-dependent. The authors of this opinion paper are researchers who have contributed to this discussion from behavioural, computational, and neurophysiological perspectives. We posed a series of questions, the goal of which was to illustrate the interplay between bottom-up and top-down processes in various multisensory scenarios in order to clarify the standpoint taken by each author and with the hope of reaching a consensus. Although divergence of viewpoint emerges in the current responses, there is also considerable overlap: in general, it can be concluded that the amount of influence that attention exerts on multisensory integration depends on the current task as well as the prior knowledge and expectations of the observer. Moreover, stimulus properties such as reliability and salience also determine how open the processing is to influences of attention.
Affiliation(s)
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, UK
- Durk Talsma
- Department of Experimental Psychology, Ghent University, Henri Dunantlaan 2, B-9000 Ghent, Belgium
- Ruth Adam
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians-Universität LMU, Munich, Germany
10. Tang X, Wu J, Shen Y. The interactions of multisensory integration with endogenous and exogenous attention. Neurosci Biobehav Rev 2015; 61:208-224. PMID: 26546734; DOI: 10.1016/j.neubiorev.2015.11.002.
Abstract
Stimuli from multiple sensory organs can be integrated into a coherent representation through multiple phases of multisensory processing; this phenomenon is called multisensory integration. Multisensory integration can interact with attention. Here, we propose a framework in which attention modulates multisensory processing in both endogenous (goal-driven) and exogenous (stimulus-driven) ways. Moreover, multisensory integration exerts not only bottom-up but also top-down control over attention. Specifically, we propose the following: (1) endogenous attentional selectivity acts on multiple levels of multisensory processing to determine the extent to which simultaneous stimuli from different modalities can be integrated; (2) integrated multisensory events exert top-down control on attentional capture via multisensory search templates that are stored in the brain; (3) integrated multisensory events can capture attention efficiently, even in quite complex circumstances, due to their increased salience compared to unimodal events and can thus improve search accuracy; and (4) within a multisensory object, endogenous attention can spread from one modality to another in an exogenous manner.
Affiliation(s)
- Xiaoyu Tang
- College of Psychology, Liaoning Normal University, 850 Huanghe Road, Shahekou District, Dalian, Liaoning, 116029, China; Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Okayama, 700-8530, Japan
- Jinglong Wu
- Key Laboratory of Biomimetic Robots and System, Ministry of Education, State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, 5 Nandajie, Zhongguancun, Haidian, Beijing 100081, China; Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Okayama, 700-8530, Japan.
- Yong Shen
- Neurodegenerative Disease Research Center, School of Life Sciences, University of Science and Technology of China, CAS Key Laboratory of Brain Functions and Disease, Hefei, China; Center for Advanced Therapeutic Strategies for Brain Disorders, Roskamp Institute, Sarasota, FL 34243, USA
11. Talsma D. Predictive coding and multisensory integration: an attentional account of the multisensory mind. Front Integr Neurosci 2015; 9:19. PMID: 25859192; PMCID: PMC4374459; DOI: 10.3389/fnint.2015.00019.
Abstract
Multisensory integration involves a host of different cognitive processes, occurring at different stages of sensory processing. Here I argue that, despite recent insights suggesting that multisensory interactions can occur at very early latencies, the actual integration of individual sensory traces into an internally consistent mental representation depends on both top-down and bottom-up processes. Moreover, I argue that this integration is not limited to sensory inputs alone: internal cognitive processes also shape the resulting mental representation. Studies showing that memory recall is affected by the initial multisensory context in which the stimuli were presented will be discussed, as well as several studies showing that mental imagery can affect multisensory illusions. This empirical evidence will be discussed from a predictive coding perspective, in which a top-down attentional process is proposed to play a central role in coordinating the integration of all these inputs into a coherent mental representation.
Affiliation(s)
- Durk Talsma
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
12. Anticipating action effects recruits audiovisual movement representations in the ventral premotor cortex. Brain Cogn 2014; 92C:39-47. DOI: 10.1016/j.bandc.2014.09.010.
13. Van der Burg E, Olivers CNL, Theeuwes J. The attentional window modulates capture by audiovisual events. PLoS One 2012; 7:e39137. PMID: 22808027; PMCID: PMC3393717; DOI: 10.1371/journal.pone.0039137.
Abstract
Visual search is markedly improved when a target color change is synchronized with a spatially non-informative auditory signal. This "pip and pop" effect is an automatic process, as even a distractor captures attention when accompanied by a tone. Previous studies investigating visual attention have indicated that automatic capture is susceptible to the size of the attentional window. The present study investigated whether the pip and pop effect is modulated by the extent to which participants divide their attention across the visual field. We show that participants were better at detecting a synchronized audiovisual event when they divided their attention across the visual field relative to a condition in which they focused their attention. We argue that audiovisual capture is reduced under focused conditions relative to distributed settings.
Affiliation(s)
- Erik Van der Burg
- Department of Cognitive Psychology, Vrije Universiteit, Amsterdam, The Netherlands