1. Levy O, Korisky A, Zvilichovsky Y, Zion Golumbic E. The Neurophysiological Costs of Learning in a Noisy Classroom: An Ecological Virtual Reality Study. J Cogn Neurosci 2025;37:300-316. PMID: 39348110. DOI: 10.1162/jocn_a_02249.
Abstract
Many real-life situations can be extremely noisy, which makes it difficult to understand what people say. Here, we introduce a novel audiovisual virtual reality experimental platform to study the behavioral and neurophysiological consequences of background noise on processing continuous speech in highly realistic environments. We focus on a context where the ability to understand speech is particularly important: the classroom. Participants (n = 32) experienced sitting in a virtual reality classroom and were told to pay attention to a virtual teacher giving a lecture. Trials were either quiet or contained background construction noise, emitted from outside the classroom window. Two realistic types of noise were used: continuous drilling and intermittent air hammers. Alongside behavioral outcomes, we measured several neurophysiological metrics, including neural activity (EEG), eye-gaze and skin conductance (galvanic skin response). Our results confirm the detrimental effect of background noise. Construction noise, and particularly intermittent noise, was associated with reduced behavioral performance, reduced neural tracking of the teacher's speech and an increase in skin conductance, although it did not have a significant effect on alpha-band oscillations or eye-gaze patterns. These results demonstrate the neurophysiological costs of learning in noisy environments and emphasize the role of temporal dynamics in speech-in-noise perception. The finding that intermittent noise was more disruptive than continuous noise supports a "habituation" rather than "glimpsing" hypothesis of speech-in-noise processing. These results also underscore the importance of increasing the ecological relevance of neuroscientific research and considering acoustic, temporal, and semantic features of realistic stimuli as well as the cognitive demands of real-life environments.
2. Saboundji RR, Faragó KB, Firyaridi V. Prediction of Attention Groups and Big Five Personality Traits from Gaze Features Collected from an Outlier Search Game. J Imaging 2024;10:255. PMID: 39452418. PMCID: PMC11508584. DOI: 10.3390/jimaging10100255.
Abstract
This study explores the intersection of personality, attention and task performance in traditional 2D and immersive virtual reality (VR) environments. A visual search task was developed that required participants to find anomalous images embedded in normal background images in 3D space. Experiments were conducted with 30 subjects who performed the task in 2D and VR environments while their eye movements were tracked. Following an exploratory correlation analysis, we applied machine learning techniques to investigate the predictive power of gaze features on human data derived from different data collection methods. Our proposed methodology consists of a pipeline of steps for extracting fixation and saccade features from raw gaze data and training machine learning models to classify the Big Five personality traits and attention-related processing speed/accuracy levels computed from the Group Bourdon test. The models achieved above-chance predictive performance in both 2D and VR settings despite visually complex 3D stimuli. We also explored further relationships between task performance, personality traits and attention characteristics.
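A pipeline of this kind, segmenting raw gaze samples into fixations and saccades and summarising them as features for a classifier, can be sketched as follows. This is an illustrative velocity-threshold (I-VT) implementation with assumed units (degrees, seconds) and an assumed 100 deg/s threshold, not the authors' actual code:

```python
import numpy as np

def ivt_segment(x, y, t, vel_thresh=100.0):
    """Velocity-threshold identification (I-VT): samples whose point-to-point
    velocity (deg/s) is below vel_thresh are labelled fixation, others saccade."""
    dt = np.diff(t)
    vel = np.hypot(np.diff(x), np.diff(y)) / dt  # deg/s between samples
    return np.concatenate([[True], vel < vel_thresh])

def gaze_features(x, y, t, vel_thresh=100.0):
    """Summarise one gaze trace into simple fixation/saccade features."""
    is_fix = ivt_segment(x, y, t, vel_thresh)
    # group consecutive samples with the same label into events
    edges = np.flatnonzero(np.diff(is_fix.astype(int))) + 1
    segments = np.split(np.arange(len(t)), edges)
    fix_durs, sac_amps = [], []
    for seg in segments:
        if is_fix[seg[0]]:
            fix_durs.append(t[seg[-1]] - t[seg[0]])       # fixation duration (s)
        else:
            sac_amps.append(np.hypot(x[seg[-1]] - x[seg[0]],
                                     y[seg[-1]] - y[seg[0]]))  # saccade amplitude (deg)
    return {
        "n_fixations": len(fix_durs),
        "mean_fix_dur": float(np.mean(fix_durs)) if fix_durs else 0.0,
        "mean_sacc_amp": float(np.mean(sac_amps)) if sac_amps else 0.0,
    }
```

Features such as these would then be fed to a standard classifier (e.g., for the Big Five traits or attention groups); the threshold and the exact feature set are design choices that vary across studies.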
Affiliation(s)
- Rachid Rhyad Saboundji
- Department of Artificial Intelligence, Faculty of Informatics, ELTE Eötvös Loránd University, Pázmány Péter Sétány 1/A, H-1117 Budapest, Hungary
- Kinga Bettina Faragó
- Department of Artificial Intelligence, Faculty of Informatics, ELTE Eötvös Loránd University, Pázmány Péter Sétány 1/A, H-1117 Budapest, Hungary
3. Kayser C, Debats N, Heuer H. Both stimulus-specific and configurational features of multiple visual stimuli shape the spatial ventriloquism effect. Eur J Neurosci 2024;59:1770-1788. PMID: 38230578. DOI: 10.1111/ejn.16251.
Abstract
Studies on multisensory perception often focus on simplistic conditions in which one single stimulus is presented per modality. Yet, in everyday life, we usually encounter multiple signals per modality. To understand how multiple signals within and across the senses are combined, we extended the classical audio-visual spatial ventriloquism paradigm to combine two visual stimuli with one sound. The individual visual stimuli presented in the same trial differed in their relative timing and spatial offsets to the sound, allowing us to contrast their individual and combined influence on sound localization judgements. We find that the ventriloquism bias is not dominated by a single visual stimulus but rather is shaped by the collective multisensory evidence. In particular, the contribution of an individual visual stimulus to the ventriloquism bias depends not only on its own relative spatio-temporal alignment to the sound but also the spatio-temporal alignment of the other visual stimulus. We propose that this pattern of multi-stimulus multisensory integration reflects the evolution of evidence for sensory causal relations during individual trials, calling for the need to extend established models of multisensory causal inference to more naturalistic conditions. Our data also suggest that this pattern of multisensory interactions extends to the ventriloquism aftereffect, a bias in sound localization observed in unisensory judgements following a multisensory stimulus.
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Nienke Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
4. Omigie D, Mencke I. A model of time-varying music engagement. Philos Trans R Soc Lond B Biol Sci 2024;379:20220421. PMID: 38104598. PMCID: PMC10725767. DOI: 10.1098/rstb.2022.0421.
Abstract
The current paper offers a model of time-varying music engagement, defined as changes in curiosity, attention and positive valence, as music unfolds over time. First, we present research (including new data) showing that listeners tend to allocate attention to music in a manner that is guided by both features of the music and listeners' individual differences. Next, we review relevant predictive processing literature before using this body of work to inform our model. In brief, we propose that music engagement, over the course of an extended listening episode, may constitute several cycles of curiosity, attention and positive valence that are interspersed with moments of mind-wandering. Further, we suggest that refocusing on music after an episode of mind-wandering can be due to triggers in the music or, conversely, mental action that occurs when the listener realizes they are mind-wandering. Finally, we argue that factors that modulate both overall levels of music engagement and how it changes over time include music complexity, listener background and the listening context. Our paper highlights how music can be used to provide insights into the temporal dynamics of attention and into how curiosity might emerge in everyday contexts. This article is part of the theme issue 'Art, aesthetics and predictive processing: theoretical and empirical perspectives'.
Affiliation(s)
- Diana Omigie
- Department of Psychology, Goldsmiths University of London, London, SE14 6NW, UK
- Iris Mencke
- Music Perception and Processing Lab, Department of Medical Physics and Acoustics, University of Oldenburg, 26129 Oldenburg, Germany
- Hanse-Wissenschaftskolleg—Institute for Advanced Studies, 27753 Delmenhorst, Germany
- Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt/Main 60322, Germany
5. Brown A, Pinto D, Burgart K, Zvilichovsky Y, Zion-Golumbic E. Neurophysiological Evidence for Semantic Processing of Irrelevant Speech and Own-Name Detection in a Virtual Café. J Neurosci 2023;43:5045-5056. PMID: 37336758. PMCID: PMC10324990. DOI: 10.1523/jneurosci.1731-22.2023.
Abstract
The well-known "cocktail party effect" refers to incidental detection of salient words, such as one's own name, in supposedly unattended speech. However, empirical investigation of the prevalence of this phenomenon and the underlying mechanisms has been limited to extremely artificial contexts and has yielded conflicting results. We introduce a novel empirical approach for revisiting this effect under highly ecological conditions, by immersing participants in a multisensory Virtual Café and using realistic stimuli and tasks. Participants (32 female, 18 male) listened to conversational speech from a character at their table, while a barista in the back of the café called out food orders. Unbeknownst to them, the barista sometimes called orders containing either their own name or words that created semantic violations. We assessed the neurophysiological response-profile to these two probes in the task-irrelevant barista stream by measuring participants' brain activity (EEG), galvanic skin response and overt gaze-shifts.
SIGNIFICANCE STATEMENT: We found distinct neural and physiological responses to participants' own name and to semantic violations, indicating incidental semantic processing of this speech despite its task-irrelevance. Interestingly, these responses were covert in nature, and gaze patterns were not associated with word-detection responses. This study emphasizes the nonexclusive nature of attention in multimodal ecological environments and demonstrates the brain's capacity to extract linguistic information from sources outside the primary focus of attention.
Affiliation(s)
- Adi Brown
- Gonda Center for Multidisciplinary Brain Research, Bar-Ilan University, Ramat Gan, Israel, 5290002
- Danna Pinto
- Gonda Center for Multidisciplinary Brain Research, Bar-Ilan University, Ramat Gan, Israel, 5290002
- Ksenia Burgart
- Gonda Center for Multidisciplinary Brain Research, Bar-Ilan University, Ramat Gan, Israel, 5290002
- Yair Zvilichovsky
- Gonda Center for Multidisciplinary Brain Research, Bar-Ilan University, Ramat Gan, Israel, 5290002
- Elana Zion-Golumbic
- Gonda Center for Multidisciplinary Brain Research, Bar-Ilan University, Ramat Gan, Israel, 5290002
6. Burridge SD, Schlupp I, Makowicz AM. Male attention allocation depends on social context. Behav Processes 2023;209:104878. PMID: 37116668. DOI: 10.1016/j.beproc.2023.104878.
Abstract
Attention allocation, a typically limited capacity, is a mechanism used to filter large amounts of information and determine which stimuli are most relevant at a particular moment. In the dynamic social environments found in almost all species, including humans, multiple individuals may play a pivotal role in any given interaction, and a male's attention may be divided between a rival, a current mate, and/or future potential mates. Although clearly important, the role of the social environment in attention in animals is not well understood. Here, we investigated impacts of the social environment on attention allocation using male sailfin mollies, Poecilia latipinna, which are part of a sexual-unisexual mating system with the Amazon molly, Poecilia formosa. We asked: 1) Does the species of female influence the amount of attention a male allocates to her? And 2) Is a male's attention towards his mate influenced by different social partners? We show that males perceive a larger male as a more relevant stimulus to pay attention to than a smaller male, and a conspecific female (either a partner or audience) as a more relevant stimulus than a heterospecific female. Our results show that differential allocation of attention depends on multiple components of the social environment in which an individual interacts. Understanding which qualities of rival males or potential mates carry enough meaning to shift a male's attention away from a mating opportunity is essential to understanding the influence of the social environment on sexual selection.
Collapse
Affiliation(s)
- Shelby D Burridge
- Department of Biology, University of Oklahoma, 730 Van Vleet Oval, Norman, OK 73019, USA
- Ingo Schlupp
- Department of Biology, University of Oklahoma, 730 Van Vleet Oval, Norman, OK 73019, USA
- Amber M Makowicz
- Department of Biology, University of Oklahoma, 730 Van Vleet Oval, Norman, OK 73019, USA; Department of Biological Sciences, Florida State University, 319 Stadium Drive, Tallahassee, FL 32306, USA
7. Kaufman M, Zion Golumbic E. Listening to two speakers: Capacity and tradeoffs in neural speech tracking during Selective and Distributed Attention. Neuroimage 2023;270:119984. PMID: 36854352. DOI: 10.1016/j.neuroimage.2023.119984.
Abstract
Speech comprehension is severely compromised when several people talk at once, due to limited perceptual and cognitive resources. In such circumstances, top-down attention mechanisms can actively prioritize processing of task-relevant speech. However, behavioral and neural evidence suggest that this selection is not exclusive, and the system may have sufficient capacity to process additional speech input as well. Here we used a data-driven approach to contrast two opposing hypotheses regarding the system's capacity to co-represent competing speech: Can the brain represent two speakers equally or is the system fundamentally limited, resulting in tradeoffs between them? Neural activity was measured using magnetoencephalography (MEG) as human participants heard concurrent speech narratives and engaged in two tasks: Selective Attention, where only one speaker was task-relevant and Distributed Attention, where both speakers were equally relevant. Analysis of neural speech-tracking revealed that both tasks engaged a similar network of brain regions involved in auditory processing, attentional control and speech processing. Interestingly, during both Selective and Distributed Attention the neural representation of competing speech showed a bias towards one speaker. This is in line with proposed 'bottlenecks' for co-representation of concurrent speech and suggests that good performance on distributed attention tasks may be achieved by toggling attention between speakers over time.
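Neural speech tracking of the kind analysed in studies like this is commonly quantified with a temporal response function (TRF): a ridge-regularised regression from the speech envelope to the neural signal at multiple time lags. Below is a minimal single-channel sketch; it is a generic illustration with assumed parameter names, not this study's MEG pipeline:

```python
import numpy as np

def fit_trf(envelope, neural, lags, lam=1.0):
    """Fit a temporal response function (TRF) mapping a speech envelope
    to a neural channel, via ridge-regularised lagged regression."""
    n = len(envelope)
    # design matrix: one column per time lag of the envelope
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = envelope[:n - lag]   # causal lag
        else:
            X[:lag, j] = envelope[-lag:]      # anti-causal lag
    # closed-form ridge solution: (X'X + lam*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ neural)
    return w, X

def tracking_score(envelope, neural, lags, lam=1.0):
    """Neural-tracking index: correlation of predicted vs. recorded response."""
    w, X = fit_trf(envelope, neural, lags, lam)
    return np.corrcoef(X @ w, neural)[0, 1]
```

The correlation between predicted and recorded activity then serves as the tracking index that can be compared across attention conditions or between the two speakers.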
Affiliation(s)
- Maya Kaufman
- The Gonda Center for Multidisciplinary Brain Research, Bar Ilan University, Ramat Gan, Israel
- Elana Zion Golumbic
- The Gonda Center for Multidisciplinary Brain Research, Bar Ilan University, Ramat Gan, Israel
8. Makov S, Pinto D, Har-Shai Yahav P, Miller LM, Zion Golumbic E. "Unattended, distracting or irrelevant": Theoretical implications of terminological choices in auditory selective attention research. Cognition 2023;231:105313. PMID: 36344304. DOI: 10.1016/j.cognition.2022.105313.
Abstract
For seventy years, auditory selective attention research has focused on the cognitive mechanisms that prioritize the processing of a 'main' task-relevant stimulus in the presence of 'other' stimuli. However, a closer look at this body of literature reveals deep empirical inconsistencies and theoretical confusion regarding the extent to which this 'other' stimulus is processed. We argue that many key debates regarding attention arise, at least in part, from inappropriate terminological choices for experimental variables that may not accurately map onto the cognitive constructs they are meant to describe. Here we critically review the most common and disruptive terminological ambiguities, differentiate between methodology-based and theory-derived terms, and unpack the theoretical assumptions underlying different terminological choices. In particular, we offer an in-depth analysis of the terms 'unattended' and 'distractor' and demonstrate how their use can lead to conflicting theoretical inferences. We also offer a framework for thinking about terminology in a more precise way, in the hope of fostering more productive debates and promoting more nuanced and accurate cognitive models of selective attention.
Affiliation(s)
- Shiri Makov
- The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
- Danna Pinto
- The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
- Paz Har-Shai Yahav
- The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
- Lee M Miller
- The Center for Mind and Brain, University of California, Davis, CA, United States of America; Department of Neurobiology, Physiology, & Behavior, University of California, Davis, CA, United States of America; Department of Otolaryngology / Head and Neck Surgery, University of California, Davis, CA, United States of America
- Elana Zion Golumbic
- The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
9. The multisensory cocktail party problem in children: Synchrony-based segregation of multiple talking faces improves in early childhood. Cognition 2022;228:105226. PMID: 35882100. DOI: 10.1016/j.cognition.2022.105226.
Abstract
Extraction of meaningful information from multiple talkers relies on perceptual segregation. The temporal synchrony statistics inherent in everyday audiovisual (AV) speech offer a powerful basis for perceptual segregation. We investigated the developmental emergence of synchrony-based perceptual segregation of multiple talkers in 3-7-year-old children. Children either saw four identical or four different faces articulating temporally jittered versions of the same utterance and heard the audible version of the same utterance either synchronized with one of the talkers or desynchronized with all of them. Eye tracking revealed that selective attention to the temporally synchronized talking face increased while attention to the desynchronized faces decreased with age and that attention to the talkers' mouth primarily drove responsiveness. These findings demonstrate that the temporal synchrony statistics inherent in fluent AV speech assume an increasingly greater role in perceptual segregation of the multisensory clutter created by multiple talking faces in early childhood.
10. Grenzebach J, Romanus E. Quantifying the Effect of Noise on Cognitive Processes: A Review of Psychophysiological Correlates of Workload. Noise Health 2022;24:199-214. PMID: 36537445. PMCID: PMC10088430. DOI: 10.4103/nah.nah_34_22.
Abstract
Noise is present in most work environments, including emissions from machines and devices, irrelevant speech from colleagues, and traffic noise. It is generally accepted that noise below the permissible exposure limits does not pose a considerable risk of auditory effects such as hearing impairment. Yet noise can have a direct adverse effect on cognitive performance (non-auditory effects such as workload or stress). Under certain circumstances, the observable performance for a task carried out in silence may not differ from that in noisy surroundings. One possible explanation for this phenomenon needs further investigation: individuals may invest additional cognitive resources to overcome the distraction from irrelevant auditory stimulation. Recent developments in measurements of psychophysiological correlates and in analysis methods of load-related parameters can shed light on this complex interaction. These objective measurements complement subjective self-reports of perceived effort by quantifying unnoticed noise-related cognitive workload. In this review, literature databases were searched for peer-reviewed journal articles that deal with an at least partially irrelevant "auditory stimulation" during an ongoing "cognitive task" that is accompanied by "psychophysiological correlates" to quantify the "momentary workload." The spectrum of assessed types of "auditory stimulation" extended from speech stimuli (varying intelligibility) and oddball sounds (repeating short tone sequences) to auditory stressors (white noise, task-irrelevant real-life sounds). The type of "auditory stimulation" was related (speech stimuli) or unrelated (oddball, auditory stressor) to the type of primary "cognitive task." The types of "cognitive task" include speech-related tasks, fundamental psychological assessment tasks, and real-world/simulated tasks. The "psychophysiological correlates" include pupillometry and eye-tracking, recordings of brain activity (hemodynamic, potentials), cardiovascular markers, skin conductance, endocrinological markers, and behavioral markers. The prevention of negative health effects of unexpected stressful soundscapes during mental work starts with the continuous estimation of the cognitive workload triggered by auditory noise. This review gives a comprehensive overview of methods that have been tested for their sensitivity as markers of workload in various auditory settings during cognitive processing.
11. Hearing Aid Noise Reduction Lowers the Sustained Listening Effort During Continuous Speech in Noise-A Combined Pupillometry and EEG Study. Ear Hear 2021;42:1590-1601. PMID: 33950865. DOI: 10.1097/aud.0000000000001050.
Abstract
OBJECTIVES: The investigation of auditory cognitive processes has recently moved from strictly controlled, trial-based paradigms toward the presentation of continuous speech. This also allows the investigation of listening effort on larger time scales (i.e., sustained listening effort). Here, we investigated the modulation of sustained listening effort by a noise reduction algorithm, as applied in hearing aids, in a listening scenario with noisy continuous speech. The investigated directional noise reduction algorithm mainly suppresses noise from the background. DESIGN: We recorded pupil size and EEG in 22 participants with hearing loss who listened to audio news clips in the presence of background multi-talker babble noise. We estimated how noise reduction (off, on) and signal-to-noise ratio (SNR; +3 dB, +8 dB) affect pupil size, power in the parietal EEG alpha band (i.e., parietal alpha power), and behavioral performance. RESULTS: Noise reduction reduced pupil size, while there was no significant main effect of SNR. Importantly, we found interactions of SNR and noise reduction suggesting that noise reduction reduces pupil size predominantly at the lower SNR. Parietal alpha power showed a similar yet nonsignificant pattern, with increased power under easier conditions. In line with participants' reports that one of the two presented talkers was more intelligible, we found reduced pupil size, increased parietal alpha power, and better performance when people listened to the more intelligible talker. CONCLUSIONS: We show that the modulation of sustained listening effort (e.g., by hearing aid noise reduction), as indicated by pupil size and parietal alpha power, can be studied under more ecologically valid conditions. Based mainly on pupil size, we demonstrate that hearing aid noise reduction lowers sustained listening effort. Our study approximates real-world listening scenarios and evaluates the benefit of the signal processing found in a modern hearing aid.
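The parietal alpha power used above as a listening-effort marker is conventionally computed as spectral power in the 8-12 Hz band of the EEG. A minimal single-channel sketch using a simple periodogram follows; the band limits, sampling rate, and absence of epoching or channel averaging are simplifying assumptions, not this study's exact pipeline:

```python
import numpy as np

def band_power(signal, fs, band=(8.0, 12.0)):
    """Mean periodogram power of one EEG channel in a frequency band
    (default 8-12 Hz, the conventional alpha band)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)  # power per bin
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())
```

In practice such power estimates are averaged over parietal channels and short epochs, then compared across listening conditions (e.g., noise reduction on vs. off).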
12. Har-shai Yahav P, Zion Golumbic E. Linguistic processing of task-irrelevant speech at a cocktail party. eLife 2021;10:e65096. PMID: 33942722. PMCID: PMC8163500. DOI: 10.7554/elife.65096.
Abstract
Paying attention to one speaker in a noisy place can be extremely difficult, because to-be-attended and task-irrelevant speech compete for processing resources. We tested whether this competition is restricted to acoustic-phonetic interference or whether it extends to competition for linguistic processing as well. Neural activity was recorded using magnetoencephalography (MEG) as human participants were instructed to attend to natural speech presented to one ear, while task-irrelevant stimuli were presented to the other. Task-irrelevant stimuli consisted either of random sequences of syllables or of syllables structured to form coherent sentences, using hierarchical frequency-tagging. We find that the phrasal structure of structured task-irrelevant stimuli was represented in the neural response in left inferior frontal and posterior parietal regions, indicating that selective attention does not fully eliminate linguistic processing of task-irrelevant speech. Additionally, neural tracking of to-be-attended speech in left inferior frontal regions was enhanced when competing with structured task-irrelevant stimuli, suggesting inherent competition between the two streams for linguistic processing.
Affiliation(s)
- Paz Har-shai Yahav
- The Gonda Center for Multidisciplinary Brain Research, Bar Ilan University, Ramat Gan, Israel
- Elana Zion Golumbic
- The Gonda Center for Multidisciplinary Brain Research, Bar Ilan University, Ramat Gan, Israel
13. Lewkowicz DJ, Schmuckler M, Agrawal V. The multisensory cocktail party problem in adults: Perceptual segregation of talking faces on the basis of audiovisual temporal synchrony. Cognition 2021;214:104743. PMID: 33940250. DOI: 10.1016/j.cognition.2021.104743.
Abstract
Social interactions often involve a cluttered multisensory scene consisting of multiple talking faces. We investigated whether audiovisual temporal synchrony can facilitate perceptual segregation of talking faces. Participants either saw four identical or four different talking faces producing temporally jittered versions of the same visible speech utterance and heard the audible version of the same speech utterance. The audible utterance was either synchronized with the visible utterance produced by one of the talking faces or not synchronized with any of them. Eye tracking indicated that participants exhibited a marked preference for the synchronized talking face, that they gazed more at the mouth than the eyes overall, that they gazed more at the eyes of an audiovisually synchronized than a desynchronized talking face, and that they gazed more at the mouth when all talking faces were audiovisually desynchronized. These findings demonstrate that audiovisual temporal synchrony plays a major role in perceptual segregation of multisensory clutter and that adults rely on differential scanning strategies of a talker's eyes and mouth to discover sources of multisensory coherence.
Affiliation(s)
- David J Lewkowicz
- Haskins Laboratories, New Haven, CT, USA; Yale Child Study Center, New Haven, CT, USA
- Mark Schmuckler
- Department of Psychology, University of Toronto at Scarborough, Toronto, Canada
14. Keller AS, Davidesco I, Tanner KD. Attention Matters: How Orchestrating Attention May Relate to Classroom Learning. CBE Life Sci Educ 2020;19:fe5. PMID: 32870089. PMCID: PMC8711818. DOI: 10.1187/cbe.20-05-0106.
Abstract
Attention is thought to be the gateway between information and learning, yet there is much we do not understand about how students pay attention in the classroom. Leveraging ideas from cognitive neuroscience and psychology, we explore a framework for understanding attention in the classroom, organized along two key dimensions: internal/external attention and on-topic/off-topic attention. This framework helps us to build new theories for why active-learning strategies are effective teaching tools and how synchronized brain activity across students in a classroom may support learning. These ideas suggest new ways of thinking about how attention functions in the classroom and how different approaches to the same active-learning strategy may vary in how effectively they direct students' attention. We hypothesize that some teaching approaches are more effective than others because they leverage natural fluctuations in students' attention. We conclude by discussing implications for teaching and opportunities for future research.
Affiliation(s)
- Arielle S. Keller
- Neurosciences Graduate Program, Stanford University, Stanford, CA 94305
- Ido Davidesco
- Department of Educational Psychology, Neag School of Education, University of Connecticut, Storrs, CT 06269
- Kimberly D. Tanner
- Department of Biology, San Francisco State University, San Francisco, CA 94132