1. Wu Z, Bao X, Ding Y, Gao Y, Zhang C, Qu T, Li L. Differences in auditory associative memory between younger adults and older adults. Aging Neuropsychol Cogn 2021; 29:882-902. [PMID: 34078214] [DOI: 10.1080/13825585.2021.1932714]
Abstract
Aging impairs visual associative memory. To date, however, little is known about whether aging also impairs auditory associative memory. Using head-related transfer functions to induce perceived spatial locations of auditory phonemes, this study used an audiospatial paired-associates-learning (PAL) paradigm to assess auditory associative memory for phoneme-location pairs in both younger and older adults. Both age groups completed the PAL task at various levels of difficulty, defined by the number of items to be remembered. The results showed that, compared with younger participants, older participants passed fewer stages and had a lower capacity of auditory associative memory. For maintaining a single audiospatial pair, no significant behavioral differences between the two age groups were found. However, when multiple sound-location pairs had to be remembered, older adults made more errors and demonstrated a lower working memory capacity than younger adults. Our study indicates that aging impairs audiospatial associative learning and memory.
Affiliation(s)
- Zhemeng Wu, School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavioral and Mental Health, Peking University, Beijing, China
- Xiaohan Bao, School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavioral and Mental Health, Peking University, Beijing, China
- Yu Ding, School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavioral and Mental Health, Peking University, Beijing, China
- Yayue Gao, School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavioral and Mental Health, Peking University, Beijing, China
- Changxin Zhang, School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavioral and Mental Health, Peking University, Beijing, China
- Tianshu Qu, Department of Machine Intelligence, Peking University, Beijing, China
- Liang Li, School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavioral and Mental Health, Peking University, Beijing, China; Beijing Institute for Brain Disorders, Beijing, China
2. Jaha N, Shen S, Kerlin JR, Shahin AJ. Visual Enhancement of Relevant Speech in a 'Cocktail Party'. Multisens Res 2020; 33:277-294. [PMID: 32508080] [DOI: 10.1163/22134808-20191423]
Abstract
Lip-reading improves intelligibility in noisy acoustical environments. We hypothesized that watching mouth movements benefits speech comprehension in a 'cocktail party' by strengthening the encoding of the neural representations of the visually paired speech stream. In an audiovisual (AV) task, EEG was recorded as participants watched and listened to videos of a speaker uttering a sentence while also hearing a concurrent sentence by a speaker of the opposite gender. A key manipulation was that each audio sentence had a 200-ms segment replaced by white noise. To assess comprehension, subjects were tasked with transcribing the AV-attended sentence on randomly selected trials. In the auditory-only trials, subjects listened to the same sentences and completed the same task while watching a static picture of a speaker of either gender. Subjects directed their listening to the voice of the gender of the speaker in the video. We found that the N1 auditory-evoked potential (AEP) time-locked to white noise onsets was significantly more inhibited for the AV-attended sentences than for those of the auditorily-attended (A-attended) and AV-unattended sentences. N1 inhibition to noise onsets has been shown to index restoration of phonemic representations of degraded speech. These results underscore that attention and congruency in the AV setting help streamline the complex auditory scene, partly by reinforcing the neural representations of the visually attended stream, heightening the perception of continuity and comprehension.
Affiliation(s)
- Niti Jaha, Center for Mind and Brain, University of California, Davis, CA 95618, USA
- Stanley Shen, Center for Mind and Brain, University of California, Davis, CA 95618, USA
- Jess R Kerlin, Center for Mind and Brain, University of California, Davis, CA 95618, USA
- Antoine J Shahin, Center for Mind and Brain, University of California, Davis, CA 95618, USA; Department of Cognitive and Information Sciences, University of California, Merced, CA 95343, USA
3. Addleman DA, Jiang YV. Experience-Driven Auditory Attention. Trends Cogn Sci 2019; 23:927-937. [PMID: 31521482] [DOI: 10.1016/j.tics.2019.08.002]
Abstract
In addition to conscious goals and stimulus salience, an observer's prior experience also influences selective attention. Early studies demonstrated experience-driven effects on attention mainly in the visual modality, but increasing evidence shows that experience drives auditory selection as well. We review evidence for a multiple-levels framework of auditory attention, in which experience-driven attention relies on mechanisms that acquire control settings and mechanisms that guide attention towards selected stimuli. Mechanisms of acquisition include cue-target associative learning, reward learning, and sensitivity to prior selection history. Once acquired, implementation of these biases can occur either consciously or unconsciously. Future research should more fully characterize the sources of experience-driven auditory attention and investigate the neural mechanisms used to acquire and implement experience-driven auditory attention.
Affiliation(s)
- Douglas A Addleman, Department of Psychology, University of Minnesota, Minneapolis, MN 55455, USA
- Yuhong V Jiang, Department of Psychology, University of Minnesota, Minneapolis, MN 55455, USA
4. Mattsson TS, Lind O, Follestad T, Grøndahl K, Wilson W, Nicholas J, Nordgård S, Andersson S. Electrophysiological characteristics in children with listening difficulties, with or without auditory processing disorder. Int J Audiol 2019; 58:704-716. [PMID: 31154863] [DOI: 10.1080/14992027.2019.1621396]
Abstract
Objective: To determine whether the auditory middle latency response (AMLR), auditory late latency response (ALLR), and auditory P300 are sensitive to auditory processing disorder (APD) and listening difficulties in children, and to elucidate the level of neurobiological dysfunction in the central auditory nervous system. Design: Three-group, repeated-measures design. Study sample: Forty-six children aged 8-14 years were divided into three groups: children with reported listening difficulties fulfilling APD diagnostic criteria, children with reported listening difficulties not fulfilling APD diagnostic criteria, and normally hearing children. Results: AMLR Na latency and P300 latency and amplitude were sensitive to listening difficulties. No other auditory evoked potential (AEP) measures were sensitive to listening difficulties, and no AEP measures were sensitive to APD alone. Moderate correlations were observed between P300 latency and amplitude and the behavioural auditory processing measures of competing words, frequency patterns, duration patterns, and dichotic digits. Conclusions: Impaired thalamo-cortical (bottom-up) and neurocognitive (top-down) function may contribute to difficulties discriminating speech and non-speech sounds. Cognitive processes involved in conscious recognition, attention, and discrimination of the acoustic characteristics of the stimuli could contribute to listening difficulties in general, and to APD in particular.
Affiliation(s)
- Tone Stokkereit Mattsson, Department of Otorhinolaryngology, Head and Neck Surgery, Ålesund Hospital, Aalesund, Norway; Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway
- Ola Lind, Department of Otorhinolaryngology, Head and Neck Surgery, Haukeland University Hospital, Bergen, Norway
- Turid Follestad, Department of Public Health and General Practice, Norwegian University of Science and Technology, Trondheim, Norway
- Kjell Grøndahl, Department of Clinical Engineering, Haukeland University Hospital, Bergen, Norway
- Wayne Wilson, School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, Australia
- Jude Nicholas, Statped National Service Center for Special Needs Education, Bergen, Norway; Department of Occupational Medicine, Haukeland University Hospital, Bergen, Norway
- Ståle Nordgård, Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway; Department of Otorhinolaryngology, Head and Neck Surgery, St. Olavs University Hospital, Trondheim, Norway
- Stein Andersson, Department of Psychology, University of Oslo, Oslo, Norway
5. Holt LL, Tierney AT, Guerra G, Laffere A, Dick F. Dimension-selective attention as a possible driver of dynamic, context-dependent re-weighting in speech processing. Hear Res 2018; 366:50-64. [PMID: 30131109] [PMCID: PMC6107307] [DOI: 10.1016/j.heares.2018.06.014]
Abstract
The contribution of acoustic dimensions to an auditory percept is dynamically adjusted and reweighted based on prior experience about how informative these dimensions are across the long-term and short-term environment. This is especially evident in speech perception, where listeners differentially weight information across multiple acoustic dimensions, and use this information selectively to update expectations about future sounds. The dynamic and selective adjustment of how acoustic input dimensions contribute to perception has made it tempting to conceive of this as a form of non-spatial auditory selective attention. Here, we review several human speech perception phenomena that might be consistent with auditory selective attention although, as of yet, the literature does not definitively support a mechanistic tie. We relate these human perceptual phenomena to illustrative nonhuman animal neurobiological findings that offer informative guideposts in how to test mechanistic connections. We next present a novel empirical approach that can serve as a methodological bridge from human research to animal neurobiological studies. Finally, we describe four preliminary results that demonstrate its utility in advancing understanding of human non-spatial dimension-based auditory selective attention.
Affiliation(s)
- Lori L Holt, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Adam T Tierney, Department of Psychological Sciences, Birkbeck College, University of London, London WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London WC1E 7HX, UK
- Giada Guerra, Department of Psychological Sciences, Birkbeck College, University of London, London WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London WC1E 7HX, UK
- Aeron Laffere, Department of Psychological Sciences, Birkbeck College, University of London, London WC1E 7HX, UK
- Frederic Dick, Department of Psychological Sciences, Birkbeck College, University of London, London WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London WC1E 7HX, UK; Department of Experimental Psychology, University College London, London WC1H 0AP, UK
6. Schwartz ZP, David SV. Focal Suppression of Distractor Sounds by Selective Attention in Auditory Cortex. Cereb Cortex 2018; 28:323-339. [PMID: 29136104] [PMCID: PMC6057511] [DOI: 10.1093/cercor/bhx288]
Abstract
Auditory selective attention is required for parsing crowded acoustic environments, but cortical systems mediating the influence of behavioral state on auditory perception are not well characterized. Previous neurophysiological studies suggest that attention produces a general enhancement of neural responses to important target sounds versus irrelevant distractors. However, behavioral studies suggest that in the presence of masking noise, attention provides a focal suppression of distractors that compete with targets. Here, we compared effects of attention on cortical responses to masking versus non-masking distractors, controlling for effects of listening effort and general task engagement. We recorded single-unit activity from primary auditory cortex (A1) of ferrets during behavior and found that selective attention decreased responses to distractors masking targets in the same spectral band, compared with spectrally distinct distractors. This suppression enhanced neural target detection thresholds, suggesting that limited attention resources serve to focally suppress responses to distractors that interfere with target detection. Changing effort by manipulating target salience consistently modulated spontaneous but not evoked activity. Task engagement and changing effort tended to affect the same neurons, while attention affected an independent population, suggesting that distinct feedback circuits mediate effects of attention and effort in A1.
Affiliation(s)
- Zachary P Schwartz, Neuroscience Graduate Program, Oregon Health and Science University, Portland, OR, USA
- Stephen V David, Oregon Hearing Research Center, Oregon Health and Science University, Portland, OR, USA
- Address correspondence to Stephen V. David, Oregon Hearing Research Center, Oregon Health and Science University, 3181 SW Sam Jackson Park Road, MC L335A, Portland, OR 97239, USA.
7. Hutchison JL, Hubbard TL, Hubbard NA, Rypma B. Ear Advantage for Musical Location and Relative Pitch: Effects of Musical Training and Attention. Perception 2017; 46:745-762. [PMID: 28523983] [DOI: 10.1177/0301006616684238]
Abstract
Trained musicians have been found to exhibit a right-ear advantage for high tones and a left-ear advantage for low tones. We investigated whether this right/high, left/low pattern of musical processing advantage exists in listeners who had varying levels of musical experience, and whether such a pattern might be modulated by attentional strategy. A dichotic listening paradigm was used in which different melodic sequences were presented to each ear, and listeners attended to (a) the left ear or the right ear or (b) the higher pitched tones or the lower pitched tones. Listeners judged whether tone-to-tone transitions within each melodic sequence moved upward or downward in pitch. Only musically experienced listeners could adequately judge the direction of successive pitch transitions when attending to a specific ear; however, all listeners could judge the direction of successive pitch transitions within a high-tone stream or a low-tone stream. Overall, listeners exhibited greater accuracy when attending to relatively higher pitches, but there was no evidence to support a right/high, left/low bias. Results were consistent with effects of attentional strategy rather than an ear advantage for high or low tones. Implications for a potential performer/audience paradox in listening space are considered.
Affiliation(s)
- Joanna L Hutchison, Department of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
- Nicholas A Hubbard, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Bart Rypma, Department of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
8. Bolders AC, Band GPH, Stallen PJM. Inconsistent Effect of Arousal on Early Auditory Perception. Front Psychol 2017; 8:447. [PMID: 28424639] [PMCID: PMC5372791] [DOI: 10.3389/fpsyg.2017.00447]
Abstract
Mood has been shown to influence cognitive performance. However, little is known about the influence of mood on sensory processing, specifically in the auditory domain. With the current study, we sought to investigate how auditory processing of neutral sounds is affected by the mood state of the listener. This was tested in two experiments by measuring masked auditory detection thresholds before and after a standard mood-induction procedure. In the first experiment (N = 76), mood was induced by imagining a mood-appropriate event combined with listening to mood-inducing music. In the second experiment (N = 80), imagining was combined with affective picture viewing to rule out any confounding of the results by acoustic properties of the music. In both experiments, the thresholds were determined by means of an adaptive staircase tracking method in a two-interval forced-choice task. Masked detection thresholds were compared between participants in four different moods (calm, happy, sad, and anxious), which enabled differentiation of mood effects along the dimensions of arousal and pleasure. Results of the two experiments were analyzed both separately and in a combined analysis. The first experiment showed that, while there was no impact of pleasure level on the masked threshold, lower arousal was associated with a lower threshold (higher masked sensitivity). However, as indicated by an interaction between experiment and arousal, arousal had a different effect on the threshold in Experiment 2, which showed a trend in the opposite direction. These results show that the effect of arousal on auditory masked sensitivity may depend on the modality of the mood-inducing stimuli. As clear conclusions regarding the genuineness of the arousal effect on the masked threshold cannot be drawn, suggestions for further research that could clarify this issue are provided.
Affiliation(s)
- Anna C Bolders, Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, Netherlands
- Guido P H Band, Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, Netherlands; Leiden Institute for Brain and Cognition, Leiden University, Leiden, Netherlands
- Pieter Jan M Stallen, Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, Netherlands
9. Johnson JS, O'Connor KN, Sutter ML. Segregating two simultaneous sounds in elevation using temporal envelope: Human psychophysics and a physiological model. J Acoust Soc Am 2015; 138:33-43. [PMID: 26233004] [PMCID: PMC4491017] [DOI: 10.1121/1.4922224]
Abstract
The ability to segregate simultaneous sound sources based on their spatial locations is an important aspect of auditory scene analysis. While the role of sound azimuth in segregation is well studied, the contribution of sound elevation remains unknown. Although previous studies in humans suggest that elevation cues alone are not sufficient to segregate simultaneous broadband sources, the current study demonstrates they can suffice. Listeners segregating a temporally modulated noise target from a simultaneous unmodulated noise distracter differing in elevation fall into two statistically distinct groups: one that identifies target direction accurately across a wide range of modulation frequencies (MF) and one that cannot identify target direction accurately and, on average, reports the opposite direction of the target for low MF. A non-spiking model of inferior colliculus neurons that process single-source elevation cues suggests that the performance of both listener groups at the population level can be accounted for by the balance of excitatory and inhibitory inputs in the model. These results establish the potential for broadband elevation cues to contribute to the computations underlying sound source segregation and suggest a potential mechanism underlying this contribution.
Affiliation(s)
- Jeffrey S Johnson, Center for Neuroscience, University of California at Davis, 1544 Newton Court, Davis, California 95618, USA
- Kevin N O'Connor, Center for Neuroscience, University of California at Davis, 1544 Newton Court, Davis, California 95618, USA
- Mitchell L Sutter, Center for Neuroscience, University of California at Davis, 1544 Newton Court, Davis, California 95618, USA
10. Koch I, Lawo V. The flip side of the auditory spatial selection benefit: larger attentional mixing costs for target selection by ear than by gender in auditory task switching. Exp Psychol 2014; 62:66-74. [PMID: 25384645] [DOI: 10.1027/1618-3169/a000274]
Abstract
In cued auditory task switching, one of two dichotically presented number words, spoken by a female and a male, had to be judged according to its numerical magnitude. One experimental group selected targets by speaker gender and another group by ear of presentation. In mixed-task blocks, the target-defining feature (male/female vs. left/right) was cued prior to each trial, but in pure blocks it remained constant. Compared to selection by gender, selection by ear led to better performance in pure blocks than in mixed blocks, resulting in larger "global" mixing costs for ear-based selection. Selection by ear also led to larger "local" switch costs in mixed blocks, but this finding was partially mediated by differential cue-repetition benefits. Together, the data suggest that requirements of attention shifting diminish the auditory spatial selection benefit.
Affiliation(s)
- Iring Koch, Institute of Psychology, RWTH Aachen University, Aachen, Germany
- Vera Lawo, Institute of Psychology, RWTH Aachen University, Aachen, Germany
11. Lehmann A, Schönwiesner M. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues. PLoS One 2014; 9:e85442. [PMID: 24454869] [PMCID: PMC3893196] [DOI: 10.1371/journal.pone.0085442]
Abstract
Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore, they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.
Affiliation(s)
- Alexandre Lehmann, Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada; Department of Psychology, University of Montreal, Montreal, Canada; Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
- Marc Schönwiesner, Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada; Department of Psychology, University of Montreal, Montreal, Canada; Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada; Montreal Neurological Institute, McGill University, Montreal, Canada
12. Du Y, He Y, Arnott SR, Ross B, Wu X, Li L, Alain C. Rapid tuning of auditory "what" and "where" pathways by training. Cereb Cortex 2015; 25:496-506. [PMID: 24042339] [DOI: 10.1093/cercor/bht251]
Abstract
Behavioral improvement within the first hour of training is commonly explained as procedural learning (i.e., strategy changes resulting from task familiarization). However, it may additionally reflect a rapid adjustment of the perceptual and/or attentional system in a goal-directed task. In support of this latter hypothesis, we show feature-specific gains in performance for groups of participants briefly trained to use either a spectral or spatial difference between 2 vowels presented simultaneously during a vowel identification task. In both groups, the neuromagnetic activity measured during the vowel identification task following training revealed source activity in auditory cortices, prefrontal, inferior parietal, and motor areas. More importantly, the contrast between the 2 groups revealed a striking double dissociation in which listeners trained on spectral or spatial cues showed higher source activity in ventral ("what") and dorsal ("where") brain areas, respectively. These feature-specific effects indicate that brief training can implicitly bias top-down processing to a trained acoustic cue and induce a rapid recalibration of the ventral and dorsal auditory streams during speech segregation and identification.
Affiliation(s)
- Yi Du, Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1; Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Yu He, Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Stephen R Arnott, Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Bernhard Ross, Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Xihong Wu, Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Liang Li, Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Claude Alain, Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1; Department of Psychology, University of Toronto, Ontario, Canada M8V 2S4
13. Measuring target detection performance in paradigms with high event rates. Clin Neurophysiol 2013; 124:928-40. [DOI: 10.1016/j.clinph.2012.11.012]
14. The fusion of unattended duration representations as indexed by the mismatch negativity (MMN). Brain Res 2012; 1435:118-29. [DOI: 10.1016/j.brainres.2011.10.043]
15. The auditory dorsal pathway: Orienting vision. Neurosci Biobehav Rev 2011; 35:2162-73. [PMID: 21530585] [DOI: 10.1016/j.neubiorev.2011.04.005]
16.
Abstract
Modern vehicle cockpits have begun to incorporate a number of information-rich technologies, including systems to enhance and improve driving and navigation performance as well as driving-irrelevant information systems. The visually intensive nature of the driving task requires these systems to adopt primarily nonvisual means of information display, and the auditory modality represents an obvious alternative to vision for interacting with in-vehicle technologies (IVTs). Although the literature on auditory displays has grown tremendously in recent decades, to date few guidelines or recommendations exist to aid in the design of effective auditory displays for IVTs. This chapter provides an overview of the current state of research and practice with auditory displays for IVTs. The role of basic auditory capabilities and limitations as they relate to in-vehicle auditory display design is discussed. Extant systems and prototypes are reviewed, and, where possible, design recommendations are made. Finally, research needs and an iterative design process to meet those needs are discussed.
Collapse
|
17
|
Decomposing the Garner interference paradigm: evidence for dissociations between macrolevel and microlevel performance. Atten Percept Psychophys 2010; 72:1676-91. [PMID: 20675810 DOI: 10.3758/app.72.6.1676] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Three Garner interference experiments are described in which baseline, filtering, and correlated performance were assessed at both a macrolevel (condition average) and microlevel (intertrial contingency), using the pair-wise combinations of auditory pitch, loudness, and location. Discrepancies between pairs of dimensions were revealed between macro- and microlevel estimates of performance and, also, between filtering costs and correlated benefits, relative to baseline. The examination of the intertrial effects associated with filtering costs suggested that effects of increased stimulus uncertainty were mandatory, whereas effects of irrelevant variation were not. The examination of the intertrial effects associated with correlated benefits suggested that the detection of stimulus repetition took precedence over that of stimulus change. Violations of standard horse race accounts of processing did not appear to stem from differences in the absolute or relative speeds of processing between dimensions but, rather, from the special role that certain dimensions (e.g., pitch) may play in certain modalities (e.g., audition). The utility of examining repetition effects is demonstrated by revealing a level of understanding regarding stimulus processing typically hidden by aggregated measures of performance.
Collapse
|
18
|
Dyson BJ. Trial after trial: General processing consequences as a function of repetition and change in multidimensional sound. Q J Exp Psychol (Hove) 2010; 63:1770-88. [DOI: 10.1080/17470210903514255] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
While there are pointers relating to the consequences of repetition, a general framework regarding the cognitive implications of processing multidimensional stimuli as a function of previous stimulus history is currently lacking. Three experiments using sounds varying in location and pitch were carried out, in which the immediate consequences of repeating or changing task-relevant and task-irrelevant attributes were orthogonally examined. A consistent pattern of data was shown, in that the magnitude of selective attention failure was larger when the task-relevant value repeated across trials, while differences between dimensions were larger when the task-relevant value changed across trials. These effects of irrelevance and dimension as a function of intertrial contingency are summarized in a model depicting the dynamic allocation of processing resource.
Collapse
|
19
|
Bizley JK, Walker KMM. Sensitivity and selectivity of neurons in auditory cortex to the pitch, timbre, and location of sounds. Neuroscientist 2010; 16:453-69. [PMID: 20530254 DOI: 10.1177/1073858410371009] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
We are able to rapidly recognize and localize the many sounds in our environment. We can describe any of these sounds in terms of various independent "features" such as their loudness, pitch, or position in space. However, we still know surprisingly little about how neurons in the auditory brain, specifically the auditory cortex, might form representations of these perceptual characteristics from the information that the ear provides about sound acoustics. In this article, the authors examine evidence that the auditory cortex is necessary for processing the pitch, timbre, and location of sounds, and document how neurons across multiple auditory cortical fields might represent these as trains of action potentials. They conclude by asking whether neurons in different regions of the auditory cortex might not simply be sensitive to each of these three sound features but might instead be selective for one of them. The few studies that have examined neural sensitivity to multiple sound attributes provide only limited support for neural selectivity within auditory cortex. Providing an explanation of the neural basis of feature invariance is thus one of the major challenges to sensory neuroscience in attaining the ultimate goal of understanding how neural firing patterns in the brain give rise to perception.
Collapse
Affiliation(s)
- Jennifer K Bizley
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom.
|
20
|
Hölig C, Berti S. To switch or not to switch: brain potential indices of attentional control after task-relevant and task-irrelevant changes of stimulus features. Brain Res 2010; 1345:164-75. [PMID: 20580694 DOI: 10.1016/j.brainres.2010.05.047] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2009] [Revised: 05/11/2010] [Accepted: 05/17/2010] [Indexed: 11/17/2022]
Abstract
Attention is controlled by the interplay of sensory input and top-down processes. We compared attentional control processes during task switching and reorientation after distraction. The primary task was to discriminate laterally and centrally presented tones; these stimuli were composed of a frequent standard or an infrequent deviant pitch. In the distraction condition, pitch was irrelevant and could be ignored. In the switch condition, pitch changes were relevant: whenever a deviant tone was presented, participants had to discriminate its pitch and not its direction. The task in standard trials remained unchanged. In both conditions, deviants elicited mismatch negativity (MMN), P3a, P3b, and reorienting negativity (RON). We, therefore, suggest that distraction and switching are triggered by the same system of attentional control. In addition, remarkable differences were observable between the two conditions: In the switch condition the MMN was followed by a more pronounced N2b and P3a. The differences between these components support the idea that in the distraction condition, a switch of attention is only initiated but not completely performed.
Collapse
Affiliation(s)
- Cordula Hölig
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany.
|
21
|
Hill KT, Miller LM. Auditory attentional control and selection during cocktail party listening. Cereb Cortex 2009; 20:583-90. [PMID: 19574393 DOI: 10.1093/cercor/bhp124] [Citation(s) in RCA: 126] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
In realistic auditory environments, people rely on both attentional control and attentional selection to extract intelligible signals from a cluttered background. We used functional magnetic resonance imaging to examine auditory attention to natural speech under such high processing-load conditions. Participants attended to a single talker in a group of 3, identified by the target talker's pitch or spatial location. A catch-trial design allowed us to distinguish activity due to top-down control of attention versus attentional selection of bottom-up information in both the spatial and spectral (pitch) feature domains. For attentional control, we found a left-dominant fronto-parietal network with a bias toward spatial processing in dorsal precentral sulcus and superior parietal lobule, and a bias toward pitch in inferior frontal gyrus. During selection of the talker, attention modulated activity in left intraparietal sulcus when using talker location and in bilateral but right-dominant superior temporal sulcus when using talker pitch. We argue that these networks represent the sources and targets of selective attention in rich auditory environments.
Collapse
Affiliation(s)
- Kevin T Hill
- Center for Mind and Brain, University of California Davis, Davis, CA 95618, USA
|
22
|
Abstract
Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.
Collapse
|
23
|
Abstract
Attentional capture by color singletons during shape search can be eliminated when the target is not a feature singleton (Bacon & Egeth, 1994). This suggests that a "singleton detection" search strategy must be adopted for attentional capture to occur. Here we find similar effects on auditory attentional capture. Irrelevant high-intensity singletons interfered with an auditory search task when the target itself was also a feature singleton. However, singleton interference was eliminated when the target was not a singleton (i.e., when nontargets were made heterogeneous, or when more than one target sound was presented). These results suggest that auditory attentional capture depends on the observer's attentional set, as does visual attentional capture. The suggestion that hearing might act as an early warning system that would always be tuned to unexpected unique stimuli must therefore be modified to accommodate these strategy-dependent capture effects.
Collapse
Affiliation(s)
- Polly Dalton
- Department of Psychology, Royal Holloway University of London, Egham, Surrey, England.
|
24
|
Pichora-Fuller MK, Singh G. Effects of age on auditory and cognitive processing: implications for hearing aid fitting and audiologic rehabilitation. Trends Amplif 2006; 10:29-59. [PMID: 16528429 PMCID: PMC4111543 DOI: 10.1177/108471380601000103] [Citation(s) in RCA: 274] [Impact Index Per Article: 15.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Recent advances in research and clinical practice concerning aging and auditory communication have been driven by questions about age-related differences in peripheral hearing, central auditory processing, and cognitive processing. A "site-of-lesion" view based on anatomic levels inspired research to test competing hypotheses about the contributions of changes at these three levels of the nervous system. A "processing" view based on psychologic functions inspired research to test alternative hypotheses about how lower-level sensory processes and higher-level cognitive processes interact. In the present paper, we suggest that these two views can begin to be unified following the example set by the cognitive neuroscience of aging. The early pioneers of audiology anticipated such a unified view, but today, advances in science and technology make it both possible and necessary. Specifically, we argue that a synthesis of new knowledge concerning the functional neuroscience of auditory cognition is necessary to inform the design and fitting of digital signal processing in "intelligent" hearing devices, as well as to inform best practices for resituating hearing aid fitting in a broader context of audiologic rehabilitation. Long-standing approaches to rehabilitative audiology should be revitalized to emphasize the important role that training and therapy play in promoting compensatory brain reorganization as older adults acclimatize to new technologies. The purpose of the present paper is to provide an integrated framework for understanding how auditory and cognitive processing interact when older adults listen, comprehend, and communicate in realistic situations, to review relevant models and findings, and to suggest how new knowledge about age-related changes in audition and cognition may influence future developments in hearing aid fitting and audiologic rehabilitation.
Collapse
Affiliation(s)
- M Kathleen Pichora-Fuller
- Department of Psychology, University of Toronto, 3359 Mississauga Road, Mississauga, Ontario, Canada L5L 1C6.
|
25
|
Lange K, Krämer UM, Röder B. Attending points in time and space. Exp Brain Res 2006; 173:130-40. [PMID: 16506009 DOI: 10.1007/s00221-006-0372-3] [Citation(s) in RCA: 50] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2005] [Accepted: 01/16/2006] [Indexed: 10/25/2022]
Abstract
Both spatial and temporal attention improve auditory processing, and these effects seem to originate at perceptual processing stages. It is not yet known whether space and time are used in parallel or sequentially for stimulus selection. To directly compare when temporal and spatial attention affect stimulus processing in the auditory modality, short and long empty intervals (600 and 1,200 ms) were presented. Each interval started with a centrally presented tone (S1) and ended with a second tone (S2) presented either on the left or on the right side. Participants had to attend one point in time (offset of the short or long interval) and one position (left or right side) and had to respond to infrequent, deviant offset markers presented at the attended time point and at the attended position. The N1 of concurrently recorded event-related potentials (ERPs) to the frequent standard stimuli was enhanced by both temporal and spatial attention. The temporal and the spatial N1 attention effect had a similar scalp topography, suggesting common neural generators. By contrast, later effects of temporal and spatial attention, consisting of a posterior positivity and an anterior negativity, markedly differed.
Collapse
Affiliation(s)
- Kathrin Lange
- Department of Experimental Psychology, Heinrich Heine University, 40225, Düsseldorf, Germany.
|
26
|
Kidd G, Arbogast TL, Mason CR, Gallun FJ. The advantage of knowing where to listen. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2005; 118:3804-15. [PMID: 16419825 DOI: 10.1121/1.2109187] [Citation(s) in RCA: 173] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
This study examined the role of focused attention along the spatial (azimuthal) dimension in a highly uncertain multitalker listening situation. The task of the listener was to identify key words from a target talker in the presence of two other talkers simultaneously uttering similar sentences. When the listener had no a priori knowledge about target location, or which of the three sentences was the target sentence, performance was relatively poor, near the value expected simply from choosing to focus attention on only one of the three locations. When the target sentence was cued before the trial, but location was uncertain, performance improved significantly relative to the uncued case. When spatial location information was provided before the trial, performance improved significantly for both cued and uncued conditions. If the location of the target was certain, proportion correct identification performance was higher than 0.9, independent of whether the target was cued beforehand. In contrast to studies in which known versus unknown spatial locations were compared for relatively simple stimuli and tasks, the results of the current experiments suggest that the focus of attention along the spatial dimension can play a very significant role in solving the "cocktail party" problem.
Collapse
Affiliation(s)
- Gerald Kidd
- Department of Speech, Language and Hearing Sciences and Hearing Research Center, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA.
|
27
|
Justus T, List A. Auditory attention to frequency and time: an analogy to visual local-global stimuli. Cognition 2005; 98:31-51. [PMID: 16297675 PMCID: PMC1987383 DOI: 10.1016/j.cognition.2004.11.001] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2004] [Revised: 07/27/2004] [Accepted: 11/11/2004] [Indexed: 10/26/2022]
Abstract
Two priming experiments demonstrated exogenous attentional persistence to the fundamental auditory dimensions of frequency (Experiment 1) and time (Experiment 2). In a divided-attention task, participants responded to an independent dimension, the identification of three-tone sequence patterns, for both prime and probe stimuli. The stimuli were specifically designed to parallel the local-global hierarchical letter stimuli of [Navon D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-383] and the task was designed to parallel subsequent work in visual attention using Navon stimuli [Robertson, L. C. (1996). Attentional persistence for features of hierarchical patterns. Journal of Experimental Psychology: General, 125, 227-249; Ward, L. M. (1982). Determinants of attention to local and global features of visual forms. Journal of Experimental Psychology: Human Perception and Performance, 8, 562-581]. The results are discussed in terms of previous work in auditory attention and previous approaches to auditory local-global processing.
Collapse
|
28
|
Dyson BJ, Alain C, He Y. Effects of visual attentional load on low-level auditory scene analysis. COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2005; 5:319-38. [PMID: 16396093 DOI: 10.3758/cabn.5.3.319] [Citation(s) in RCA: 50] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The sharing of processing resources between the senses was investigated by examining the effects of visual task load on auditory event-related brain potentials (ERPs). In Experiment 1, participants completed both a zero-back and a one-back visual task while a tone pattern or a harmonic series was presented. N1 and P2 waves were modulated by visual task difficulty, but neither mismatch negativity (MMN) elicited by deviant stimuli from the tone pattern nor object-related negativity (ORN) elicited by mistuning from the harmonic series was affected. In Experiment 2, participants responded to identity (what) or location (where) in vision, while ignoring sounds alternating in either pitch (what) or location (where). Auditory ERP modulations were consistent with task difficulty, rather than with task specificity. In Experiment 3, we investigated auditory ERP generation under conditions of no visual task. The results are discussed with respect to a distinction between process-general (N1 and P2) and process-specific (MMN and ORN) auditory ERPs.
Collapse
Affiliation(s)
- Benjamin J Dyson
- Department of Psychology, University of Sussex, Falmer, Brighton, England.
|
29
|
Tremblay S, Vachon F, Jones DM. Attentional and perceptual sources of the auditory attentional blink. Percept Psychophys 2005; 67:195-208. [PMID: 15971684 DOI: 10.3758/bf03206484] [Citation(s) in RCA: 45] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
When a rapid succession of auditory stimuli is listened to, processing of the second of two successive targets among fillers is often impaired, a phenomenon known as the attentional blink (AB). Three experiments were conducted to examine the role of filler items in modulating the size of the auditory AB, using a two-alternative forced choice discrimination paradigm. In the first experiment, dual-stream presentations in which low- and high-pitch items were separated by six semitones were tested. A transient deficit in reporting the probe was observed in the presence of fillers that was greater when fillers were in the same stream as the probe. In the absence of a filler, there was a residual deficit, but this was not related to the time lag between the target and the probe. In the second and third experiments, in which single-stream presentations were used, a typical AB was found in the presence of homogeneous fillers, but heterogeneous fillers tended to produce a greater deficit. In the absence of a filler, there was little or no evidence of a blink. The pattern of results suggests that other attentional and perceptual factors contribute to the blink.
Collapse
|
30
|
Forster B, Eimer M. The attentional selection of spatial and non-spatial attributes in touch: ERP evidence for parallel and independent processes. Biol Psychol 2004; 66:1-20. [PMID: 15019167 DOI: 10.1016/j.biopsycho.2003.08.001] [Citation(s) in RCA: 42] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2003] [Accepted: 08/11/2003] [Indexed: 10/26/2022]
Abstract
To investigate the functional relationship between spatial and non-spatial attentional selectivity in somatosensory processing, event-related potentials (ERPs) were recorded to mechanical tactile stimuli, which were delivered to the right or left hand, and were low or high in frequency (Experiment 1), or soft or strong in intensity (Experiment 2). Participants' task was to attend to a specific combination of one stimulus location and one non-spatial attribute. Spatial attention was reflected in enhanced N140 components followed by a sustained attentional negativity. ERP effects of non-spatial attention (enhanced negativities to the attended frequency or intensity) were observed in the same latency range, suggesting that the attentional selection of relevant spatial and non-spatial attributes occurs in parallel. Most importantly, ERP correlates of attention directed to stimulus frequency and intensity were unaffected by the current focus of spatial attention. In contrast to vision, where the selective processing of non-spatial attributes is hierarchically dependent on selection by location, but similar to auditory attention, spatial and non-spatial attentional selectivity appear to operate independently in touch.
Collapse
Affiliation(s)
- Bettina Forster
- School of Psychology, Birkbeck College, University of London, Malet Street, London WC1E 7HX, UK.
|
31
|
Abstract
The phenomenon of attentional capture by a unique yet irrelevant singleton distractor has typically been studied in visual search. In this article, the authors examine whether a similar phenomenon occurs in the auditory domain. Participants searched sequences of sounds for targets defined by frequency, intensity, or duration. The presence of a singleton distractor that was unique on an irrelevant dimension (e.g., a low-frequency singleton in search for a target of high intensity) was associated with search costs in both detection and discrimination tasks. However, if the singleton feature coincided with the target item, search was facilitated. These results establish the phenomenon of auditory attentional capture.
Collapse
Affiliation(s)
- Polly Dalton
- Department of Psychology, University College London, United Kingdom.
|
32
|
Abstract
In 3 experiments, the authors tested performance in simple tone matching and classification tasks. Each tone was defined on location and frequency dimensions. In the first 2 experiments, participants completed a same-different matching task on the basis of one of these dimensions while attempting to ignore irrelevant variation in the other dimension. In Experiment 3, in which the tones were classified either by frequency or location, the authors explored intertrial repetition effects. The patterns of performance across these different tasks were remarkably similar and were taken to reveal basic characteristics of stimulus encoding processes. The data suggest a processing sequence in audition that reveals an early stage in which location and frequency are treated as being integral and a latter stage in which location and frequency are separable.
Collapse
|
33
|
Abstract
The effects of attention on the neural processes underlying auditory scene analysis were investigated through the manipulation of auditory task load. Participants were asked to focus their attention on tuned and mistuned stimuli presented to one ear and to ignore similar stimuli presented to the other ear. For both tuned and mistuned sounds, long (standard) and shorter (deviant) duration stimuli were presented in both ears. Auditory task load was manipulated by varying task instructions. In the easier condition, participants were asked to press a button for deviant sounds (target) at the attended location, irrespective of tuning. In the harder condition, participants were further asked to identify whether the targets were tuned or mistuned. Participants were faster in detecting targets defined by duration only than by both duration and tuning. At the unattended location, deviant stimuli generated a mismatch negativity wave at frontocentral sites whose amplitude decreased with increasing task demand. In comparison, standard mistuned stimuli generated an object-related negativity at central sites whose amplitude was not affected by task difficulty. These results show that the processing of sound sequences is affected differently by attentional load than is the processing of sounds that occur simultaneously (i.e., sequential vs. simultaneous grouping processes), and that these two kinds of grouping recruit distinct neural networks.
Collapse
|
34
|
Abstract
Visual feature integration theory was one of the most influential theories of visual information processing in the last quarter of the 20th century. This article provides an exposition of the theory and a review of the associated data. In the past much emphasis has been placed on how the theory explains performance in various visual search tasks. The relevant literature is discussed and alternative accounts are described. Amendments to the theory are also set out. Many other issues concerning internal processes and representations implicated by the theory are reviewed. The article closes with a synopsis of what has been learned from consideration of the theory, and it is concluded that some of the issues may remain intractable unless appropriate neuroscientific investigations are carried out.
Collapse
Affiliation(s)
- Philip T Quinlan
- Department of Psychology, University of York, Heslington, United Kingdom.
|
35
|
Affiliation(s)
- David Van Valkenburg
- Department of Psychology, The University of Virginia, Charlottesville, VA 22904-4400, USA.
|
36
|
Dyson BJ, Quinlan PT. Feature and conjunction processing in the auditory modality. PERCEPTION & PSYCHOPHYSICS 2003; 65:254-72. [PMID: 12713242 DOI: 10.3758/bf03194798] [Citation(s) in RCA: 18] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In five experiments, participants made speeded target/nontarget classification responses to singly presented auditory stimuli. Stimuli were defined via vocal identity and location in Experiments 1 and 2 and frequency and location in the remaining experiments. Performance was examined in two conditions inspired by visual search: In the feature condition, responses were based on the detection of unique stimulus features; in the conjunction condition, unique combinations of features were critical. Experiment 1 showed a conjunction benefit, since classifications were faster in the conjunction condition than in the feature condition. Potential confounds were eliminated in Experiments 2 and 3, which resulted in the observation of conjunction costs. In Experiments 4 and 5, we examined, respectively, whether the cost could be explained in terms of differences in interstimulus similarity and target template complexity across the main conditions. Both accounts were refuted. It seems that when the identification of particular feature combinations is necessary, conjunction processing in audition becomes an effortful process.
Collapse
|
37
|
Abstract
Although a great deal is now known about the peripheral sensory mechanisms involved in tactile information processing [Ann Rev Psychol 1990;50:305], it is only more recently that we have started to gain a clearer understanding of the effects of selective attention on tactile perception [Front Biosci 2000;5:D894]. To date, the majority of this selective attention research has considered each modality in isolation. However, in order to deal with the multimodal selection problems of everyday life, we need to be able to coordinate our selective attention cross-modally [Philos Trans R Soc, Sec B 1998:353; Curr Biol 2000;10:R731]. In this review, I will highlight the results of behavioral studies demonstrating the existence of extensive cross-modal links in selective attention between touch, vision, audition, and even olfaction. In particular, the review is structured around two key research questions: First, "Can attention can be selectively directed to a particular sensory modality?", and second "Are there cross-modal links in spatial attention?". The results of recent neuroimaging studies that have started to elucidate some of the neural mechanisms underlying these cross-modal attentional effects are also discussed, and potential questions for future research outlined.
Collapse
Affiliation(s)
- Charles Spence
- Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK.
|
38
|
Dyson BJ, Quinlan PT. Within- and between-dimensional processing in the auditory modality. J Exp Psychol Hum Percept Perform 2002. [DOI: 10.1037/0096-1523.28.6.1483] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
39
|
Woods DL, Alain C. Conjoining three auditory features: an event-related brain potential study. J Cogn Neurosci 2001; 13:492-509. [PMID: 11388922 DOI: 10.1162/08989290152001916] [Citation(s) in RCA: 45] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The mechanisms of auditory feature processing and conjunction were examined with event-related brain potential (ERP) recording in a task in which participants responded to target tones defined by the combination of location, frequency, and duration features amid distractor tones varying randomly along all feature dimensions. Attention effects were isolated as negative difference (Nd) waves by subtracting ERPs to tones with no target features from ERPs to tones with one, two, or three target features. Nd waves were seen to all tones sharing a single feature with the target, including tones sharing only target duration. Nd waves associated with the analysis of frequency and location features began at latencies of 60 msec, whereas Nd-Duration waves began at 120 msec. Nd waves to tones with single target features continued until 400+ msec, suggesting that once begun, the analysis of tone features continued exhaustively to conclusion. Nd-Frequency and Nd-Location waves had distinct scalp distributions, consistent with generation in different auditory cortical areas. Three stages of feature processing were identified: (1) Parallel feature processing (60-140 msec): Nd waves combined linearly, such that Nd-wave amplitudes following tones with two or three target features were equal to the sum of the Nd waves elicited by tones with only one target feature. (2) Conjunction-specific (CS) processing (140-220 msec): Nd amplitudes were enhanced following tones with any pair of attended features. (3) Target-specific (TS) processing (220-300 msec): Nd amplitudes were specifically enhanced to target tones with all three features. These results are consistent with a facilitatory interactive feature analysis (FIFA) model in which feature conjunction is associated with the amplified processing of individual stimulus features. Activation of N-methyl-D-aspartate (NMDA) receptors is proposed to underlie the FIFA process.
Collapse
Affiliation(s)
- D L Woods
- University of California-Davis and Northern California System of Clinics, USA.
|