1. Shen D, Ross B, Alain C. Temporal deployment of attention in musicians: Evidence from an attentional blink paradigm. Ann N Y Acad Sci 2023; 1530:110-123. [PMID: 37823710] [DOI: 10.1111/nyas.15069]
Abstract
The generalization of music training to unrelated nonmusical domains is well established and may reflect musicians' superior ability to regulate attention. We investigated the temporal deployment of attention in musicians and nonmusicians using scalp-recorded event-related potentials in an attentional blink (AB) paradigm. Participants listened to rapid sequences of stimuli and identified target and probe sounds. The AB was defined as a deficit in probe identification when the probe closely follows the target. The sequence of stimuli was preceded by a neutral or an informative cue about the probe's position within the sequence. Musicians outperformed nonmusicians in identifying the target and probe. In both groups, cueing improved target and probe identification and reduced the AB. The informative cue elicited a sustained potential, which was more prominent in musicians than in nonmusicians over left temporal areas and was followed by a larger N1 amplitude elicited by the target. The N1 was larger in musicians than in nonmusicians, and its amplitude over the left frontocentral cortex of musicians correlated with accuracy. Together, these results reveal musicians' superior ability to regulate attention, allowing them to prepare for incoming stimuli and thereby improving sound object identification. This capacity to manage attentional resources to optimize task performance may generalize to nonmusical activities.
Affiliation(s)
- Dawei Shen
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, University of Toronto, Toronto, Ontario, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, University of Toronto, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
2. The time course of auditory recognition measured with rapid sequences of short natural sounds. Sci Rep 2019; 9:8005. [PMID: 31142750] [PMCID: PMC6541711] [DOI: 10.1038/s41598-019-43126-5]
Abstract
Human listeners are able to recognize accurately an impressive range of complex sounds, such as musical instruments or voices. The underlying mechanisms are still poorly understood. Here, we aimed to characterize the processing time needed to recognize a natural sound. To do so, by analogy with the “rapid visual sequential presentation paradigm”, we embedded short target sounds within rapid sequences of distractor sounds. The core hypothesis is that any correct report of the target implies that sufficient processing for recognition had been completed before the time of occurrence of the subsequent distractor sound. We conducted four behavioral experiments using short natural sounds (voices and instruments) as targets or distractors. We report the effects on performance, as measured by the fastest presentation rate for recognition, of sound duration, number of sounds in a sequence, the relative pitch between target and distractors and target position in the sequence. Results showed a very rapid auditory recognition of natural sounds in all cases. Targets could be recognized at rates up to 30 sounds per second. In addition, the best performance was observed for voices in sequences of instruments. These results give new insights about the remarkable efficiency of timbre processing in humans, using an original behavioral paradigm to provide strong constraints on future neural models of sound recognition.
3. Multisensory feature integration in (and out) of the focus of spatial attention. Atten Percept Psychophys 2019; 82:363-376. [DOI: 10.3758/s13414-019-01813-5]
4. Object-based attention in complex, naturalistic auditory streams. Sci Rep 2019; 9:2854. [PMID: 30814547] [PMCID: PMC6393668] [DOI: 10.1038/s41598-019-39166-6]
Abstract
In vision, objects have been described as the 'units' on which non-spatial attention operates in many natural settings. Here, we tested the idea of object-based attention in the auditory domain within ecologically valid auditory scenes composed of two spatially and temporally overlapping sound streams (a speech signal vs. environmental soundscapes in Experiment 1, and two speech signals in Experiment 2). Top-down attention was directed to one or the other auditory stream by a non-spatial cue. To test for high-level, object-based attention effects, we introduced an auditory repetition detection task in which participants had to detect brief repetitions of auditory objects, ruling out any possible confounds with spatial or feature-based attention. Participants' responses were significantly faster and more accurate in the valid-cue condition than in the invalid-cue condition, indicating a robust cue-validity effect of high-level, object-based auditory attention.
5. Evidence for cue-independent spatial representation in the human auditory cortex during active listening. Proc Natl Acad Sci U S A 2017; 114:E7602-E7611. [PMID: 28827357] [DOI: 10.1073/pnas.1707522114]
Abstract
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
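As background for the ITD/ILD cues discussed in this abstract, the interaural time difference for a given source azimuth can be approximated with the classic Woodworth spherical-head model. The sketch below is an illustration of that textbook approximation only, not code or parameter values from the cited study; the head radius and speed of sound are assumed defaults.

```python
import math

def woodworth_itd(azimuth_deg: float, head_radius_m: float = 0.0875,
                  speed_of_sound_m_s: float = 343.0) -> float:
    """ITD in seconds for a source at the given azimuth (0 = straight ahead),
    from the Woodworth path-length difference r * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return head_radius_m * (theta + math.sin(theta)) / speed_of_sound_m_s

# A source fully to one side (90 degrees) gives roughly 0.66 ms,
# near the upper limit of naturally occurring human ITDs.
print(round(woodworth_itd(90.0) * 1000, 2))
```

Parametrically varying azimuth in such a model is one way studies generate the graded ITD values whose cortical response functions the abstract describes.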
6. Forth J, Agres K, Purver M, Wiggins GA. Entraining IDyOT: Timing in the Information Dynamics of Thinking. Front Psychol 2016; 7:1575. [PMID: 27803682] [PMCID: PMC5067415] [DOI: 10.3389/fpsyg.2016.01575]
Abstract
We present a novel hypothetical account of entrainment in music and language in the context of the Information Dynamics of Thinking model, IDyOT. The extended model affords an alternative view of entrainment, and of its companion term, pulse, compared with earlier accounts. The model is based on hierarchical, statistical prediction, modeling expectations of both what an event will be and when it will happen. As such, it constitutes a kind of predictive coding, with a particular novel hypothetical implementation. Here, we focus on the model's mechanism for predicting when a perceptual event will happen, given an existing sequence of past events, which may be musical or linguistic. We propose a range of tests to validate or falsify the model, at various levels of abstraction, and argue that computational modeling in general, and this model in particular, can offer a means of providing limited but useful evidence for evolutionary hypotheses.
Affiliation(s)
- Geraint A. Wiggins
- Computational Creativity Lab, Computational Linguistics Lab, Cognitive Science Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
7. Du Y, He Y, Arnott SR, Ross B, Wu X, Li L, Alain C. Rapid tuning of auditory "what" and "where" pathways by training. Cereb Cortex 2015; 25:496-506. [PMID: 24042339] [DOI: 10.1093/cercor/bht251]
Abstract
Behavioral improvement within the first hour of training is commonly explained as procedural learning (i.e., strategy changes resulting from task familiarization). However, it may additionally reflect a rapid adjustment of the perceptual and/or attentional system in a goal-directed task. In support of this latter hypothesis, we show feature-specific gains in performance for groups of participants briefly trained to use either a spectral or spatial difference between 2 vowels presented simultaneously during a vowel identification task. In both groups, the neuromagnetic activity measured during the vowel identification task following training revealed source activity in auditory cortices, prefrontal, inferior parietal, and motor areas. More importantly, the contrast between the 2 groups revealed a striking double dissociation in which listeners trained on spectral or spatial cues showed higher source activity in ventral ("what") and dorsal ("where") brain areas, respectively. These feature-specific effects indicate that brief training can implicitly bias top-down processing to a trained acoustic cue and induce a rapid recalibration of the ventral and dorsal auditory streams during speech segregation and identification.
Affiliation(s)
- Yi Du
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1; Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Yu He
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Stephen R Arnott
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Xihong Wu
- Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Liang Li
- Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1; Department of Psychology, University of Toronto, Ontario, Canada M8V 2S4
8. Suied C, Agus TR, Thorpe SJ, Pressnitzer D. Processing of short auditory stimuli: the rapid audio sequential presentation paradigm (RASP). Adv Exp Med Biol 2013; 787:443-51. [PMID: 23716251] [DOI: 10.1007/978-1-4614-1590-9_49]
Abstract
Human listeners seem to be remarkably able to recognise acoustic sound sources based on timbre cues. Here we describe a psychophysical paradigm to estimate the time it takes to recognise a set of complex sounds differing only in timbre cues: both in terms of the minimum duration of the sounds and the inferred neural processing time. Listeners had to respond to the human voice while ignoring a set of distractors. All sounds were recorded from natural sources over the same pitch range and equalised to the same duration and power. In a first experiment, stimuli were gated in time with a raised-cosine window of variable duration and random onset time. A voice/non-voice (yes/no) task was used. Performance, as measured by d', remained above chance for the shortest sounds tested (2 ms); d's above 1 were observed for durations longer than or equal to 8 ms. Then, we constructed sequences of short sounds presented in rapid succession. Listeners were asked to report the presence of a single voice token that could occur at a random position within the sequence. This method is analogous to the "rapid sequential visual presentation" paradigm (RSVP), which has been used to evaluate neural processing time for images. For 500-ms sequences made of 32-ms and 16-ms sounds, d' remained above chance for presentation rates of up to 30 sounds per second. There was no effect of the pitch relation between successive sounds: identical for all sounds in the sequence or random for each sound. This implies that the task was not determined by streaming or forward masking, as both phenomena would predict better performance for the random pitch condition. Overall, the recognition of familiar sound categories such as the voice seems to be surprisingly fast, both in terms of the acoustic duration required and of the underlying neural time constants.
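For readers unfamiliar with the sensitivity index d' reported in this abstract, it is computed from hit and false-alarm rates using standard signal detection theory. The sketch below shows that textbook computation only; the example rates are hypothetical, not data from the cited study.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index: z(hit rate) - z(false-alarm rate),
    where z is the inverse standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical example: 69% hits with 31% false alarms gives d' of about 1,
# the level the authors report for durations of 8 ms and longer.
print(round(d_prime(0.69, 0.31), 2))
```

Because d' separates sensitivity from response bias, performance "above chance" in the abstract corresponds to d' reliably greater than 0 rather than to raw percent correct.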
Affiliation(s)
- Clara Suied
- Département d'études cognitives, Ecole Normale Supérieure, Paris, France.
9. Temporally selective attention supports speech processing in 3- to 5-year-old children. Dev Cogn Neurosci 2011; 2:120-8. [PMID: 22682733] [DOI: 10.1016/j.dcn.2011.03.002]
Abstract
Recent event-related potential (ERP) evidence demonstrates that adults employ temporally selective attention to preferentially process the initial portions of words in continuous speech. Doing so is an effective listening strategy since word-initial segments are highly informative. Although the development of this process remains unexplored, directing attention to word onsets may be important for speech processing in young children who would otherwise be overwhelmed by the rapidly changing acoustic signals that constitute speech. We examined the use of temporally selective attention in 3- to 5-year-old children listening to stories by comparing ERPs elicited by attention probes presented at four acoustically matched times relative to word onsets: concurrently with a word onset, 100 ms before, 100 ms after, and at random control times. By 80 ms, probes presented at and after word onsets elicited a larger negativity than probes presented before word onsets or at control times. The latency and distribution of this effect is similar to temporally and spatially selective attention effects measured in adults and, despite differences in polarity, spatially selective attention effects measured in children. These results indicate that, like adults, preschool aged children modulate temporally selective attention to preferentially process the initial portions of words in continuous speech.
10. Du Y, He Y, Ross B, Bardouille T, Wu X, Li L, Alain C. Human auditory cortex activity shows additive effects of spectral and spatial cues during speech segregation. Cereb Cortex 2011; 21:698-707. [PMID: 20685854] [DOI: 10.1093/cercor/bhq136]
Affiliation(s)
- Yi Du
- Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China 100871
11. Shi LF, Law Y. Masking effects of speech and music: does the masker's hierarchical structure matter? Int J Audiol 2010; 49:296-308. [PMID: 20151877] [DOI: 10.3109/14992020903350188]
Abstract
Speech and music are time-varying signals organized by parallel hierarchical rules. Through a series of four experiments, this study compared the masking effects of single-talker speech and instrumental music on speech perception while manipulating the complexity of hierarchical and temporal structures of the maskers. Listeners' word recognition was found to be similar between hierarchically intact and disrupted speech or classical music maskers (Experiment 1). When sentences served as the signal, significantly greater masking effects were observed with disrupted than intact speech or classical music maskers (Experiment 2), although not with jazz or serial music maskers, which differed from the classical music masker in their hierarchical structures (Experiment 3). Removing the classical music masker's temporal dynamics or partially restoring it affected listeners' sentence recognition; yet, differences in performance between intact and disrupted maskers remained robust (Experiment 4). Hence, the effect of structural expectancy was largely present across maskers when comparing them before and after their hierarchical structure was purposefully disrupted. This effect seemed to lend support to the auditory stream segregation theory.
Affiliation(s)
- Lu-Feng Shi
- Department of Communication Sciences and Disorders, Long Island University - Brooklyn Campus, New York 11201, USA.
12. Gabriel DN, Munoz DP, Boehnke SE. The eccentricity effect for auditory saccadic reaction times is independent of target frequency. Hear Res 2010; 262:19-25. [PMID: 20138978] [DOI: 10.1016/j.heares.2010.01.016]
Abstract
Although much is understood about the stimulus properties affecting the latency of saccadic eye movements to visual targets, relatively little is known about the properties affecting saccades to auditory targets. This study examined the effect of three primary acoustic features (frequency, intensity, and spatial location) on auditory saccade characteristics in humans, and compared them to visual saccades. Saccade targets were presented from an azimuthal array of speakers and LEDs spanning +/-36 degrees. There was an 'eccentricity effect' for auditory saccades such that latencies decreased by up to 70 ms with eccentricity. This was observed for all frequencies and intensities tested. There was a smaller effect in the opposite direction for visual saccades. Auditory saccades had similar latencies to visual saccades (within 5 ms) for near-midline locations, but were up to 90 ms faster at eccentric locations (+/-36 degrees). Overall, saccadic latencies were shortest for wideband noise and narrowband noises with center frequencies falling within the human speech range. Examination of saccade accuracy showed decreasing accuracy with increasing eccentricity, and a negative correlation between accuracy and latency for auditory stimuli.
Affiliation(s)
- Denise N Gabriel
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada K7L 3N6
13. Elhilali M, Xiang J, Shamma SA, Simon JZ. Interaction between attention and bottom-up saliency mediates the representation of foreground and background in an auditory scene. PLoS Biol 2009; 7:e1000129. [PMID: 19529760] [PMCID: PMC2690434] [DOI: 10.1371/journal.pbio.1000129]
Abstract
The mechanism by which a complex auditory scene is parsed into coherent objects depends on poorly understood interactions between task-driven and stimulus-driven attentional processes. We illuminate these interactions in a simultaneous behavioral-neurophysiological study in which we manipulate participants' attention to different features of an auditory scene (with a regular target embedded in an irregular background). Our experimental results reveal that attention to the target, rather than to the background, correlates with a sustained (steady-state) increase in the measured neural target representation over the entire stimulus sequence, beyond auditory attention's well-known transient effects on onset responses. This enhancement, in both power and phase coherence, occurs exclusively at the frequency of the target rhythm, and is only revealed when contrasting two attentional states that direct participants' focus to different features of the acoustic stimulus. The enhancement originates in auditory cortex and covaries with both behavioral task and the bottom-up saliency of the target. Furthermore, the target's perceptual detectability improves over time, correlating strongly, within participants, with the target representation's neural buildup. These results have substantial implications for models of foreground/background organization, supporting a role of neuronal temporal synchrony in mediating auditory object formation.
Affiliation(s)
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
- Juanjuan Xiang
- Starkey Laboratories, Eden Prairie, Minnesota, United States of America
- Shihab A. Shamma
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America
- Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America
- Jonathan Z. Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America
- Department of Biology, University of Maryland, College Park, Maryland, United States of America
14. Degerman A, Rinne T, Särkkä AK, Salmi J, Alho K. Selective attention to sound location or pitch studied with event-related brain potentials and magnetic fields. Eur J Neurosci 2008; 27:3329-41. [PMID: 18598270] [DOI: 10.1111/j.1460-9568.2008.06286.x]
Abstract
Event-related brain potentials (ERPs) and magnetic fields (ERFs) were used to compare brain activity associated with selective attention to sound location or pitch in humans. Sixteen healthy adults participated in the ERP experiment, and 11 adults in the ERF experiment. In different conditions, the participants focused their attention on a designated sound location or pitch, or pictures presented on a screen, in order to detect target sounds or pictures among the attended stimuli. In the Attend Location condition, the location of sounds varied randomly (left or right), while their pitch (high or low) was kept constant. In the Attend Pitch condition, sounds of varying pitch (high or low) were presented at a constant location (left or right). Consistent with previous ERP results, selective attention to either sound feature produced a negative difference (Nd) between ERPs to attended and unattended sounds. In addition, ERPs showed a more posterior scalp distribution for the location-related Nd than for the pitch-related Nd, suggesting partially different generators for these Nds. The ERF source analyses found no source distribution differences between the pitch-related Ndm (the magnetic counterpart of the Nd) and location-related Ndm in the superior temporal cortex (STC), where the main sources of the Ndm effects are thought to be located. Thus, the ERP scalp distribution differences between the location-related and pitch-related Nd effects may have been caused by activity of areas outside the STC, perhaps in the inferior parietal regions.
15. Krumbholz K, Eickhoff SB, Fink GR. Feature- and object-based attentional modulation in the human auditory "where" pathway. J Cogn Neurosci 2007; 19:1721-33. [PMID: 18271742] [DOI: 10.1162/jocn.2007.19.10.1721]
Abstract
Attending to a visual stimulus feature, such as color or motion, enhances the processing of that feature in the visual cortex. Moreover, the processing of the attended object's other, unattended, features is also enhanced. Here, we used functional magnetic resonance imaging to show that attentional modulation in the auditory system may also exhibit such feature- and object-specific effects. Specifically, we found that attending to auditory motion increases activity in nonprimary motion-sensitive areas of the auditory cortical "where" pathway. Moreover, activity in these motion-sensitive areas was also increased when attention was directed to a moving rather than a stationary sound object, even when motion was not the attended feature. An analysis of effective connectivity revealed that the motion-specific attentional modulation was brought about by an increase in connectivity between the primary auditory cortex and nonprimary motion-sensitive areas, which, in turn, may have been mediated by the paracingulate cortex in the frontal lobe. The current results indicate that auditory attention can select both objects and features. The finding of feature-based attentional modulation implies that attending to one feature of a sound object does not necessarily entail an exhaustive processing of the object's unattended features.
Affiliation(s)
- Katrin Krumbholz
- MRC Institute of Hearing Research, University Park, Nottingham NG7 2RD, UK.
16. Guiraud J, Gallego S, Arnold L, Boyle P, Truy E, Collet L. Effects of auditory pathway anatomy and deafness characteristics? Part 2: On electrically evoked late auditory responses. Hear Res 2007; 228:44-57. [PMID: 17350776] [DOI: 10.1016/j.heares.2007.01.022]
Abstract
The purpose of this study was to distinguish the effects of different parameters on latencies of wave N1, wave P2, and inter-peak interval N1-P2 of electrical late auditory responses (ELARs). ELARs were recorded from four intra-cochlear electrodes in fourteen adult HiRes90K cochlear implant users who had at least three months of experience. The relationship between latencies and stimulation sites in the cochlea was characterized to assess the influence of the auditory pathway anatomy on ELARs, i.e., whether the speed of neural propagation varies according to the place that is activated in the cochlea. Audiograms before implantation, duration of deafness, and psychophysics at first fitting were used to describe the influence of deafness characteristics on latencies. The stimulation sites were found to have no effect on ELAR latency and, while there was no influence of psychophysics on latency, a strong relationship was shown with duration of deafness and the pre-implantation audiogram. Thus, ELAR latency was longer for poorer audiograms and longer durations of deafness and this relationship appeared to be independent of stimulation parameters such as stimulation site. Comparison between these findings and those from the equivalent study on EABR waves IIIe and Ve latency [Guiraud, J., Gallego, S., Arnold, L., Boyle, P., Truy, E., Collet, L., 2007. Effects of auditory pathway anatomy and deafness characteristics? (1): On electrically evoked auditory brainstem responses. Hear. Res. 223 (1-2), 48-60] shows that, while ELAR and EABR latencies are related with parameters that reflect the integrity of the auditory pathway, ELAR latency is less dependent on stimulation parameters than EABR latency.
Affiliation(s)
- Jeanne Guiraud
- CNRS UMR 5020, Neurosciences & Sensorial Systems Laboratory, University Lyon 1, and Department of Audiology and Otorhinolaryngology, Edouard Herriot Hospital, 5 place d'Arsonval, 69437 Lyon, France.
17. Degerman A, Rinne T, Salmi J, Salonen O, Alho K. Selective attention to sound location or pitch studied with fMRI. Brain Res 2006; 1077:123-34. [PMID: 16515772] [DOI: 10.1016/j.brainres.2006.01.025]
Abstract
We used 3-T functional magnetic resonance imaging to compare the brain mechanisms underlying selective attention to sound location and pitch. In different tasks, the subjects (N = 10) attended to a designated sound location or pitch or to pictures presented on the screen. In the Attend Location conditions, the sound location varied randomly (left or right), while the pitch was kept constant (high or low). In the Attend Pitch conditions, sounds of randomly varying pitch (high or low) were presented at a constant location (left or right). Both attention to location and attention to pitch produced enhanced activity (in comparison with activation caused by the same sounds when attention was focused on the pictures) in widespread areas of the superior temporal cortex. Attention to either sound feature also activated prefrontal and inferior parietal cortical regions. These activations were stronger during attention to location than during attention to pitch. Attention to location but not to pitch produced a significant increase of activation in the premotor/supplementary motor cortices of both hemispheres and in the right prefrontal cortex, while no area showed activity specifically related to attention to pitch. The present results suggest some differences in the attentional selection of sounds on the basis of their location and pitch consistent with the suggested auditory "what" and "where" processing streams.

18. Meehan S, Singhal A, Fowler B. The late Nd reflects a memory trace containing amodal spatial information. Psychophysiology 2005; 42:531-9. PMID: 16176375. DOI: 10.1111/j.1469-8986.2005.00309.x.
Abstract
The early Nd reflects the analysis of simple features of selectively attended auditory stimuli, but the precise nature of the more complex processing reflected by the late Nd is unclear. The late but not the early Nd is sensitive to interference from a concurrently presented visual spatial attention switching task. This experiment investigated whether the late Nd is also sensitive to deeper visual attention switching. Twenty-one subjects performed a dichotic listening task concurrently with either visual spatial or visual letter matching attention switching tasks. Late Nd amplitude was reduced by the spatial but not the letter matching task, indicating insensitivity to deeper attention switching. P300 amplitude was reduced by both tasks. Reductions in N100 and P200 were uncorrelated. We propose that, in part, the late Nd reflects an amodal memory trace containing spatial information, possibly involving a "where" rather than a "what" auditory pathway.
Affiliation(s)
- Sean Meehan
- Department of Kinesiology and Health Science, York University, North York, Ontario, Canada

19. Coch D, Sanders LD, Neville HJ. An event-related potential study of selective auditory attention in children and adults. J Cogn Neurosci 2005; 17:605-22. PMID: 15829081. DOI: 10.1162/0898929053467631.
Abstract
In a dichotic listening paradigm, event-related potentials (ERPs) were recorded to linguistic and nonlinguistic probe stimuli embedded in 2 different narrative contexts as they were either attended or unattended. In adults, the typical N1 attention effect was observed for both types of probes: Probes superimposed on the attended narrative elicited an enhanced negativity compared to the same probes when unattended. Overall, this sustained attention effect was greater over medial and left lateral sites, but was more posteriorly distributed and of longer duration for linguistic as compared to nonlinguistic probes. In contrast, in 6- to 8-year-old children the ERPs were morphologically dissimilar to those elicited in adults and children displayed a greater positivity to both types of probe stimuli when embedded in the attended as compared to the unattended narrative. Although both adults and children showed attention effects beginning at about 100 msec, only adults displayed left-lateralized attention effects and a distinct, posterior distribution for linguistic probes. These results suggest that the attentional networks indexed by this task continue to develop beyond the age of 8 years.
Affiliation(s)
- Donna Coch
- Department of Education, Dartmouth College, Hanover, NH 03755, USA.

20. Dyson BJ, Quinlan PT. Within- and between-dimensional processing in the auditory modality. J Exp Psychol Hum Percept Perform 2002; 28:1483. DOI: 10.1037/0096-1523.28.6.1483.

21. Woods DL, Alain C. Conjoining three auditory features: an event-related brain potential study. J Cogn Neurosci 2001; 13:492-509. PMID: 11388922. DOI: 10.1162/08989290152001916.
Abstract
The mechanisms of auditory feature processing and conjunction were examined with event-related brain potential (ERP) recording in a task in which participants responded to target tones defined by the combination of location, frequency, and duration features amid distractor tones varying randomly along all feature dimensions. Attention effects were isolated as negative difference (Nd) waves by subtracting ERPs to tones with no target features from ERPs to tones with one, two, or three target features. Nd waves were seen to all tones sharing a single feature with the target, including tones sharing only target duration. Nd waves associated with the analysis of frequency and location features began at latencies of 60 msec, whereas Nd-Duration waves began at 120 msec. Nd waves to tones with single target features continued until 400+ msec, suggesting that once begun, the analysis of tone features continued exhaustively to conclusion. Nd-Frequency and Nd-Location waves had distinct scalp distributions, consistent with generation in different auditory cortical areas. Three stages of feature processing were identified: (1) Parallel feature processing (60-140 msec): Nd waves combined linearly, such that Nd-wave amplitudes following tones with two or three target features were equal to the sum of the Nd waves elicited by tones with only one target feature. (2) Conjunction-specific (CS) processing (140-220 msec): Nd amplitudes were enhanced following tones with any pair of attended features. (3) Target-specific (TS) processing (220-300 msec): Nd amplitudes were specifically enhanced to target tones with all three features. These results are consistent with a facilitatory interactive feature analysis (FIFA) model in which feature conjunction is associated with the amplified processing of individual stimulus features. Activation of N-methyl-D-aspartate (NMDA) receptors is proposed to underlie the FIFA process.
Affiliation(s)
- D L Woods
- University of California-Davis and Northern California System of Clinics, USA.

22. Woods DL, Alain C, Ogawa KH. Conjoining auditory and visual features during high-rate serial presentation: processing and conjoining two features can be faster than processing one. Percept Psychophys 1998; 60:239-49. PMID: 9529908. DOI: 10.3758/bf03206033.
Abstract
The time required to conjoin stimulus features in high-rate serial presentation tasks was estimated in auditory and visual modalities. In the visual experiment, targets were defined by color, orientation, or the conjunction of color and orientation features. Responses were fastest in color conditions, intermediate in orientation conditions, and slowest in conjunction conditions. Estimates of feature conjunction time (FCT) were derived on the basis of a model in which features were processed in parallel and then conjoined, permitting FCTs to be estimated from the difference in reaction times between conjunction and the slowest single-feature condition. Visual FCTs averaged 17 msec, but were negative for certain stimuli and subjects. In the auditory experiment, targets were defined by frequency, location, or the conjunction of frequency and location features. Responses were fastest in frequency conditions, but were faster in conjunction than in location conditions, yielding negative FCTs. The results from both experiments suggest that the processing of stimulus features occurs interactively during early stages of feature conjunction.
Affiliation(s)
- D L Woods
- University of California, Davis, USA.

23. Alain C, Woods DL, Covarrubias D. Activation of duration-sensitive auditory cortical fields in humans. Electroencephalogr Clin Neurophysiol 1997; 104:531-9. PMID: 9402895. DOI: 10.1016/s0168-5597(97)00057-9.
Abstract
The influence of stimulus duration on auditory evoked potentials (AEPs) was examined for tones varying randomly in duration, location, and frequency in an auditory selective attention task. Stimulus duration effects were isolated as duration difference waves by subtracting AEPs to short duration tones from AEPs to longer duration tones of identical location, frequency and rise time. This analysis revealed that AEP components generally increased in amplitude and decreased in latency with increments in signal duration, with evidence of longer temporal integration times for lower frequency tones. Different temporal integration functions were seen for different N1 subcomponents. The results suggest that different auditory cortical areas have different temporal integration times, and that these functions vary as a function of tone frequency.
Affiliation(s)
- C Alain
- Department of Neurology, University of California at Davis, USA

24. Rotte M, Heinze HJ, Smid HG. Selective attention to conjunctions of color and shape of alphanumeric versus non-alphanumeric stimuli: a comparative electrophysiological study. Biol Psychol 1997; 46:199-221. PMID: 9360773. DOI: 10.1016/s0301-0511(97)00018-5.
Abstract
We compared multi-dimensional selection on the basis of the color, the global shape and the local shape of alphanumeric (letters) and non-alphanumeric (non-letters) stimuli. We investigated whether letters are selected on the basis of name codes or on the basis of highly familiar local shape codes. Participants responded to a single conjunction of color, global shape and local shape occurring in a randomized stream of other conjunctions of these attributes. Dependent variables were reaction time and measures derived from event-related brain potentials (onset latencies and peak amplitudes of the occipital selection negativity, SN). The SN results showed that, for both letters and non-letters, color and global shape were selected first and local shape was selected later. Reaction times were faster, and SN to the local shape occurred earlier for letters than for non-letters. The SN to the local shape of letters was larger than the SN to the local shape of non-letters. In contrast, the SN to the global shape of letters was smaller than the SN to the global shape of non-letters. Selection of the global shape of letters, but not of non-letters, depended on whether they occurred in the relevant color. Selection of the color of both letters and non-letters was independent of shape relevance, and selection of the local shape of both letters and non-letters was independent of color relevance. These results suggest that, (1) both letter and non-letter shapes are initially analyzed in a feature-specific manner; and (2) letters are selected for task-directed processing on the basis of highly familiar local shape codes and not on the basis of name codes.
Affiliation(s)
- M Rotte
- Department of Clinical Neurophysiology, Otto-von-Guericke University, Magdeburg, Germany.

25. Smid HG, Jakob A, Heinze HJ. The organization of multidimensional selection on the basis of color and shape: an event-related brain potential study. Percept Psychophys 1997; 59:693-713. PMID: 9259637. DOI: 10.3758/bf03206016.
Abstract
In this paper, we examine whether color and shape, tied to a single object in space, (1) are identified and selected in series or in parallel, (2) are identified and selected in a dependent, self-terminating manner or in an independent and exhaustive manner, and (3) are conjoined by a feature integration process before or only after an initial stage of separate attribute analyses has finished. We measured response time and the selection negativity (SN) derived from event-related brain potentials when participants responded to a unique conjunction of color and shape in a go/no-go target detection task. The discriminability of the color and the shape of the conjunction was manipulated in three conditions. When color and shape were easy to discriminate, the SNs to color and shape started at the same time. When one attribute was less discriminable the SN to that attribute started later, but not the SN to the complementary attribute. This suggests that color and shape are identified and selected in parallel. In all three discriminability conditions, the SNs to color and shape were initially independent but later interacted. This suggests that color and shape are initially selected independently and exhaustively, after which their conjunction is analyzed. The SN to local shape features started later than that to the conjunction of color and global shape features, which suggests that feature integration can start before the analyses of the separate attributes have finished.
Affiliation(s)
- H G Smid
- Otto-von-Guericke University, Medical Faculty, Clinic for Neurophysiology, Magdeburg, Germany.

26. Anllo-Vento L, Hillyard SA. Selective attention to the color and direction of moving stimuli: electrophysiological correlates of hierarchical feature selection. Percept Psychophys 1996; 58:191-206. PMID: 8838164. DOI: 10.3758/bf03211875.
Abstract
Event-related brain potentials (ERPs) were recorded from subjects who attended to pairs of adjacent colored squares that were flashed sequentially to produce a perception of movement. The task was to attend selectively to stimuli in one visual field and to detect slower moving targets that contained the critical value of the attended feature, be it color or movement direction. Attention to location was reflected by a modulation of the early P1 and N1 components of the ERP, whereas selection of the relevant stimulus feature was associated with later selection negativity components. ERP indices of feature selection were elicited only by stimuli at the attended location and had distinctive scalp distributions for features mediated by "ventral" (color) and "dorsal" (motion) cortical areas. ERP indices of target selection were also contingent on the prior selection of location but initially did not depend on the selection of the relevant feature. These ERP data reveal the timing of sequential, parallel, and contingent stages of visual processing and support early-selection theories of attention that stipulate attentional control over the initial processing of stimulus features.

27.
Abstract
The scalp distributions of middle latency auditory evoked potentials (MAEPs) elicited by tone bursts of 250 and 4000 Hz were compared in two experiments. Na (19.9 ms), Pa (29.8 ms), and Pb (51.4 ms) components elicited by tones of either frequency had fronto-central distributions, whereas the Nb component (38.4 ms) was maximal at parietal sites. Although the distributions of MAEP components varied as a function of the ear of stimulation, no significant differences were found as a function of tone frequency. The results are consistent with suggestions that MAEPs reflect activation of non-tonotopically organized generators.
Affiliation(s)
- D L Woods
- Department of Neurology, UC Davis, Northern California System of Clinics, Martinez 94553, USA

28. Oades RD, Dittmann-Balcar A, Zerbin D. The topography of 4 subtraction ERP-waveforms derived from a 3-tone auditory oddball task in healthy young adults. Int J Neurosci 1995; 81:265-81. PMID: 7628915. DOI: 10.3109/00207459509004891.
Abstract
Five components were studied in 4 subtraction waveforms derived from ERPs obtained in passive and active conditions of a 3-tone oddball task (common = 70%, C, 0.8 kHz; deviant = 15%, D, 2 kHz; 1.4 kHz = 15%, t, also used as a target, T). These waveforms reflect different stimulus-mismatch processes, and thus their topography could reveal the different brain regions mediating them. The following mismatches were studied: stimulus-mismatch (deviant--common, D/C, rarity and pitch confounded), pitch-mismatch (T--deviant, T/D, rarity but not target features controlled), and attention-mismatch (T--t, T/t, controlled for pitch and rarity to show the influence of target features). These are compared with Goodin's procedure [G-wv, (T--common (active))--(t--common (passive))]. There were main site effects in normalized data in all cases (not P2 and N2 latency). There were separate frontal and posterior contributions to P1, with the former emphasized where target comparisons were involved. Frontal N1 peaks, largest in D/C, spread posterior and to the right where target matching was involved. P2 posterior maxima were also less localized where target features were involved in the comparison. N2 topography was similar between waveforms but spread slightly more to each side in the T/t comparison. Onset was earlier in the D/C comparison. Parietal P3 peaks in waves based on target-ERPs showed a left temporal shift (vs. D/C), though in T/D P3 was in fact maximal on the right. Thus an attentional effect is evident as early as 60 ms. Target features modify the anteroposterior distribution of positivity and negativity for the early components and the lateralization of P3-like positivity. A comparison of waveforms by latency of potential shift (running t-test) vs. peak identification (MANOVA) is illustrated and discussed. D/C and T/t (rather than T/D or G-wv) waveforms are recommended for distinguishing comparator mechanisms for stimulus- and task-relevant features.
Affiliation(s)
- R D Oades
- RLHK Clinic for Child and Adolescent Psychiatry, Essen, Germany

29. Oades RD, Zerbin D, Dittmann-Balcar A. The topography of event-related potentials in passive and active conditions of a 3-tone auditory oddball test. Int J Neurosci 1995; 81:249-64. PMID: 7628914. DOI: 10.3109/00207459509004890.
Abstract
Normalized event-related potential (ERP) data were analysed for topographical differences of ERP amplitude or latency in two conditions of a 3-tone oddball paradigm. The aim was to compare perception-related features relating to tone-type (passive non-task condition) with focussed attention-related features (active discrimination of target from non-target) in 5 ERP components from 23 young healthy subjects. The tones used were a common standard (70%, 0.8 kHz), a deviant standard (15%, 2 kHz) and a 1.4 kHz tone (15%, t) also used as the target (T). A site x tone interaction was obtained for P1 amplitude (augmenting with pitch anterior to posterior). The opposite tendency was seen for P2 to the right of midline maxima. No interaction was obtained for N1 amplitude. Condition became relevant for the N2-P3 complex. Frontal N2 amplitude increased after rare tones in the active condition. Posterior P3 peak size distinguished between tone (more widespread response to the common tone) and condition (more right-sided in the passive condition). The common tone elicited more widespread shift to the right than the rare tones. Latency was affected by condition from the P2 onwards and confirmed many of the amplitude interactions. This report extends and qualifies well-known main effects of tone and condition through main site effects to lateral sites. It supports claims of multiple sources of ERP components, except for N1 and P2. The contributions of these sources are influenced by tone-features (from P1) and the presence or absence of focussed attention (from the N2-P3 complex).
Affiliation(s)
- R D Oades
- RLHK Clinic for Child and Adolescent Psychiatry, Essen, Germany

30. Alain C, Woods DL. Signal clustering modulates auditory cortical activity in humans. Percept Psychophys 1994; 56:501-16. PMID: 7991348. DOI: 10.3758/bf03206947.
Abstract
Auditory streaming and its relevance to attentional processing was examined using event-related brain potentials (ERPs) in situations facilitating perception of one or two streams of sounds. Subjects listened to sequences of brief tones of three different frequencies presented in random order. In evenly spaced (ES) conditions, the three frequencies were equidistant on the musical scale. In clustered, easy (CE) conditions, the attended frequency was distinct, while the middle and extreme distractor tones were clustered together. In clustered, hard (CH) conditions, the attended frequency was clustered with one of the distractors. The subjects pressed a button in response to occasional target tones of longer duration at a prespecified frequency. The subjects were faster and more accurate in CE conditions than they were in ES conditions, and ERP attention effects were enhanced in amplitude in CE conditions. Conversely, the subjects were slower and less accurate in CH conditions and ERP attention effects were delayed in latency and decreased in amplitude. Clustering effects suggest that the processing of stimuli belonging to the attended stream was promoted and the processing of those falling outside the stream was inhibited. The timing and scalp distribution of clustering-related changes in ERPs suggest that clustering modulates early sensory processing in auditory cortex.
Affiliation(s)
- C Alain
- University of California, Davis

31. Woods DL, Knight RT, Scabini D. Anatomical substrates of auditory selective attention: behavioral and electrophysiological effects of posterior association cortex lesions. Brain Res Cogn Brain Res 1993; 1:227-40. PMID: 8003922. DOI: 10.1016/0926-6410(93)90007-r.
Abstract
Event-related brain potentials (ERPs) and reaction times (RTs) were recorded in an auditory selective attention task in control subjects and two groups of patients with lesions centered in (1) the temporal/parietal junction (T/P, n = 9); and (2) the inferior parietal lobe (IPL, n = 7). High pitched tones were presented to one ear and low pitched tones to the other in random sequences that included infrequent longer-duration tones and occasional novel sounds. Subjects attended to a specified ear and pressed a button to the longer-duration tones in that ear. IPL and T/P lesions slowed reaction times (RTs) and increased error rates, but improved one aspect of performance--patients showed less distraction than controls when targets followed novel sounds. T/P lesions reduced the amplitude of early sensory ERPs, initially over the damaged hemisphere (N1a, 70-110 ms) and then bilaterally (N1b, 110-130 ms, and N1c 130-160 ms). The reduction was accentuated for tones presented contralateral to the lesion, suggesting that N1 generators receive excitatory input primarily from the contralateral ear. IPL lesions reduced N1 amplitudes to both low frequency tones and novel sounds. Nd components associated with attentional selection were diminished over both hemispheres in the T/P group and over the lesioned hemisphere in the IPL group independent of ear of stimulation. Target and novel N2s tended to be diminished by IPL lesions but were unaffected by T/P lesions. The mismatch negativity was unaffected by either T/P or IPL lesions. The results support different roles of T/P and IPL cortex in auditory selective attention.
Affiliation(s)
- D L Woods
- Department of Neurology, UC Davis, VA Medical Center, Martinez, CA 94553

32. Alain C, Woods DL. Distractor clustering enhances detection speed and accuracy during selective listening. Percept Psychophys 1993; 54:509-14. PMID: 8255713. DOI: 10.3758/bf03211773.
Abstract
The effects of distractor clustering on target detection were examined in two experiments in which subjects attended to binaural tone bursts of one frequency while ignoring distracting tones of two competing frequencies. The subjects pressed a button in response to occasional target tones of longer duration (Experiment 1) or increased loudness (Experiment 2). In evenly spaced conditions, attended and distractor frequencies differed by 6 and 12 semitones, respectively (e.g., 2096-Hz targets vs. 1482- and 1048-Hz distractors). In clustered conditions, distractor frequencies were grouped; attended tones differed from the distractors by 6 and 7 semitones, respectively (e.g., 2096-Hz targets vs. 1482- and 1400-Hz distractors). The tones were presented in randomized sequences at fixed or random stimulus onset asynchronies (SOAs). In both experiments, clustering of the unattended frequencies improved the detectability of targets and speeded target reaction times. Similar effects were found at fixed and variable SOAs. Results from the analysis of stimulus sequence suggest that clustering improved performance primarily by reducing the interference caused by distractors that immediately preceded the target.
Affiliation(s)
- C Alain
- Neurosciences Center, University of California, Davis

33.
Abstract
Three experiments were performed, two comparing the peak latencies of auditory evoked potentials (AEPs) elicited by 250 Hz and 4000 Hz tone pips and a third comparing simple reaction times (RTs) to the same stimuli. In the AEP experiments, the latencies of brainstem, middle and long-latency components were delayed following 250 Hz tone pips in comparison with the latencies of the same components evoked by loudness-matched 4000 Hz tones. Frequency-related latency differences increased with component latency, ranging from less than 1.0 ms for wave V of the brainstem AEP, to more than 20.0 ms for the cortical N1 component. Interpeak latency differences were also significantly lengthened following the 250 Hz tone pips. In the behavioral study, RTs were 14.6 ms slower following 250 than 4000 Hz tone pips. The results suggest that the time required for the sensory analysis of auditory signals varies inversely with their frequency.
Affiliation(s)
- D L Woods
- Department of Neurology and Neurobiology Center, UC Davis, Martinez