1. Soper DJ, Reich D, Ross A, Salami P, Cash SS, Basu I, Peled N, Paulk AC. Modular pipeline for reconstruction and localization of implanted intracranial ECoG and sEEG electrodes. PLoS One 2023; 18:e0287921. PMID: 37418486; PMCID: PMC10328232; DOI: 10.1371/journal.pone.0287921.
Abstract
Implantation of electrodes in the brain has been used as a clinical tool for decades to stimulate and record brain activity. As this method increasingly becomes the standard of care for several disorders and diseases, there is a growing need to quickly and accurately localize the electrodes once they are placed within the brain. We share here a protocol pipeline for localizing electrodes implanted in the brain that is accessible to multiple skill levels and modular in execution, and which we have applied to more than 260 patients. This pipeline uses multiple software packages to prioritize flexibility, permitting multiple parallel outputs while minimizing the number of steps for each output. These outputs include co-registered imaging, electrode coordinates, 2D and 3D visualizations of the implants, automatic surface and volumetric localizations of the brain regions per electrode, and anonymization and data sharing tools. We demonstrate here some of the pipeline's visualizations and automatic localization algorithms, which we have applied in previous studies to determine appropriate stimulation targets, to conduct seizure dynamics analysis, and to localize neural activity from cognitive tasks. Further, the output facilitates the extraction of information such as the probability of grey matter intersection or the nearest anatomic structure per electrode contact across all data sets that go through the pipeline. We expect that this pipeline will be a useful framework for researchers and clinicians alike to localize implanted electrodes in the human brain.
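The nearest-structure output described above reduces, at its core, to mapping each contact's scanner-space coordinate through the inverse atlas affine into voxel space. A minimal sketch, assuming a FreeSurfer-style atlas volume and made-up contact coordinates (the file name and coordinates are hypothetical, not taken from the paper):

```python
# Hypothetical sketch: map electrode contacts to the label of the nearest atlas
# voxel, one of the per-contact outputs the pipeline describes. The file name,
# atlas choice, and contact coordinates are assumptions for illustration.
import numpy as np
import nibabel as nib

atlas = nib.load("aparc+aseg.nii.gz")          # assumed FreeSurfer-style atlas volume
labels = atlas.get_fdata().astype(int)
inv_affine = np.linalg.inv(atlas.affine)       # scanner (mm) -> voxel coordinates

contacts_mm = np.array([[32.1, -18.4, 9.7],    # assumed electrode coordinates (mm)
                        [30.0, -20.2, 11.3]])

def nearest_label(xyz_mm):
    """Return the atlas label at (or nearest to) a contact position."""
    ijk = np.rint(inv_affine @ np.append(xyz_mm, 1.0))[:3].astype(int)
    ijk = np.clip(ijk, 0, np.array(labels.shape) - 1)  # stay inside the volume
    return labels[tuple(ijk)]

for xyz in contacts_mm:
    print(xyz, "->", nearest_label(xyz))
```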
Affiliation(s)
- Daniel J. Soper
  - Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA, United States of America
  - Department of Neurology, Harvard Medical School, Boston, MA, United States of America
- Dustine Reich
  - Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA, United States of America
  - Department of Neurology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States of America
- Alex Ross
  - Department of Neurosurgery, University of Cincinnati College of Medicine, Cincinnati, OH, United States of America
- Pariya Salami
  - Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA, United States of America
  - Department of Neurology, Harvard Medical School, Boston, MA, United States of America
- Sydney S. Cash
  - Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA, United States of America
  - Department of Neurology, Harvard Medical School, Boston, MA, United States of America
- Ishita Basu
  - Department of Neurosurgery, University of Cincinnati College of Medicine, Cincinnati, OH, United States of America
- Noam Peled
  - Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States of America
  - Harvard Medical School, Boston, MA, United States of America
- Angelique C. Paulk
  - Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA, United States of America
  - Department of Neurology, Harvard Medical School, Boston, MA, United States of America
2. Fernandez Pujol C, Blundon EG, Dykstra AR. Laminar specificity of the auditory perceptual awareness negativity: A biophysical modeling study. PLoS Comput Biol 2023; 19:e1011003. PMID: 37384802; PMCID: PMC10337981; DOI: 10.1371/journal.pcbi.1011003.
Abstract
How perception of sensory stimuli emerges from brain activity is a fundamental question of neuroscience. To date, two disparate lines of research have examined this question. On one hand, human neuroimaging studies have helped us understand the large-scale brain dynamics of perception. On the other hand, work in animal models (mice, typically) has led to fundamental insight into the micro-scale neural circuits underlying perception. However, translating such fundamental insight from animal models to humans has been challenging. Here, using biophysical modeling, we show that the auditory awareness negativity (AAN), an evoked response associated with perception of target sounds in noise, can be accounted for by synaptic input to the supragranular layers of auditory cortex (AC) that is present when target sounds are heard but absent when they are missed. This additional input likely arises from cortico-cortical feedback and/or non-lemniscal thalamic projections and targets the apical dendrites of layer-5 (L5) pyramidal neurons. In turn, this leads to increased local field potential activity, increased spiking activity in L5 pyramidal neurons, and the AAN. The results are consistent with current cellular models of conscious processing and help bridge the gap between the macro and micro levels of perception-related brain activity.
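The proposed mechanism, an extra distal drive onto supragranular layers and the apical dendrites of pyramidal cells, is the kind of simulation supported by the open-source hnn-core biophysical model. The sketch below is an illustration under assumed drive timing and weights, not the authors' fitted model:

```python
# Hedged sketch with hnn-core: add a distal (supragranular-targeting) evoked
# drive of the kind the abstract proposes for heard targets, then simulate the
# cortical dipole. Drive name, timing, and weights are illustrative assumptions.
from hnn_core import jones_2009_model, simulate_dipole

net = jones_2009_model()
net.add_evoked_drive(
    'supragranular_feedback',          # hypothetical name for the extra input
    mu=85.0, sigma=8.0, numspikes=1,   # assumed arrival time (ms) and jitter
    location='distal',                 # targets apical dendrites in layers 2/3
    weights_ampa={'L2_pyramidal': 0.004, 'L2_basket': 0.002,
                  'L5_pyramidal': 0.004},
    synaptic_delays={'L2_pyramidal': 0.1, 'L2_basket': 0.1,
                     'L5_pyramidal': 0.1})
dpls = simulate_dipole(net, tstop=170.0, n_trials=1)
dpls[0].plot()  # comparing runs with/without this drive mimics heard vs. missed
```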
Affiliation(s)
- Carolina Fernandez Pujol
  - Department of Biomedical Engineering, University of Miami, Coral Gables, Florida, United States of America
- Elizabeth G. Blundon
  - Department of Biomedical Engineering, University of Miami, Coral Gables, Florida, United States of America
- Andrew R. Dykstra
  - Department of Biomedical Engineering, University of Miami, Coral Gables, Florida, United States of America
3. Melland P, Curtu R. Attractor-Like Dynamics Extracted from Human Electrocorticographic Recordings Underlie Computational Principles of Auditory Bistable Perception. J Neurosci 2023; 43:3294-3311. PMID: 36977581; PMCID: PMC10162465; DOI: 10.1523/jneurosci.1531-22.2023.
Abstract
In bistable perception, observers experience alternations between two interpretations of an unchanging stimulus. Neurophysiological studies of bistable perception typically partition neural measurements into stimulus-based epochs and assess neuronal differences between epochs based on subjects' perceptual reports. Computational studies replicate statistical properties of percept durations with modeling principles like competitive attractors or Bayesian inference. However, bridging neuro-behavioral findings with modeling theory requires the analysis of single-trial dynamic data. Here, we propose an algorithm for extracting nonstationary timeseries features from single-trial electrocorticography (ECoG) data. We applied the proposed algorithm to 5-min ECoG recordings from human primary auditory cortex obtained during perceptual alternations in an auditory triplet streaming task (six subjects: four male, two female). We report two ensembles of emergent neuronal features in all trial blocks. One ensemble consists of periodic functions that encode a stereotypical response to the stimulus. The other comprises more transient features and encodes dynamics associated with bistable perception at multiple time scales: minutes (within-trial alternations), seconds (duration of individual percepts), and milliseconds (switches between percepts). Within the second ensemble, we identified a slowly drifting rhythm that correlates with the perceptual states and several oscillators with phase shifts near perceptual switches. Projections of single-trial ECoG data onto these features establish low-dimensional attractor-like geometric structures invariant across subjects and stimulus types. These findings provide supporting neural evidence for computational models with oscillatory-driven attractor-based principles. The feature extraction techniques described here generalize across recording modalities and are appropriate when hypothesized low-dimensional dynamics characterize an underlying neural system.

SIGNIFICANCE STATEMENT: Irrespective of the sensory modality, neurophysiological studies of multistable perception have typically investigated events time-locked to the perceptual switching rather than the time course of the perceptual states per se. Here, we propose an algorithm that extracts neuronal features of bistable auditory perception from large-scale single-trial data while remaining agnostic to the subject's perceptual reports. The algorithm captures the dynamics of perception at multiple timescales: minutes (within-trial alternations), seconds (durations of individual percepts), and milliseconds (timing of switches), and distinguishes attributes of neural encoding of the stimulus from those encoding the perceptual states. Finally, our analysis identifies a set of latent variables that exhibit alternating dynamics along a low-dimensional manifold, similar to trajectories in attractor-based models for perceptual bistability.
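The paper's algorithm is not reproduced here, but the general recipe of projecting a single-trial recording onto a low-dimensional, attractor-like trajectory can be sketched with delay embedding plus PCA; all signal parameters below are invented for illustration:

```python
# Generic sketch (not the paper's algorithm): extract a low-dimensional
# trajectory from one simulated "ECoG" channel via delay embedding and PCA.
import numpy as np

rng = np.random.default_rng(0)
fs = 500                                   # assumed sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)              # a 5-min "trial"
drift = np.sin(2 * np.pi * 0.02 * t)       # slow rhythm (percept-like state)
stim = np.sin(2 * np.pi * 4.0 * t)         # stereotyped stimulus response
x = drift + 0.5 * stim + 0.2 * rng.standard_normal(t.size)

# Delay-embed: rows are windows x(t), x(t-tau), ..., x(t-(d-1)*tau)
tau, d = 25, 20
idx = np.arange(x.size - (d - 1) * tau)
X = np.stack([x[idx + k * tau] for k in range(d)], axis=1)
X -= X.mean(axis=0)

# PCA via SVD; the leading components trace the low-dimensional geometry
_, s, Vt = np.linalg.svd(X, full_matrices=False)
traj = X @ Vt[:2].T                        # 2-D trajectory for visualization
print("variance explained:", (s[:2] ** 2 / (s ** 2).sum()).round(3))
```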
Affiliation(s)
- Pake Melland
  - Department of Mathematics, Southern Methodist University, Dallas, Texas 75275
  - Applied Mathematical & Computational Sciences, The University of Iowa, Iowa City, Iowa 52242
- Rodica Curtu
  - Department of Mathematics, The University of Iowa, Iowa City, Iowa 52242
  - The Iowa Neuroscience Institute, The University of Iowa, Iowa City, Iowa 52242
4. Devia C, Concha-Miranda M, Rodríguez E. Bi-Stable Perception: Self-Coordinating Brain Regions to Make-Up the Mind. Front Neurosci 2022; 15:805690. PMID: 35153663; PMCID: PMC8829010; DOI: 10.3389/fnins.2021.805690.
Abstract
Bi-stable perception is a strong instance of cognitive self-organization, providing a research model for how ‘the brain makes up its mind.’ The complexity of perceptual bistability prevents a simple attribution of functions to areas, because many cognitive processes, recruiting multiple brain regions, are simultaneously involved. The functional magnetic resonance imaging (fMRI) evidence suggests the activation of a large network of distant brain areas. Concurrently, the electroencephalographic and magnetoencephalographic (MEEG) literature shows sub-second oscillatory activity and phase synchrony in several frequency bands. The beta and gamma bands, often associated with neural/cognitive integration processes, are strongly represented. The spatial extension and short duration of these brain activities suggest the need for a fast, large-scale neural coordination mechanism. To address the range of temporo-spatial scales involved, we systematize the current knowledge from mathematical models, cognitive sciences, and neuroscience at large, from single-cell to system-level research, including evidence from human and non-human primates. Surprisingly, despite evidence spanning different organization levels, models, and experimental approaches, the scarcity of integrative studies is evident. In a final section of the review we dwell on the reasons behind such scarcity and on the need for integration in order to achieve a real understanding of the complexities underlying bi-stable perception processes.
Affiliation(s)
- Christ Devia
  - Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Santiago, Chile
  - Biomedical Neuroscience Institute, Universidad de Chile, Santiago, Chile
- Miguel Concha-Miranda
  - Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Santiago, Chile
  - Laboratorio de Neurodinámica Básica y Aplicada, Escuela de Psicología, Pontificia Universidad Católica de Chile, Santiago, Chile
- Eugenio Rodríguez
  - Laboratorio de Neurodinámica Básica y Aplicada, Escuela de Psicología, Pontificia Universidad Católica de Chile, Santiago, Chile
5. Widge AS, Ellard KK, Paulk AC, Basu I, Yousefi A, Zorowitz S, Gilmour A, Afzal A, Deckersbach T, Cash SS, Kramer MA, Eden UT, Dougherty DD, Eskandar EN. Treating Refractory Mental Illness With Closed-Loop Brain Stimulation: Progress Towards a Patient-Specific Transdiagnostic Approach. Focus (Am Psychiatr Publ) 2022; 20:137-151. PMID: 35746936; PMCID: PMC9063604; DOI: 10.1176/appi.focus.20102.
6. Basile LFH, Sato JR, Pasquini HA, Velasques B, Ribeiro P, Anghinah R. Individual versus task differences in slow potential generators. Neurol Sci 2021; 42:3781-3789. PMID: 33454832; DOI: 10.1007/s10072-021-05062-z.
Abstract
Average slow potentials (SPs) can be computed from any voluntary task, minimally involving attention to anticipated stimuli. Their topography when recorded by large electrode arrays, even during simple tasks, is complex and multifocal, and their generators appear to be equally multifocal and highly variable across subjects. Various sources of noise of course contaminate such averages and must contribute to the topographic complexity. Here, we report a study in which the non-averaged SP band (0 to 1 Hz) was analyzed by independent component analysis (ICA), from 256-channel recordings of 18 subjects, during four task conditions (resting, visual attention, CPT, and Stroop). We intended to verify whether the replicable SP generators (between two separate-day sessions), modeled as current density reconstructions on structural MRI sets, were individual-specific, and whether putative task-related differences were systematic across subjects. Typically, 3 ICA components (out of 10) explained SPs in each task and subject, and their combined generators were highly variable across subjects: although some occipito-temporal and medial temporal areas contained generators in most subjects, the overall patterns were clearly variable, with no single area common to all 18 subjects. Linear regression modeling to compare combined generators (from all ICA components) between tasks and sessions showed significantly higher correlations between the four tasks than between sessions for each task. Moreover, it was clear that no common task-specific areas could be seen across subjects. These results represent one more instance in which individual-case analyses favor the hypothesis of individual-specific patterns of cortical activity, regardless of task conditions. We discuss this hypothesis with respect to results from the beta band and from individual-case fMRI studies, and its corroboration by functional neurosurgery and the neuropsychology of focal lesions.
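A minimal sketch of the ICA step on simulated multichannel slow-potential data (channel count, duration, and sources below are arbitrary assumptions; the paper's 256-channel recordings and current-density modeling are not reproduced):

```python
# Hedged sketch: decompose multichannel slow-potential (0-1 Hz) recordings
# into independent components, as in the analysis described. Data simulated.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
fs, n_ch, dur = 250, 32, 60                  # assumed rate, channels, seconds
t = np.arange(0, dur, 1 / fs)
sources = np.stack([np.sin(2 * np.pi * 0.3 * t),          # slow oscillation
                    np.sign(np.sin(2 * np.pi * 0.1 * t)), # square drift
                    rng.standard_normal(t.size)])         # noise source
mixing = rng.standard_normal((n_ch, 3))
eeg = sources.T @ mixing.T                   # (samples, channels) mixture

ica = FastICA(n_components=10, random_state=0)
components = ica.fit_transform(eeg)          # (samples, components)
# Rank components by (approximate) variance of their back-projections
var = [(ica.mixing_[:, k] ** 2).sum() * components[:, k].var()
       for k in range(components.shape[1])]
order = np.argsort(var)[::-1]
print("components ranked by back-projected variance:", order[:3])
```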
Affiliation(s)
- Luis F H Basile
  - Laboratory of Psychophysiology, Faculdade da Saúde, UMESP, São Paulo, SP, Brazil
  - Division of Neurosurgery, Department of Neurology, University of São Paulo Medical School, São Paulo, SP, Brazil
- João R Sato
  - Center of Mathematics, Computation and Cognition, Universidade Federal do ABC, Santo André, SP, Brazil
- Henrique A Pasquini
  - Laboratory of Psychophysiology, Faculdade da Saúde, UMESP, São Paulo, SP, Brazil
- Bruna Velasques
  - Department of Psychiatry, Federal University of Rio de Janeiro, Rio de Janeiro, RJ, Brazil
- Pedro Ribeiro
  - Department of Psychiatry, Federal University of Rio de Janeiro, Rio de Janeiro, RJ, Brazil
- Renato Anghinah
  - Department of Neurology, University of São Paulo Medical School, São Paulo, SP, Brazil
7. Heelan C, Lee J, O’Shea R, Lynch L, Brandman DM, Truccolo W, Nurmikko AV. Decoding speech from spike-based neural population recordings in secondary auditory cortex of non-human primates. Commun Biol 2019; 2:466. PMID: 31840111; PMCID: PMC6906475; DOI: 10.1038/s42003-019-0707-9.
Abstract
Direct electronic communication with sensory areas of the neocortex is a challenging ambition for brain-computer interfaces. Here, we report the first successful neural decoding of English words with high intelligibility from intracortical spike-based neural population activity recorded from the secondary auditory cortex of macaques. We acquired 96-channel full-broadband population recordings using intracortical microelectrode arrays in the rostral and caudal parabelt regions of the superior temporal gyrus (STG). We leveraged a new neural processing toolkit to investigate the choice of decoding algorithm, neural preprocessing, audio representation, channel count, and array location on neural decoding performance. The presented spike-based machine learning neural decoding approach may further be useful in informing future encoding strategies to deliver direct auditory percepts to the brain as specific patterns of microstimulation.
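As a stand-in for the authors' toolkit, the generic approach of feeding binned spike counts to a cross-validated classifier can be sketched on simulated data (everything below is invented for illustration; only the 96-channel count follows the abstract):

```python
# Hedged sketch of the general approach (not the authors' toolkit): decode a
# word label from binned multichannel spike counts with a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_ch, n_bins, n_words = 200, 96, 20, 5
y = rng.integers(0, n_words, n_trials)               # word label per trial
tuning = rng.normal(0, 1, (n_words, n_ch * n_bins))  # per-word rate pattern
X = rng.poisson(np.exp(0.5 * tuning[y]))             # spike counts (trials, features)

clf = LogisticRegression(max_iter=2000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated word accuracy: {scores.mean():.2f}")
```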
Affiliation(s)
- Christopher Heelan
  - School of Engineering, Brown University, Providence, RI, USA
  - Connexon Systems, Providence, RI, USA
- Jihun Lee
  - School of Engineering, Brown University, Providence, RI, USA
- Ronan O’Shea
  - School of Engineering, Brown University, Providence, RI, USA
- Laurie Lynch
  - School of Engineering, Brown University, Providence, RI, USA
- David M. Brandman
  - Department of Surgery (Neurosurgery), Dalhousie University, Halifax, Nova Scotia, Canada
- Wilson Truccolo
  - Department of Neuroscience, Brown University, Providence, RI, USA
  - Carney Institute for Brain Science, Brown University, Providence, RI, USA
- Arto V. Nurmikko
  - School of Engineering, Brown University, Providence, RI, USA
  - Carney Institute for Brain Science, Brown University, Providence, RI, USA
8. Neural Signatures of Auditory Perceptual Bistability Revealed by Large-Scale Human Intracranial Recordings. J Neurosci 2019; 39:6482-6497. PMID: 31189576; PMCID: PMC6697394; DOI: 10.1523/jneurosci.0655-18.2019.
Abstract
A key challenge in neuroscience is understanding how sensory stimuli give rise to perception, especially when the process is supported by neural activity from an extended network of brain areas. Perception is inherently subjective, so interrogating its neural signatures requires, ideally, a combination of three factors: (1) behavioral tasks that separate stimulus-driven activity from perception per se; (2) human subjects who self-report their percepts while performing those tasks; and (3) concurrent neural recordings acquired at high spatial and temporal resolution. In this study, we analyzed human electrocorticographic recordings obtained during an auditory task that supported mutually exclusive perceptual interpretations. Eight neurosurgical patients (5 male; 3 female) listened to sequences of repeated triplets where tones were separated in frequency by several semitones. Subjects reported spontaneous alternations between two auditory perceptual states, 1-stream and 2-stream, by pressing a button. We compared averaged auditory evoked potentials (AEPs) associated with 1-stream and 2-stream percepts and identified significant differences between them in primary and nonprimary auditory cortex, surrounding auditory-related temporoparietal cortex, and frontal areas. We developed classifiers to identify spatial maps of percept-related differences in the AEP, corroborating findings from statistical analysis. We used one-dimensional embedding spaces to perform the group-level analysis. Our data illustrate exemplar high-temporal-resolution AEP waveforms in the auditory core region; explain inconsistencies in perceptual effects within auditory cortex reported across noninvasive studies of streaming of triplets; show percept-related changes in frontoparietal areas previously highlighted by studies that focused on perceptual transitions; and demonstrate that auditory cortex encodes both the maintenance of percepts and switches between them.

SIGNIFICANCE STATEMENT: The human brain has the remarkable ability to discern complex and ambiguous stimuli from the external world by parsing mixed inputs into interpretable segments. However, one's perception can deviate from objective reality. But how do perceptual discrepancies occur? What are their anatomical substrates? To address these questions, we performed intracranial recordings in neurosurgical patients as they reported their perception of sounds associated with two mutually exclusive interpretations. We identified signatures of subjective percepts as distinct from sound-driven brain activity in core and non-core auditory cortex and frontoparietal cortex. These findings were compared with previous studies of auditory bistable perception and suggest that perceptual transitions and the maintenance of perceptual states are supported by common neural substrates.
9. Gifford AM, Sperling MR, Sharan A, Gorniak RJ, Williams RB, Davis K, Kahana MJ, Cohen YE. Neuronal phase consistency tracks dynamic changes in acoustic spectral regularity. Eur J Neurosci 2018; 49:1268-1287. PMID: 30402926; DOI: 10.1111/ejn.14263.
Abstract
The brain parses the auditory environment into distinct sounds by identifying those acoustic features in the environment that have common relationships (e.g., spectral regularities) with one another and then grouping together the neuronal representations of these features. Although there is a large literature testing how the brain tracks spectral regularities that are predictable, it is not known how the auditory system tracks spectral regularities that are unpredictable and change dynamically over time. Furthermore, the contribution of brain regions downstream of the auditory cortex to the coding of spectral regularity is unknown. Here, we addressed these two issues by recording electrocorticographic activity while human patients listened to tone-burst sequences with dynamically varying spectral regularities, and identified potential neuronal mechanisms of the analysis of spectral regularities throughout the brain. We found that the degree of oscillatory stimulus phase consistency (PC) in multiple neuronal-frequency bands tracked spectral regularity. In particular, PC in the delta-frequency band appeared to be the best indicator of spectral regularity. We also found that these regularity representations existed in multiple regions throughout cortex. This widespread reliable modulation in PC, both in neuronal-frequency space and in cortical space, suggests that phase-based modulations may be a general mechanism for tracking regularity in the auditory system specifically and in other sensory systems more generally. Our findings also support a general role for the delta-frequency band in processing the regularity of auditory stimuli.
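Phase consistency across trials has a standard closed form, PC = |(1/N) sum_n exp(i*phi_n)|; a sketch of computing it in the delta band on simulated trials (the filter design and data below are assumptions, not the study's pipeline):

```python
# Hedged sketch: inter-trial phase consistency (PC) in the delta band, the
# measure the abstract highlights. PC = |mean over trials of exp(i*phase)|.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(3)
fs, n_trials, n_samp = 500, 60, 1000
t = np.arange(n_samp) / fs
# Trials share a 2 Hz phase-locked component plus noise
trials = np.sin(2 * np.pi * 2.0 * t) + rng.standard_normal((n_trials, n_samp))

b, a = butter(3, [1.0, 4.0], btype="bandpass", fs=fs)   # delta band (1-4 Hz)
phase = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
pc = np.abs(np.exp(1j * phase).mean(axis=0))            # PC per time point
print("peak delta-band phase consistency:", pc.max().round(2))
```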
Affiliation(s)
- Adam M Gifford
  - Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, Pennsylvania
- Michael R Sperling
  - Jefferson Comprehensive Epilepsy Center, Department of Neurology, Thomas Jefferson University, Philadelphia, Pennsylvania
- Ashwini Sharan
  - Jefferson Comprehensive Epilepsy Center, Department of Neurology, Thomas Jefferson University, Philadelphia, Pennsylvania
- Richard J Gorniak
  - Department of Radiology, Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania
- Ryan B Williams
  - Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania
- Kathryn Davis
  - Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania
- Michael J Kahana
  - Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, Pennsylvania
  - Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania
- Yale E Cohen
  - Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, Pennsylvania
  - Departments of Otorhinolaryngology, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania
10. Cai H, Screven LA, Dent ML. Behavioral measurements of auditory streaming and build-up by budgerigars (Melopsittacus undulatus). J Acoust Soc Am 2018; 144:1508. PMID: 30424658; DOI: 10.1121/1.5054297.
Abstract
The perception of the build-up of auditory streaming has been widely investigated in humans, but it is unknown whether animals experience a similar perception when hearing high (H) and low (L) tonal pattern sequences. The paradigm previously used in European starlings (Sturnus vulgaris) was adopted in two experiments to address the build-up of auditory streaming in budgerigars (Melopsittacus undulatus). In experiment 1, different numbers of repetitions of low-high-low triplets were used in five conditions to study the build-up process. In experiment 2, 5 and 15 repetitions of high-low-high triplets were used to investigate the effects of repetition rate, frequency separation, and frequency range of the two tones on the birds' streaming perception. Similar to humans, budgerigars subjectively experienced the build-up process in auditory streaming; faster repetition rates and larger frequency separations enhanced the streaming perception, and these results were consistent across the two frequency ranges. Response latency analysis indicated that the budgerigars needed more time to respond to stimuli that elicited a salient streaming perception. These results indicate, for the first time using a behavioral paradigm, that budgerigars experience a build-up of auditory streaming in a manner similar to humans.
Affiliation(s)
- Huaizhen Cai
  - Department of Psychology, University at Buffalo, The State University of New York, Buffalo, New York 14260, USA
- Laurel A Screven
  - Department of Psychology, University at Buffalo, The State University of New York, Buffalo, New York 14260, USA
- Micheal L Dent
  - Department of Psychology, University at Buffalo, The State University of New York, Buffalo, New York 14260, USA
11. Sanders RD, Winston JS, Barnes GR, Rees G. Magnetoencephalographic Correlates of Perceptual State During Auditory Bistability. Sci Rep 2018; 8:976. PMID: 29343771; PMCID: PMC5772671; DOI: 10.1038/s41598-018-19287-0.
Abstract
Bistability occurs when two alternative percepts can be derived from the same physical stimulus. To identify the neural correlates of specific subjective experiences, we used a bistable auditory stimulus and determined whether the two perceptual states could be distinguished electrophysiologically. Fourteen participants underwent magnetoencephalography while listening to a continuous bistable stream of auditory tones and reporting their perceptual experience. Participants reported bistability with a similar overall proportion of the two alternative percepts (52% vs 48%). At the individual level, sensor-space electrophysiological discrimination between the percepts was possible in 9/14 participants with canonical variate analysis (CVA) or linear support vector machine (SVM) analysis over space and time dimensions. Classification was possible in 14/14 subjects with non-linear SVM. Similar effects were noted in an unconstrained source-space CVA analysis (classifying 10/14 participants), linear SVM (classifying 9/14 subjects), and non-linear SVM (classifying 13/14 participants). Source-space analysis restricted to a priori ROIs showed that discrimination was possible in the right and left auditory cortex with each classification approach, but in the right intraparietal sulcus it was apparent only with non-linear SVM and only in a minority of participants. Magnetoencephalography can be used to objectively classify auditory experiences from individual subjects.
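A sketch of the linear vs. non-linear SVM comparison on simulated sensor-space features (the feature construction and data are stand-ins, not the study's pipeline):

```python
# Hedged sketch: percept classification from sensor-space features with linear
# and non-linear (RBF) SVMs, as in the analyses described. Simulated data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_epochs, n_features = 120, 300            # assumed epochs, sensor-time features
y = rng.integers(0, 2, n_epochs)           # percept A vs percept B
X = rng.standard_normal((n_epochs, n_features)) + 0.4 * y[:, None]

for kernel in ("linear", "rbf"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel} SVM accuracy: {acc:.2f}")
```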
Affiliation(s)
- Robert D Sanders
  - Institute of Cognitive Neuroscience, University College London, Alexandra House, 17-19 Queen Square, London WC1N 3AR, United Kingdom
  - Department of Anesthesiology, University of Wisconsin, Madison, USA
- Joel S Winston
  - Institute of Cognitive Neuroscience, University College London, Alexandra House, 17-19 Queen Square, London WC1N 3AR, United Kingdom
  - Wellcome Trust Centre for Neuroimaging, University College London, London WC1N 3BG, United Kingdom
- Gareth R Barnes
  - Wellcome Trust Centre for Neuroimaging, University College London, London WC1N 3BG, United Kingdom
- Geraint Rees
  - Institute of Cognitive Neuroscience, University College London, Alexandra House, 17-19 Queen Square, London WC1N 3AR, United Kingdom
  - Wellcome Trust Centre for Neuroimaging, University College London, London WC1N 3BG, United Kingdom
12. Dykstra AR, Cariani PA, Gutschalk A. A roadmap for the study of conscious audition and its neural basis. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160103. PMID: 28044014; PMCID: PMC5206271; DOI: 10.1098/rstb.2016.0103.
Abstract
How and which aspects of neural activity give rise to subjective perceptual experience, i.e. conscious perception, is a fundamental question of neuroscience. To date, the vast majority of work concerning this question has come from vision, raising the issue of the generalizability of prominent resulting theories. However, recent work has begun to shed light on the neural processes subserving conscious perception in other modalities, particularly audition. Here, we outline a roadmap for the future study of conscious auditory perception and its neural basis, paying particular attention to how conscious perception emerges (and of which elements or groups of elements) in complex auditory scenes. We begin by discussing the functional role of the auditory system, particularly as it pertains to conscious perception. Next, we ask: what are the phenomena that need to be explained by a theory of conscious auditory perception? After surveying the available literature for candidate neural correlates, we end by considering the implications that such results have for a general theory of conscious perception, as well as prominent outstanding questions and the approaches/techniques that can best be used to address them. This article is part of the themed issue 'Auditory and visual scene analysis'.
Affiliation(s)
- Andrew R Dykstra
  - Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Alexander Gutschalk
  - Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
13. Widge AS, Ellard KK, Paulk AC, Basu I, Yousefi A, Zorowitz S, Gilmour A, Afzal A, Deckersbach T, Cash SS, Kramer MA, Eden UT, Dougherty DD, Eskandar EN. Treating refractory mental illness with closed-loop brain stimulation: Progress towards a patient-specific transdiagnostic approach. Exp Neurol 2017; 287:461-472. PMID: 27485972; DOI: 10.1016/j.expneurol.2016.07.021.
14. Dykstra AR, Halgren E, Gutschalk A, Eskandar EN, Cash SS. Neural Correlates of Auditory Perceptual Awareness and Release from Informational Masking Recorded Directly from Human Cortex: A Case Study. Front Neurosci 2016; 10:472. PMID: 27812318; PMCID: PMC5071374; DOI: 10.3389/fnins.2016.00472.
Abstract
In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well-characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity (HGA) between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus) as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of such responses, if at all. While it remains unclear whether these responses reflect conscious perception, itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.
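A sketch of how a high-gamma activity envelope of the kind reported here is typically extracted (the band edges, sampling rate, and simulated trace are assumptions, not the study's parameters):

```python
# Hedged sketch: extract a high-gamma activity (HGA) envelope via band-pass
# filtering and the Hilbert transform, the standard approach for such signals.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(5)
fs = 1000                                   # assumed ECoG sampling rate (Hz)
t = np.arange(-0.2, 0.6, 1 / fs)            # peri-stimulus window (s)
burst = (t > 0.05) & (t < 0.25)             # simulated post-stimulus HGA burst
ecog = rng.standard_normal(t.size)
ecog += burst * np.sin(2 * np.pi * 100 * t) * 2.0

b, a = butter(4, [70, 150], btype="bandpass", fs=fs)  # high-gamma band
hga = np.abs(hilbert(filtfilt(b, a, ecog)))           # analytic amplitude
base = hga[t < 0].mean()
print("HGA increase vs. baseline (50-250 ms): "
      f"{hga[(t > 0.05) & (t < 0.25)].mean() / base:.1f}x")
```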
Affiliation(s)
- Andrew R Dykstra
  - Program in Speech and Hearing Bioscience and Technology, Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, USA
  - Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Eric Halgren
  - Departments of Radiology and Neurosciences, University of California San Diego, La Jolla, CA, USA
- Alexander Gutschalk
  - Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Emad N Eskandar
  - Department of Neurosurgery, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Sydney S Cash
  - Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
15. Billig AJ, Carlyon RP. Automaticity and primacy of auditory streaming: Concurrent subjective and objective measures. J Exp Psychol Hum Percept Perform 2015; 42:339-353. PMID: 26414168; PMCID: PMC4763253; DOI: 10.1037/xhp0000146.
Abstract
Two experiments used subjective and objective measures to study the automaticity and primacy of auditory streaming. Listeners heard sequences of “ABA–” triplets, where “A” and “B” were tones of different frequencies and “–” was a silent gap. Segregation was more frequently reported, and rhythmically deviant triplets less well detected, for a greater between-tone frequency separation and later in the sequence. In Experiment 1, performing a competing auditory task for the first part of the sequence led to a reduction in subsequent streaming compared to when the tones were attended throughout. This is consistent with focused attention promoting streaming, and/or with attention switches resetting it. However, the proportion of segregated reports increased more rapidly following a switch than at the start of a sequence, indicating that some streaming occurred automatically. Modeling ruled out a simple “covert attention” account of this finding. Experiment 2 required listeners to perform subjective and objective tasks concurrently. It revealed superior performance during integrated compared to segregated reports, beyond that explained by the codependence of the two measures on stimulus parameters. We argue that listeners have limited access to low-level stimulus representations once perceptual organization has occurred, and that subjective and objective streaming measures partly index the same processes.
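A sketch of synthesizing the "ABA–" triplet sequences described (frequencies, tone duration, ramps, and triplet count are illustrative assumptions):

```python
# Hedged sketch: synthesize an "ABA-" triplet sequence of the kind used in
# streaming experiments. All parameters below are illustrative assumptions.
import numpy as np

fs = 44100                                  # audio sampling rate (Hz)
f_a, semitones = 500.0, 6                   # A frequency; A-B separation
f_b = f_a * 2 ** (semitones / 12)
tone_dur, n_triplets = 0.125, 20            # 125 ms tones, 20 "ABA-" triplets

def tone(freq, dur):
    t = np.arange(int(fs * dur)) / fs
    ramp = np.minimum(1, np.minimum(t, dur - t) / 0.01)  # 10 ms on/off ramps
    return np.sin(2 * np.pi * freq * t) * ramp

silence = np.zeros(int(fs * tone_dur))
triplet = np.concatenate([tone(f_a, tone_dur), tone(f_b, tone_dur),
                          tone(f_a, tone_dur), silence])   # A B A -
sequence = np.tile(triplet, n_triplets).astype(np.float32)
print(f"{sequence.size / fs:.1f} s sequence, A={f_a:.0f} Hz, B={f_b:.0f} Hz")
```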
16. Golden HL, Agustus JL, Goll JC, Downey LE, Mummery CJ, Schott JM, Crutch SJ, Warren JD. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease. Neuroimage Clin 2015; 7:699-708. PMID: 26029629; PMCID: PMC4446369; DOI: 10.1016/j.nicl.2015.02.019.
Abstract
Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known 'cocktail party effect' as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory 'foreground' and 'background'. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology.
Affiliation(s)
- Hannah L Golden
  - Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Jennifer L Agustus
  - Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Johanna C Goll
  - Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Laura E Downey
  - Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Catherine J Mummery
  - Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Jonathan M Schott
  - Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Sebastian J Crutch
  - Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Jason D Warren
  - Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
17. Davidson GD, Pitts MA. Auditory event-related potentials associated with perceptual reversals of bistable pitch motion. Front Hum Neurosci 2014; 8:572. PMID: 25152722; PMCID: PMC4126364; DOI: 10.3389/fnhum.2014.00572.
Abstract
Previous event-related potential (ERP) experiments have consistently identified two components associated with perceptual transitions of bistable visual stimuli, the "reversal negativity" (RN) and the "late positive complex" (LPC). The RN (~200 ms post-stimulus, bilateral occipital-parietal distribution) is thought to reflect transitions between neural representations that form the moment-to-moment contents of conscious perception, while the LPC (~400 ms, central-parietal) is considered an index of post-perceptual processing related to accessing and reporting one's percept. To explore the generality of these components across sensory modalities, the present experiment utilized a novel bistable auditory stimulus. Pairs of complex tones with ambiguous pitch relationships were presented sequentially while subjects reported whether they perceived the tone pairs as ascending or descending in pitch. ERPs elicited by the tones were compared according to whether perceived pitch motion changed direction or remained the same across successive trials. An auditory reversal negativity (aRN) component was evident at ~170 ms post-stimulus over bilateral fronto-central scalp locations. An auditory LPC component (aLPC) was evident at subsequent latencies (~350 ms, fronto-central distribution). These two components may be auditory analogs of the visual RN and LPC, suggesting functionally equivalent but anatomically distinct processes in auditory vs. visual bistable perception.
18. Neural correlates of auditory streaming in an objective behavioral task. Proc Natl Acad Sci U S A 2014; 111:10738-43. PMID: 25002519; DOI: 10.1073/pnas.1321487111.
Abstract
Segregating streams of sounds from sources in complex acoustic scenes is crucial for perception in real-world situations. We analyzed an objective psychophysical measure of stream segregation, obtained while simultaneously recording forebrain neurons in European starlings, to investigate neural correlates of segregating a stream of A tones from a stream of B tones presented at one-half the rate. The objective measure, sensitivity for detecting a time shift of the B tone, was higher when the A and B tones were of the same frequency (one stream) compared with when there was a 6- or 12-semitone difference between them (two streams). The sensitivity for representing time shifts in spiking patterns was correlated with the behavioral sensitivity. The spiking patterns reflected the stimulus characteristics but not the behavioral response, indicating that the birds' primary cortical field represents the segregated streams, but not the decision process.
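The reported sensitivity measure is naturally summarized as d' from signal detection theory, d' = z(hit rate) - z(false-alarm rate); a sketch with invented trial counts:

```python
# Hedged sketch: d' for time-shift detection, the kind of objective
# sensitivity measure described. The example counts below are made up.
from scipy.stats import norm

def d_prime(hits, misses, fas, crs):
    """d' with a standard 1/(2N) correction for perfect rates."""
    n_sig, n_noise = hits + misses, fas + crs
    hr = min(max(hits / n_sig, 1 / (2 * n_sig)), 1 - 1 / (2 * n_sig))
    far = min(max(fas / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
    return norm.ppf(hr) - norm.ppf(far)

# Same-frequency (one stream) vs. 12-semitone (two streams) conditions, say:
print("one stream :", round(d_prime(hits=42, misses=8, fas=6, crs=44), 2))
print("two streams:", round(d_prime(hits=28, misses=22, fas=10, crs=40), 2))
```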
19. Dolležal LV, Brechmann A, Klump GM, Deike S. Evaluating auditory stream segregation of SAM tone sequences by subjective and objective psychoacoustical tasks, and brain activity. Front Neurosci 2014; 8:119. PMID: 24936170; PMCID: PMC4047832; DOI: 10.3389/fnins.2014.00119.
Abstract
Auditory stream segregation refers to a segregated percept of signal streams with different acoustic features. Different approaches have been pursued in studies of stream segregation. In psychoacoustics, stream segregation has mostly been investigated with a subjective task asking the subjects to report their percept. Few studies have applied an objective task in which stream segregation is evaluated indirectly by determining thresholds for a percept that depends on whether auditory streams are segregated or not. Furthermore, both perceptual measures and physiological measures of brain activity have been employed, but only little is known about their relation. How the results from different tasks and measures are related is evaluated in the present study using examples relying on the ABA- stimulation paradigm that apply the same stimuli. We presented A and B signals that were sinusoidally amplitude modulated (SAM) tones providing purely temporal, spectral, or both types of cues to evaluate perceptual stream segregation and its physiological correlate. Which types of cues are most prominent was determined by the choice of carrier and modulation frequencies (f_mod) of the signals. In the subjective task subjects reported their percept, and in the objective task we measured their sensitivity for detecting time-shifts of B signals in an ABA- sequence. As a further measure of processes underlying stream segregation we employed functional magnetic resonance imaging (fMRI). SAM tone parameters were chosen to evoke an integrated (1-stream), a segregated (2-stream), or an ambiguous percept by adjusting the f_mod difference between A and B tones (Δf_mod). The results of both psychoacoustical tasks are significantly correlated. BOLD responses in fMRI depend on Δf_mod between A and B SAM tones. The effect of Δf_mod, however, differs between auditory cortex and frontal regions, suggesting differences in representation related to the degree of perceptual ambiguity of the sequences.
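A sketch of synthesizing one SAM tone of the kind used for the A and B signals (carrier, f_mod, and modulation depth are assumptions):

```python
# Hedged sketch: a sinusoidally amplitude-modulated (SAM) tone. A and B tones
# would differ in f_mod (Δf_mod) and/or carrier to cue streaming.
import numpy as np

fs = 44100                        # audio sampling rate (Hz)
dur, f_carrier = 0.15, 2000.0     # 150 ms tone, 2 kHz carrier (assumed)
f_mod, depth = 100.0, 1.0         # modulation frequency and depth (m = 1)

t = np.arange(int(fs * dur)) / fs
envelope = 1.0 + depth * np.sin(2 * np.pi * f_mod * t)
sam_tone = np.sin(2 * np.pi * f_carrier * t) * envelope / (1 + depth)
print(f"SAM tone: {dur*1000:.0f} ms, f_c={f_carrier:.0f} Hz, f_mod={f_mod:.0f} Hz")
```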
Affiliation(s)
- Lena-Vanessa Dolležal
  - Animal Physiology and Behavior Group, Department for Neuroscience, School for Medicine and Health Sciences, Center of Excellence "Hearing4all", Carl von Ossietzky University Oldenburg, Oldenburg, Germany
- André Brechmann
  - Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Georg M Klump
  - Animal Physiology and Behavior Group, Department for Neuroscience, School for Medicine and Health Sciences, Center of Excellence "Hearing4all", Carl von Ossietzky University Oldenburg, Oldenburg, Germany
- Susann Deike
  - Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
20. Zündorf IC, Karnath HO, Lewald J. The effect of brain lesions on sound localization in complex acoustic environments. Brain 2014; 137:1410-8. PMID: 24618271; DOI: 10.1093/brain/awu044.
Abstract
Localizing sound sources of interest in cluttered acoustic environments, as in the 'cocktail-party' situation, is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field was directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources, rather than in localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.
Affiliation(s)
- Ida C Zündorf
  - Centre of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
21. Gutschalk A, Dykstra AR. Functional imaging of auditory scene analysis. Hear Res 2013; 307:98-110. PMID: 23968821; DOI: 10.1016/j.heares.2013.08.003.
Abstract
Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Alexander Gutschalk
  - Department of Neurology, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
22. Teki S, Chait M, Kumar S, Shamma S, Griffiths TD. Segregation of complex acoustic scenes based on temporal coherence. eLife 2013; 2:e00699. PMID: 23898398; PMCID: PMC3721234; DOI: 10.7554/eLife.00699.
Abstract
In contrast to the complex acoustic environments we encounter every day, most studies of auditory segregation have used relatively simple signals. Here, we synthesized a new stimulus to examine the detection of coherent patterns (‘figures’) from overlapping ‘background’ signals. In a series of experiments, we demonstrate that human listeners are remarkably sensitive to the emergence of such figures and can tolerate a variety of spectral and temporal perturbations. This robust behavior is consistent with the existence of automatic auditory segregation mechanisms that are highly sensitive to correlations across frequency and time. The observed behavior cannot be explained purely on the basis of adaptation-based models used to explain the segregation of deterministic narrowband signals. We show that the present results are consistent with the predictions of a model of auditory perceptual organization based on temporal coherence. Our data thus support a role for temporal coherence as an organizational principle underlying auditory segregation.

Even when seated in the middle of a crowded restaurant, we are still able to distinguish the speech of the person sitting opposite us from the conversations of fellow diners and a host of other background noise. While we generally perform this task almost effortlessly, it is unclear how the brain solves what is in reality a complex information processing problem. In the 1970s, researchers began to address this question using stimuli consisting of simple tones. When subjects are played a sequence of alternating high and low frequency tones, they perceive them as two independent streams of sound. Similar experiments in macaque monkeys reveal that each stream activates a different area of auditory cortex, suggesting that the brain may distinguish acoustic stimuli on the basis of their frequency. However, the simple tones that are used in laboratory experiments bear little resemblance to the complex sounds we encounter in everyday life. These are often made up of multiple frequencies, and overlap—both in frequency and in time—with other sounds in the environment. Moreover, recent experiments have shown that if a subject hears two tones simultaneously, he or she perceives them as belonging to a single stream of sound even if they have different frequencies: models that assume that we distinguish stimuli from noise on the basis of frequency alone struggle to explain this observation. Now, Teki, Chait, et al. have used more complex sounds, in which frequency components of the target stimuli overlap with those of background signals, to obtain new insights into how the brain solves this problem. Subjects were extremely good at discriminating these complex target stimuli from background noise, and computational modelling confirmed that they did so via integration of both frequency and temporal information. The work of Teki, Chait, et al. thus offers the first explanation for our ability to home in on speech and other pertinent sounds, even amidst a sea of background noise.
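A sketch of a stochastic figure-ground stimulus in the spirit described: random tone chords whose only regularity is a subset of frequencies repeating across chords (all parameters below are invented):

```python
# Hedged sketch: random 50 ms tone chords; a fixed "figure" subset of
# frequencies repeats across chords in the second half, creating the kind of
# temporal coherence the study manipulates. Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(6)
fs, chord_dur = 44100, 0.05                     # 50 ms chords
freq_pool = np.geomspace(200, 7200, 60)         # candidate components (Hz)
n_chords, n_bg, n_fig = 20, 10, 4
figure = rng.choice(freq_pool, n_fig, replace=False)   # coherent components

t = np.arange(int(fs * chord_dur)) / fs
ramp = np.minimum(1, np.minimum(t, chord_dur - t) / 0.005)
chords = []
for k in range(n_chords):
    bg = rng.choice(freq_pool, n_bg, replace=False)     # fresh random background
    comps = np.concatenate([bg, figure]) if k >= 10 else bg  # figure in 2nd half
    chord = sum(np.sin(2 * np.pi * f * t) for f in comps) * ramp
    chords.append(chord / len(comps))
stimulus = np.concatenate(chords).astype(np.float32)
print(f"{stimulus.size / fs:.1f} s stimulus; figure at {np.sort(figure).round(0)} Hz")
```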
Affiliation(s)
- Sundeep Teki
  - Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom
23. Zündorf IC, Lewald J, Karnath HO. Neural correlates of sound localization in complex acoustic environments. PLoS One 2013; 8:e64259. PMID: 23691185; PMCID: PMC3653868; DOI: 10.1371/journal.pone.0064259.
Abstract
Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated, in healthy subjects, the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in the posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and a frontoparietal network. Moreover, the results indicated critical roles of the left planum temporale in extracting the sound of interest among acoustical distracters, and of the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for the analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, seems also to be a crucial area for accurately determining the locations of auditory targets in an acoustically complex scene with multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality.
Affiliation(s)
- Ida C. Zündorf
- Division of Neuropsychology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Ruhr University Bochum, Bochum, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Hans-Otto Karnath
- Division of Neuropsychology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Department of Psychology, University of South Carolina, Columbia, South Carolina, United States of America
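As a rough illustration of how sounds can be assigned to distinct virtual locations of the kind used in this task, the sketch below lateralizes a mono signal using only interaural time and level differences. This is a deliberate simplification under assumed parameter values; the study's actual virtual spatialization (e.g., HRTF-based rendering) would be more sophisticated, so treat this purely as a demonstration of the binaural cues involved.

    import numpy as np

    FS = 44100
    HEAD_RADIUS = 0.0875        # m, an average head radius (assumed)
    SPEED_OF_SOUND = 343.0      # m/s

    def spatialize(mono, azimuth_deg, fs=FS):
        """Place a mono signal at a virtual azimuth (negative = left,
        positive = right) using a Woodworth-approximation ITD plus a crude,
        assumed ILD."""
        az = abs(np.deg2rad(azimuth_deg))
        itd = HEAD_RADIUS / SPEED_OF_SOUND * (az + np.sin(az))  # Woodworth ITD
        delay = int(round(itd * fs))                # ITD applied as a sample delay
        ild_db = 10.0 * abs(azimuth_deg) / 90.0     # assumed: up to 10 dB at +/-90 deg
        gain_far = 10 ** (-ild_db / 20)
        near = np.concatenate([mono, np.zeros(delay)])
        far = gain_far * np.concatenate([np.zeros(delay), mono])
        # far ear (delayed, attenuated) is the one opposite the source
        if azimuth_deg >= 0:
            return np.stack([far, near], axis=1)    # source on the right
        return np.stack([near, far], axis=1)        # source on the left

    # e.g., five virtual azimuths as a stand-in for the five source locations
    azimuths = [-60, -30, 0, 30, 60]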
24
Carl D, Gutschalk A. Role of pattern, regularity, and silent intervals in auditory stream segregation based on inter-aural time differences. Exp Brain Res 2012; 224:557-70. [PMID: 23161159] [DOI: 10.1007/s00221-012-3333-z]
Abstract
Tone triplets separated by a pause (ABA_) are a popular tone-repetition pattern for studying auditory stream segregation. Such triplets produce a galloping rhythm when integrated but isochronous rhythms when segregated. Other patterns lacking a pause may produce less prominent rhythmic differences but stronger streaming. Here, we evaluated whether this difference is readily explained by the presence of the pause, and potentially by an associated reduction of adaptation, or whether the tone pattern per se contributes. Sequences with repetitive ABA_ and ABAA patterns were presented during magnetoencephalography; A and B tones were separated by differences in inter-aural time difference (ΔITD). Results showed that the stronger streaming of ABAA was associated with a more prominent release from adaptation of the P1m in auditory cortex. We further compared behavioral streaming responses for patterns with and without pauses, varying the position of the pause and the regularity of the pattern. Results showed a major effect of the pause's presence, but no prominent effects of tone pattern or pattern regularity. These results make a case for an early, primitive streaming mechanism that does not require the later-stage analysis of the tone pattern suggested by predictive-coding models of auditory streaming. The results are better explained by the simpler population-separation model and underscore the previously observed role of neural adaptation in streaming perception.
Affiliation(s)
- David Carl
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
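The following sketch illustrates the ABA_ and ABAA patterns discussed in this entry, with A and B tones at the same frequency but opposite interaural time differences (ΔITD) implemented as a simple sample delay between the ears. The carrier frequency, tone duration, and ΔITD value are assumptions for illustration, not the study's parameters.

    import numpy as np

    FS = 44100
    TONE_DUR = 0.1        # s
    FREQ = 500.0          # Hz, same carrier for A and B tones
    DELTA_ITD = 0.0005    # s, total ITD separation between A and B (+/-0.25 ms)

    def tone_with_itd(itd_s, dur=TONE_DUR, fs=FS):
        """Pure tone lateralized by delaying one ear relative to the other."""
        t = np.arange(int(dur * fs)) / fs
        x = np.sin(2 * np.pi * FREQ * t) * np.hanning(len(t))
        d = int(round(abs(itd_s) * fs))
        lead = np.concatenate([x, np.zeros(d)])
        lag = np.concatenate([np.zeros(d), x])
        # positive ITD -> right ear leads (tone lateralized to the right)
        if itd_s > 0:
            return np.stack([lag, lead], axis=1)
        return np.stack([lead, lag], axis=1)

    A = tone_with_itd(+DELTA_ITD / 2)
    B = tone_with_itd(-DELTA_ITD / 2)
    silence = np.zeros_like(A)

    aba_pause = np.concatenate([A, B, A, silence])  # ABA_: galloping when integrated
    abaa = np.concatenate([A, B, A, A])             # ABAA: no pause, stronger streaming
    sequence = np.concatenate([aba_pause] * 10)     # repeat the pattern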
25
Lewis JW, Talkington WJ, Tallaksen KC, Frum CA. Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes. Front Syst Neurosci 2012; 6:27. [PMID: 22582038] [PMCID: PMC3348722] [DOI: 10.3389/fnsys.2012.00027]
Abstract
Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though it relies on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, though with strong dependence on listening-task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to the spectral structure variations (SSVs) of the action sounds (a quantitative measure of change in the entropy of the acoustic signal over time), and the right STG additionally showed parametric sensitivity to measures of the mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes characteristic of action events perceived as object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of everyday, real-world action sounds.
Affiliation(s)
- James W Lewis
- Center for Neuroscience, West Virginia University, Morgantown, WV, USA
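The entry above characterizes sounds via “spectral structure variation” (SSV), described as a quantitative measure of change in the entropy of the acoustic signal over time. One plausible reading of such a measure, sketched below, computes per-frame spectral entropy from a short-time Fourier transform and takes its variability over time; the paper's exact definition may differ, so this is only an assumed formulation of the kind of quantity involved.

    import numpy as np

    def spectral_entropy_per_frame(x, fs, frame=1024, hop=512):
        """Shannon entropy (bits) of the normalized power spectrum per frame."""
        entropies = []
        for start in range(0, len(x) - frame, hop):
            seg = x[start:start + frame] * np.hanning(frame)
            p = np.abs(np.fft.rfft(seg)) ** 2
            p = p / (p.sum() + 1e-12)              # normalize to a distribution
            entropies.append(-(p * np.log2(p + 1e-12)).sum())
        return np.asarray(entropies)

    def ssv(x, fs):
        """Variability of spectral entropy over time (one candidate SSV)."""
        return spectral_entropy_per_frame(x, fs).std()

    def mean_entropy(x, fs):
        """Mean spectral entropy, another attribute mentioned in the entry."""
        return spectral_entropy_per_frame(x, fs).mean()

Under this reading, a steady broadband texture (scene-like) would yield a flat entropy trajectory and low SSV, whereas a discrete mechanical event (object-like) would produce larger entropy changes over time and hence a higher SSV.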
26
Snyder JS, Gregg MK, Weintraub DM, Alain C. Attention, awareness, and the perception of auditory scenes. Front Psychol 2012; 3:15. [PMID: 22347201] [PMCID: PMC3273855] [DOI: 10.3389/fpsyg.2012.00015]
Abstract
Auditory perception and cognition entail both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. Recent studies have also shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study the neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. Research has also produced a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and should allow scientists to precisely distinguish among different higher-level influences.
Affiliation(s)
- Joel S. Snyder
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, USA
- Melissa K. Gregg
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, USA
- David M. Weintraub
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, USA
- Claude Alain
- The Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada