1
Wang B, Torok Z, Duffy A, Bell DG, Wongso S, Velho TAF, Fairhall AL, Lois C. Unsupervised restoration of a complex learned behavior after large-scale neuronal perturbation. Nat Neurosci 2024; 27:1176-1186. [PMID: 38684893 DOI: 10.1038/s41593-024-01630-6]
Abstract
Reliable execution of precise behaviors requires that brain circuits are resilient to variations in neuronal dynamics. Genetic perturbation of the majority of excitatory neurons in HVC, a brain region involved in song production, in adult songbirds with stereotypical songs triggered severe degradation of the song. The song fully recovered within 2 weeks, and substantial improvement occurred even when animals were prevented from singing during the recovery period, indicating that offline mechanisms enable recovery in an unsupervised manner. Song restoration was accompanied by increased excitatory synaptic input to neighboring, unmanipulated neurons in the same brain region. A model inspired by the behavioral and electrophysiological findings suggests that unsupervised single-cell and population-level homeostatic plasticity rules can support the functional restoration after large-scale disruption of networks that implement sequential dynamics. These observations suggest the existence of cellular and systems-level restorative mechanisms that ensure behavioral resilience.
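The unsupervised homeostatic mechanism suggested by this abstract can be illustrated with a toy multiplicative synaptic-scaling rule. The sketch below is not the authors' model; the network sizes, firing rates, silenced fraction, and learning rate are all invented for illustration. Each spared neuron scales its remaining excitatory inputs until its total drive returns to a set point, with no error signal about the behavior itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 200, 50
w = rng.uniform(0.0, 0.2, size=(n_post, n_pre))  # excitatory weights onto spared cells
rates = np.full(n_pre, 5.0)                      # presynaptic firing rates (Hz)
target = w @ rates                               # set point: each cell's baseline drive

rates[rng.random(n_pre) < 0.6] = 0.0             # perturbation: silence 60% of inputs

eta = 0.02
for _ in range(2000):                            # multiplicative synaptic scaling
    drive = w @ rates
    w *= (1.0 + eta * (target - drive) / target)[:, None]

print(np.allclose(w @ rates, target, rtol=1e-3))  # True: drive restored, unsupervised
```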
Affiliation(s)
- Bo Wang
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA.
- Zsofia Torok
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Alison Duffy
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- David G Bell
- Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- Department of Physics, University of Washington, Seattle, WA, USA
- Shelyn Wongso
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Tarciso A F Velho
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- Department of Physics, University of Washington, Seattle, WA, USA
- Carlos Lois
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA.
2
Zheng J, Teoh HK, Delco ML, Bonassar LJ, Cohen I. Application of a variational autoencoder for clustering and analyzing in situ articular cartilage cellular response to mechanical stimuli. PLoS One 2024; 19:e0297947. [PMID: 38768116 PMCID: PMC11104615 DOI: 10.1371/journal.pone.0297947]
Abstract
In various biological systems, analyzing how cell behaviors are coordinated over time would enable a deeper understanding of tissue-scale response to physiologic or superphysiologic stimuli. Such data is necessary for establishing both normal tissue function and the sequence of events after injury that lead to chronic disease. However, collecting and analyzing these large datasets presents a challenge-such systems are time-consuming to process, and the overwhelming scale of data makes it difficult to parse overall behaviors. This problem calls for an analysis technique that can quickly provide an overview of the groups present in the entire system and also produce meaningful categorization of cell behaviors. Here, we demonstrate the application of an unsupervised method-the Variational Autoencoder (VAE)-to learn the features of cells in cartilage tissue after impact-induced injury and identify meaningful clusters of chondrocyte behavior. This technique quickly generated new insights into the spatial distribution of specific cell behavior phenotypes and connected specific peracute calcium signaling timeseries with long term cellular outcomes, demonstrating the value of the VAE technique.
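A minimal sketch of the VAE ingredient this abstract describes, in PyTorch; the fully connected layers and all sizes are assumptions for illustration, not the paper's architecture. Clustering (e.g., k-means) would then be run on the latent means mu.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE for fixed-length cell-signal traces (sizes are assumptions)."""
    def __init__(self, n_in=256, n_latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, n_latent)
        self.to_logvar = nn.Linear(128, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                 nn.Linear(128, n_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    rec = F.mse_loss(recon, x, reduction="sum")                    # reconstruction term
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    return rec + kld
```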
Affiliation(s)
- Jingyang Zheng
- Department of Physics, Cornell University, Ithaca, NY, United States of America
- Han Kheng Teoh
- Department of Physics, Cornell University, Ithaca, NY, United States of America
- Michelle L. Delco
- College of Veterinary Medicine, Cornell University, Ithaca, NY, United States of America
- Lawrence J. Bonassar
- Meinig School of Biomedical Engineering, Cornell University, Ithaca, NY, United States of America
- Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY, United States of America
- Itai Cohen
- Department of Physics, Cornell University, Ithaca, NY, United States of America
3
Santana GM, Dietrich MO. SqueakOut: Autoencoder-based segmentation of mouse ultrasonic vocalizations. bioRxiv 2024:2024.04.19.590368. [PMID: 38712291 PMCID: PMC11071348 DOI: 10.1101/2024.04.19.590368]
Abstract
Mice emit ultrasonic vocalizations (USVs) that are important for social communication. Despite great advancements in tools to detect USVs from audio files in recent years, highly accurate segmentation of USVs from spectrograms (i.e., removing noise) remains a significant challenge. Here, we present a new dataset of 12,954 annotated spectrograms explicitly labeled for mouse USV segmentation. Leveraging this dataset, we developed SqueakOut, a lightweight (4.6M parameters) fully convolutional autoencoder that achieves high accuracy in supervised segmentation of USVs from spectrograms, with a Dice score of 90.22. SqueakOut combines a MobileNetV2 backbone with skip connections and transposed convolutions to precisely segment USVs. Using stochastic data augmentation techniques and a hybrid loss function, SqueakOut learns robust segmentation across varying recording conditions. We evaluate SqueakOut's performance, demonstrating substantial improvements over existing methods like VocalMat (63.82 Dice score). The accurate USV segmentations enabled by SqueakOut will facilitate novel methods for vocalization classification and more accurate analysis of mouse communication. To promote further research, we release the annotated 12,954 spectrogram USV segmentation dataset and the SqueakOut implementation publicly.
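The Dice score used to evaluate SqueakOut is straightforward to compute. A sketch with a toy pair of masks follows; the 0-100 scaling matches how the paper reports the metric, while the mask contents are arbitrary.

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((64, 64)); a[10:30, 10:30] = 1   # toy predicted mask
b = np.zeros((64, 64)); b[15:35, 15:35] = 1   # toy ground-truth mask
print(round(dice_score(a, b) * 100, 2))       # 56.25 on the paper's 0-100 scale
```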
Affiliation(s)
- Gustavo M Santana
- Laboratory of Physiology of Behavior, Interdepartmental Neuroscience Program, Program in Physics, Engineering and Biology, Yale University, USA
- Graduate Program in Biochemistry, Federal University of Rio Grande do Sul, BRA
- Marcelo O Dietrich
- Laboratory of Physiology of Behavior, Department of Comparative Medicine, Department of Neuroscience, Yale University, USA
4
Alam D, Zia F, Roberts TF. The hidden fitness of the male zebra finch courtship song. Nature 2024; 628:117-121. [PMID: 38509376 DOI: 10.1038/s41586-024-07207-4]
Abstract
Vocal learning in songbirds is thought to have evolved through sexual selection, with female preference driving males to develop large and varied song repertoires [1-3]. However, many songbird species learn only a single song in their lifetime [4]. How sexual selection drives the evolution of single-song repertoires is not known. Here, by applying dimensionality-reduction techniques to the singing behaviour of zebra finches (Taeniopygia guttata), we show that syllable spread in low-dimensional feature space explains how single songs function as honest indicators of fitness. We find that this Gestalt measure of behaviour captures the spectrotemporal distinctiveness of song syllables in zebra finches; that females strongly prefer songs that occupy more latent space; and that matching path lengths in low-dimensional space is difficult for young males. Our findings clarify how simple vocal repertoires may have evolved in songbirds and indicate divergent strategies for how sexual selection can shape vocal learning.
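A hedged sketch of measuring how much low-dimensional space a song occupies, assuming precomputed syllable spectrograms in a hypothetical .npy file; the hull-area and mean-pairwise-distance measures are illustrative stand-ins, not necessarily the paper's exact metric.

```python
import numpy as np
import umap                         # umap-learn package
from scipy.spatial import ConvexHull

X = np.load("syllable_spectrograms.npy")  # hypothetical, shape (n_syllables, n_features)
emb = umap.UMAP(n_components=2, random_state=0).fit_transform(X)

hull_area = ConvexHull(emb).volume        # in 2-D, .volume is the hull's area
dists = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)  # fine for modest n
print(f"hull area {hull_area:.2f}, mean pairwise distance {dists.mean():.2f}")
```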
Affiliation(s)
- Danyal Alam
- Department of Neuroscience, UT Southwestern Medical Center, Dallas, TX, USA
- Fayha Zia
- Department of Neuroscience, UT Southwestern Medical Center, Dallas, TX, USA
- Todd F Roberts
- Department of Neuroscience, UT Southwestern Medical Center, Dallas, TX, USA.
- O'Donnell Brain Institute, UT Southwestern Medical Center, Dallas, TX, USA.
5
Martin K, Cornero FM, Clayton NS, Adam O, Obin N, Dufour V. Vocal complexity in a socially complex corvid: gradation, diversity and lack of common call repertoire in male rooks. R Soc Open Sci 2024; 11:231713. [PMID: 38204786 PMCID: PMC10776222 DOI: 10.1098/rsos.231713]
Abstract
Vocal communication is widespread in animals, with vocal repertoires of varying complexity. The social complexity hypothesis predicts that species may need high vocal complexity to deal with complex social organization (e.g. have a variety of different interindividual relations). We quantified the vocal complexity of two geographically distant captive colonies of rooks, a corvid species with complex social organization and cognitive performance, but understudied vocal abilities. We quantified the diversity and gradation of their repertoire, as well as the inter-individual similarity at the vocal unit level. We found that males produced call units with lower diversity and gradation than females, while song units did not differ between sexes. Surprisingly, while females produced highly similar call repertoires, even between colonies, each individual male produced almost completely different call repertoires from any other individual. These findings question the way male rooks communicate with their social partners. We suggest that each male may actively seek to remain vocally distinct, which could be an asset in their frequently changing social environment. We conclude that inter-individual similarity, an understudied aspect of vocal repertoires, should also be considered as a measure of vocal complexity.
Affiliation(s)
- Killian Martin
- PRC, UMR 7247, Ethologie Cognitive et Sociale, CNRS-IFCE-INRAE-Université de Tours, Strasbourg, France
- Olivier Adam
- Institut Jean Le Rond d'Alembert, UMR 7190, CNRS-Sorbonne Université, 75005 Paris, France
- Institut des Neurosciences Paris-Saclay, UMR 9197, CNRS-Université Paris Sud, Orsay, France
- Nicolas Obin
- STMS Lab, IRCAM, CNRS-Sorbonne Université, Paris, France
- Valérie Dufour
- PRC, UMR 7247, Ethologie Cognitive et Sociale, CNRS-IFCE-INRAE-Université de Tours, Strasbourg, France
6
Zhao Z, Teoh HK, Carpenter J, Nemon F, Kardon B, Cohen I, Goldberg JH. Anterior forebrain pathway in parrots is necessary for producing learned vocalizations with individual signatures. Curr Biol 2023; 33:5415-5426.e4. [PMID: 38070505 PMCID: PMC10799565 DOI: 10.1016/j.cub.2023.11.014]
Abstract
Parrots have enormous vocal imitation capacities and produce individually unique vocal signatures. Like songbirds, parrots have a nucleated neural song system with distinct anterior (AFP) and posterior forebrain pathways (PFP). To test if song systems of parrots and songbirds, which diverged over 50 million years ago, have a similar functional organization, we first established a neuroscience-compatible call-and-response behavioral paradigm to elicit learned contact calls in budgerigars (Melopsittacus undulatus). Using variational autoencoder-based machine learning methods, we show that contact calls within affiliated groups converge but that individuals maintain unique acoustic features, or vocal signatures, even after call convergence. Next, we transiently inactivated the outputs of AFP to test if learned vocalizations can be produced by the PFP alone. As in songbirds, AFP inactivation had an immediate effect on vocalizations, consistent with a premotor role. But in contrast to songbirds, where the isolated PFP is sufficient to produce stereotyped and acoustically normal vocalizations, isolation of the budgerigar PFP caused a degradation of call acoustic structure, stereotypy, and individual uniqueness. Thus, the contribution of AFP and the capacity of isolated PFP to produce learned vocalizations have diverged substantially between songbirds and parrots, likely driven by their distinct behavioral ecology and neural connectivity.
Affiliation(s)
- Zhilei Zhao
- Department of Neurobiology and Behavior, Cornell University, Ithaca, NY 14853, USA
- Han Kheng Teoh
- Department of Physics, Cornell University, Ithaca, NY 14853, USA
- Julie Carpenter
- Department of Neurobiology and Behavior, Cornell University, Ithaca, NY 14853, USA
- Frieda Nemon
- Department of Neurobiology and Behavior, Cornell University, Ithaca, NY 14853, USA
- Brian Kardon
- Department of Neurobiology and Behavior, Cornell University, Ithaca, NY 14853, USA
- Itai Cohen
- Department of Physics, Cornell University, Ithaca, NY 14853, USA
- Jesse H Goldberg
- Department of Neurobiology and Behavior, Cornell University, Ithaca, NY 14853, USA.
7
Torok Z, Luebbert L, Feldman J, Duffy A, Nevue AA, Wongso S, Mello CV, Fairhall A, Pachter L, Gonzalez WG, Lois C. Recovery of a learned behavior despite partial restoration of neuronal dynamics after chronic inactivation of inhibitory neurons. bioRxiv 2023:2023.05.17.541057. [PMID: 37292888 PMCID: PMC10245685 DOI: 10.1101/2023.05.17.541057]
Abstract
Maintaining motor skills is crucial for an animal's survival, enabling it to endure diverse perturbations throughout its lifespan, such as trauma, disease, and aging. What mechanisms orchestrate brain circuit reorganization and recovery to preserve the stability of behavior despite the continued presence of a disturbance? To investigate this question, we chronically silenced a fraction of inhibitory neurons in a brain circuit necessary for singing in zebra finches. Song in zebra finches is a complex, learned motor behavior and central to reproduction. This manipulation altered brain activity and severely perturbed song for around two months, after which time it was precisely restored. Electrophysiology recordings revealed abnormal offline dynamics, resulting from chronic inhibition loss, some aspects of which returned to normal as the song recovered. However, even after the song had fully recovered, the levels of neuronal firing in the premotor and motor areas did not return to a control-like state. Single-cell RNA sequencing revealed that chronic silencing of interneurons led to elevated levels of microglia and MHC I, which were also observed in normal juveniles during song learning. These experiments demonstrate that the adult brain can overcome extended periods of abnormal activity, and precisely restore a complex behavior, without recovering normal neuronal dynamics. These findings suggest that the successful functional recovery of a brain circuit after a perturbation can involve more than mere restoration to its initial configuration. Instead, the circuit seems to adapt and reorganize into a new state capable of producing the original behavior despite the persistence of some abnormal neuronal dynamics.
Affiliation(s)
- Zsofia Torok
- Division of Biology and Biological Engineering, California Institute of Technology; Pasadena, CA, USA
- Laura Luebbert
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Jordan Feldman
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Shelyn Wongso
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Lior Pachter
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
- Walter G. Gonzalez
- Department of Physiology, University of San Francisco, San Francisco, CA, USA
- Carlos Lois
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
8
Coffey KR, Nickelson WB, Dawkins AJ, Neumaier JF. Rapid appearance of negative emotion during oral fentanyl self-administration in male and female rats. Addict Biol 2023; 28:e13344. [PMID: 38017643 PMCID: PMC10745948 DOI: 10.1111/adb.13344]
Abstract
Opioid use disorder has become an epidemic in the United States, fuelled by the widespread availability of fentanyl, which produces rapid and intense euphoria followed by severe withdrawal and emotional distress. We developed a new preclinical model of fentanyl seeking in outbred male and female rats using volitional oral self-administration (SA) that can be readily applied in labs without intravascular access. Using a traditional two-lever operant procedure, rats learned to take oral fentanyl vigorously, escalated intake across sessions, and readily reinstated responding to conditioned cues after extinction. Oral SA also revealed individual and sex differences that are essential to studying substance use risk propensity. During a behavioural economics task, rats displayed inelastic demand curves and maintained stable intake across a wide range of fentanyl concentrations. Oral SA was also neatly patterned, with distinct 'loading' and 'maintenance' phases of responding within each session. Using our software DeepSqueak, we analysed ultrasonic vocalizations (USVs), which are innate expressions of current emotional state in rats. Rats produced 50 kHz USVs during loading then shifted quickly to 22 kHz calls despite ongoing maintenance of oral fentanyl taking, reflecting a transition to negative reinforcement. Using fibre photometry, we found that the lateral habenula differentially processed drug cues and drug consumption depending on affective state, with potentiated modulation by drug cues and consumption during the negative affective maintenance phase. Together, these results indicate a rapid progression from positive to negative reinforcement occurs even within an active drug taking session, revealing a within-session opponent process.
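The demand-curve analysis mentioned here can be approximated with the Hursh-Silberberg exponential demand model, a standard choice in behavioral economics; the paper does not specify its fitting code, and the data points below are invented. A smaller fitted alpha indicates more inelastic demand.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_demand(cost, q0, alpha, k=2.0):
    """Hursh-Silberberg exponential demand: log10(intake) vs unit cost."""
    return np.log10(q0) + k * (np.exp(-alpha * q0 * cost) - 1.0)

cost = np.array([1.0, 3.0, 10.0, 30.0, 100.0])     # invented unit prices
intake = np.array([95.0, 90.0, 82.0, 70.0, 40.0])  # invented fentanyl intake

# With p0 of length 2, curve_fit fits only q0 and alpha; k keeps its default.
(q0, alpha), _ = curve_fit(exp_demand, cost, np.log10(intake), p0=[100.0, 1e-3])
print(f"Q0 = {q0:.1f}, alpha = {alpha:.2e}")       # smaller alpha = more inelastic
```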
Affiliation(s)
- Kevin R. Coffey
- Department of Psychiatry & Behavioral Sciences, University of Washington School of Medicine, Seattle, WA, 98105, USA
- William B. Nickelson
- Mental Illness Research, Education and Clinical Center, Puget Sound VA Health Care System, 1660 S Columbian Way, Seattle, WA 98108
- Aliyah J. Dawkins
- Mental Illness Research, Education and Clinical Center, Puget Sound VA Health Care System, 1660 S Columbian Way, Seattle, WA 98108
- John F. Neumaier
- Mental Illness Research, Education and Clinical Center, Puget Sound VA Health Care System, 1660 S Columbian Way, Seattle, WA 98108
- Department of Psychiatry & Behavioral Sciences, University of Washington School of Medicine, Seattle, WA, 98105, USA
- Department of Pharmacology, University of Washington School of Medicine, Seattle, WA, 98105, USA
9
Rutz C, Bronstein M, Raskin A, Vernes SC, Zacarian K, Blasi DE. Using machine learning to decode animal communication. Science 2023; 381:152-155. [PMID: 37440653 DOI: 10.1126/science.adg7314]
Abstract
New methods promise transformative insights and conservation benefits.
Affiliation(s)
- Christian Rutz
- School of Biology, University of St Andrews, St Andrews, Scotland, UK
- Michael Bronstein
- School of Biology, University of St Andrews, St Andrews, Scotland, UK
- Aza Raskin
- School of Biology, University of St Andrews, St Andrews, Scotland, UK
- Sonja C Vernes
- School of Biology, University of St Andrews, St Andrews, Scotland, UK
- Damián E Blasi
- School of Biology, University of St Andrews, St Andrews, Scotland, UK
10
Best P, Paris S, Glotin H, Marxer R. Deep audio embeddings for vocalisation clustering. PLoS One 2023; 18:e0283396. [PMID: 37428759 DOI: 10.1371/journal.pone.0283396]
Abstract
The study of non-human animals' communication systems generally relies on the transcription of vocal sequences using a finite set of discrete units. This set is referred to as a vocal repertoire, which is specific to a species or a sub-group of a species. When conducted by human experts, the formal description of vocal repertoires can be laborious and/or biased. This motivates computerised assistance for this procedure, for which machine learning algorithms represent a good opportunity. Unsupervised clustering algorithms are suited for grouping close points together, provided a relevant representation. This paper therefore studies a new method for encoding vocalisations, allowing for automatic clustering to alleviate vocal repertoire characterisation. Borrowing from deep representation learning, we use a convolutional auto-encoder network to learn an abstract representation of vocalisations. We report on the quality of the learnt representation, as well as of state-of-the-art methods, by quantifying their agreement with expert-labelled vocalisation types from 8 datasets of other studies across 6 species (birds and marine mammals). With this benchmark, we demonstrate that using auto-encoders improves the relevance of vocalisation representation which serves repertoire characterisation using a very limited number of settings. We also publish a Python package for the bioacoustic community to train their own vocalisation auto-encoders or use a pretrained encoder to browse vocal repertoires and ease unit-wise annotation.
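A sketch of the evaluation step described above, assuming embeddings and expert labels saved in hypothetical .npy files; KMeans and adjusted mutual information are illustrative choices, not necessarily the exact clusterer and agreement measure used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_mutual_info_score

emb = np.load("vocalisation_embeddings.npy")  # hypothetical bottleneck activations
expert = np.load("expert_labels.npy")         # hypothetical expert vocalisation types

k = len(np.unique(expert))
pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(emb)
print("agreement with experts (AMI):", adjusted_mutual_info_score(expert, pred))
```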
Affiliation(s)
- Paul Best
- Université de Toulon, Aix Marseille Univ, CNRS, LIS, Toulon, France
- Sébastien Paris
- Université de Toulon, Aix Marseille Univ, CNRS, LIS, Toulon, France
- Hervé Glotin
- Université de Toulon, Aix Marseille Univ, CNRS, LIS, Toulon, France
- Ricard Marxer
- Université de Toulon, Aix Marseille Univ, CNRS, LIS, Toulon, France
11
Coffey KR, Nickelson W, Dawkins AJ, Neumaier JF. Rapid appearance of negative emotion during oral fentanyl self-administration in male and female rats. bioRxiv 2023:2023.04.27.538613. [PMID: 37163074 PMCID: PMC10168304 DOI: 10.1101/2023.04.27.538613]
Abstract
Opioid use disorder has become an epidemic in the United States, fueled by the widespread availability of fentanyl, which produces rapid and intense euphoria followed by severe withdrawal and emotional distress. We developed a new preclinical model of fentanyl seeking in outbred male and female rats using volitional oral self-administration (SA) that can be readily applied in labs without intravascular access. Using a traditional two-lever operant procedure, rats learned to take oral fentanyl vigorously, escalated intake across sessions, and readily reinstated responding to conditioned cues after extinction. Oral SA also revealed individual and sex differences that are essential to studying substance use risk propensity. During a behavioral economics task, rats displayed inelastic demand curves and maintained stable intake across a wide range of fentanyl concentrations. Oral SA was also neatly patterned, with distinct "loading" and "maintenance" phases of responding within each session. Using our software DeepSqueak, we analyzed thousands of ultrasonic vocalizations (USVs), which are innate expressions of current emotional state in rats. Rats produced 50 kHz USVs during loading then shifted quickly to 22 kHz calls despite ongoing maintenance of oral fentanyl taking, reflecting a transition to negative reinforcement. Using fiber photometry, we found that the lateral habenula differentially processed drug cues and drug consumption depending on affective state, with potentiated modulation by drug cues and consumption during the negative affective maintenance phase. Together, these results indicate a rapid progression from positive to negative reinforcement occurs even within an active drug taking session, revealing a within-session opponent process.
Affiliation(s)
- Kevin R. Coffey
- Department of Psychiatry & Behavioral Sciences, University of Washington School of Medicine, Seattle, WA, 98104, USA
- William Nickelson
- Mental Illness Research, Education and Clinical Center, Puget Sound VA Health Care System, 660 S Columbian Way, Seattle, WA 98108
- Aliyah J. Dawkins
- Mental Illness Research, Education and Clinical Center, Puget Sound VA Health Care System, 660 S Columbian Way, Seattle, WA 98108
- John F. Neumaier
- Mental Illness Research, Education and Clinical Center, Puget Sound VA Health Care System, 660 S Columbian Way, Seattle, WA 98108
- Department of Psychiatry & Behavioral Sciences, University of Washington School of Medicine, Seattle, WA, 98104, USA
- Department of Pharmacology, University of Washington School of Medicine, Seattle, WA, 98104, USA
12
Brudner S, Pearson J, Mooney R. Generative models of birdsong learning link circadian fluctuations in song variability to changes in performance. PLoS Comput Biol 2023; 19:e1011051. [PMID: 37126511 PMCID: PMC10150982 DOI: 10.1371/journal.pcbi.1011051]
Abstract
Learning skilled behaviors requires intensive practice over days, months, or years. Behavioral hallmarks of practice include exploratory variation and long-term improvements, both of which can be impacted by circadian processes. During weeks of vocal practice, the juvenile male zebra finch transforms highly variable and simple song into a stable and precise copy of an adult tutor's complex song. Song variability and performance in juvenile finches also exhibit circadian structure that could influence this long-term learning process. In fact, one influential study reported juvenile song regresses towards immature performance overnight, while another suggested a more complex pattern of overnight change. However, neither of these studies thoroughly examined how circadian patterns of variability may structure the production of more or less mature songs. Here we relate the circadian dynamics of song maturation to circadian patterns of song variation, leveraging a combination of data-driven approaches. In particular we analyze juvenile singing in learned feature space that supports both data-driven measures of song maturity and generative developmental models of song production. These models reveal that circadian fluctuations in variability lead to especially regressive morning variants even without overall overnight regression, and highlight the utility of data-driven generative models for untangling these contributions.
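An illustrative toy version of such a generative model, not the authors' fitted model; every parameter below is an assumption. Daily mean performance improves monotonically while within-day variability shrinks from morning to evening, which by itself yields regressive-looking morning renditions.

```python
import numpy as np

rng = np.random.default_rng(1)
days, songs_per_day = 30, 100
target = 1.0                                       # tutor value of some song feature

maturity = np.empty((days, songs_per_day))
for d in range(days):
    mean_d = target * (d + 1) / days               # slow, practice-driven improvement
    hour = rng.uniform(0.0, 14.0, songs_per_day)   # when in the day each song occurs
    sigma = np.clip(0.30 - 0.015 * hour, 0.05, None)  # exploration highest in the morning
    maturity[d] = rng.normal(mean_d, sigma)

# Within each day the earliest songs are drawn with the largest sigma, so the
# most regressive (lowest-maturity) renditions cluster in the morning even
# though the daily mean never regresses overnight.
```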
Affiliation(s)
- Samuel Brudner
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina, United States of America
- John Pearson
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina, United States of America
- Department of Biostatistics & Bioinformatics, Duke University, Durham, North Carolina, United States of America
- Richard Mooney
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina, United States of America
13
Arnaud V, Pellegrino F, Keenan S, St-Gelais X, Mathevon N, Levréro F, Coupé C. Improving the workflow to crack Small, Unbalanced, Noisy, but Genuine (SUNG) datasets in bioacoustics: The case of bonobo calls. PLoS Comput Biol 2023; 19:e1010325. [PMID: 37053268 PMCID: PMC10129004 DOI: 10.1371/journal.pcbi.1010325]
Abstract
Despite the accumulation of data and studies, deciphering animal vocal communication remains challenging. In most cases, researchers must deal with the sparse recordings composing Small, Unbalanced, Noisy, but Genuine (SUNG) datasets. SUNG datasets are characterized by a limited number of recordings, most often noisy, and unbalanced in number between the individuals or categories of vocalizations. SUNG datasets therefore offer a valuable but inevitably distorted vision of communication systems. Adopting the best practices in their analysis is essential to effectively extract the available information and draw reliable conclusions. Here we show that the most recent advances in machine learning applied to a SUNG dataset succeed in unraveling the complex vocal repertoire of the bonobo, and we propose a workflow that can be effective with other animal species. We implement acoustic parameterization in three feature spaces and run a Supervised Uniform Manifold Approximation and Projection (S-UMAP) to evaluate how call types and individual signatures cluster in the bonobo acoustic space. We then implement three classification algorithms (Support Vector Machine, xgboost, neural networks) and their combination to explore the structure and variability of bonobo calls, as well as the robustness of the individual signature they encode. We underscore how classification performance is affected by the feature set and identify the most informative features. In addition, we highlight the need to address data leakage in the evaluation of classification performance to avoid misleading interpretations. Our results lead to identifying several practical approaches that are generalizable to any other animal communication system. To improve the reliability and replicability of vocal communication studies with SUNG datasets, we thus recommend: i) comparing several acoustic parameterizations; ii) visualizing the dataset with supervised UMAP to examine the species acoustic space; iii) adopting Support Vector Machines as the baseline classification approach; iv) explicitly evaluating data leakage and possibly implementing a mitigation strategy.
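Recommendation iii) and the data-leakage warning in iv) can be combined in a few lines of scikit-learn: grouping cross-validation folds by individual keeps any one animal's calls out of both training and test sets simultaneously. The file names below are hypothetical.

```python
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.load("call_features.npy")    # hypothetical acoustic feature matrix
y = np.load("call_types.npy")       # call-type labels
groups = np.load("caller_ids.npy")  # one group per individual bonobo

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# GroupKFold keeps all calls of an individual in one fold, so the classifier
# is never tested on an animal it has already seen in training (no leakage).
scores = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=groups)
print(scores.mean())
```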
Affiliation(s)
- Vincent Arnaud
- Département des arts, des lettres et du langage, Université du Québec à Chicoutimi, Chicoutimi, Canada
- Laboratoire Dynamique Du Langage, UMR 5596, Université de Lyon, CNRS, Lyon, France
- François Pellegrino
- Laboratoire Dynamique Du Langage, UMR 5596, Université de Lyon, CNRS, Lyon, France
- Sumir Keenan
- ENES Bioacoustics Research Laboratory, University of Saint Étienne, CRNL, CNRS UMR 5292, Inserm UMR_S 1028, Saint-Étienne, France
- Xavier St-Gelais
- Département des arts, des lettres et du langage, Université du Québec à Chicoutimi, Chicoutimi, Canada
- Nicolas Mathevon
- ENES Bioacoustics Research Laboratory, University of Saint Étienne, CRNL, CNRS UMR 5292, Inserm UMR_S 1028, Saint-Étienne, France
- Florence Levréro
- ENES Bioacoustics Research Laboratory, University of Saint Étienne, CRNL, CNRS UMR 5292, Inserm UMR_S 1028, Saint-Étienne, France
- Christophe Coupé
- Laboratoire Dynamique Du Langage, UMR 5596, Université de Lyon, CNRS, Lyon, France
- Department of Linguistics, The University of Hong Kong, Hong Kong, China
14
Barker AJ. Acoustic communication: Deer mice join the chorus. Curr Biol 2023; 33:R264-R266. [PMID: 37040707 DOI: 10.1016/j.cub.2023.02.020]
Abstract
A new study has identified two distinct pup vocalizations in deer mice, showing that discrete genetic loci explain the acoustic variation between these two call types and that the calls elicit different levels of maternal responsiveness.
Affiliation(s)
- Alison J Barker
- Social Systems and Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany.
15
Jourjine N, Woolfolk ML, Sanguinetti-Scheck JI, Sabatini JE, McFadden S, Lindholm AK, Hoekstra HE. Two pup vocalization types are genetically and functionally separable in deer mice. Curr Biol 2023; 33:1237-1248.e4. [PMID: 36893759 DOI: 10.1016/j.cub.2023.02.045]
Abstract
Vocalization is a widespread social behavior in vertebrates that can affect fitness in the wild. Although many vocal behaviors are highly conserved, heritable features of specific vocalization types can vary both within and between species, raising the questions of why and how some vocal behaviors evolve. Here, using new computational tools to automatically detect and cluster vocalizations into distinct acoustic categories, we compare pup isolation calls across neonatal development in eight taxa of deer mice (genus Peromyscus) and compare them with laboratory mice (C57BL/6J strain) and free-living, wild house mice (Mus musculus domesticus). Whereas both Peromyscus and Mus pups produce ultrasonic vocalizations (USVs), Peromyscus pups also produce a second call type with acoustic features, temporal rhythms, and developmental trajectories that are distinct from those of USVs. In deer mice, these lower frequency "cries" are predominantly emitted in postnatal days one through nine, whereas USVs are primarily made after day 9. Using playback assays, we show that cries result in a more rapid approach by Peromyscus mothers than USVs, suggesting a role for cries in eliciting parental care early in neonatal development. Using a genetic cross between two sister species of deer mice exhibiting large, innate differences in the acoustic structure of cries and USVs, we find that variation in vocalization rate, duration, and pitch displays different degrees of genetic dominance and that cry and USV features can be uncoupled in second-generation hybrids. Taken together, this work shows that vocal behavior can evolve quickly between closely related rodent species in which vocalization types, likely serving distinct functions in communication, are controlled by distinct genetic loci.
Affiliation(s)
- Nicholas Jourjine
- Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- Maya L Woolfolk
- Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- Juan I Sanguinetti-Scheck
- Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- John E Sabatini
- Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- Sade McFadden
- Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- Anna K Lindholm
- Department of Evolutionary Biology & Environmental Studies, University of Zürich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Hopi E Hoekstra
- Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA.
16
Lorenz C, Hao X, Tomka T, Rüttimann L, Hahnloser RH. Interactive extraction of diverse vocal units from a planar embedding without the need for prior sound segmentation. Front Bioinform 2023; 2:966066. [PMID: 36710910 PMCID: PMC9880044 DOI: 10.3389/fbinf.2022.966066]
Abstract
Annotating and proofreading data sets of complex natural behaviors such as vocalizations are tedious tasks because instances of a given behavior need to be correctly segmented from background noise and must be classified with minimal false positive error rate. Low-dimensional embeddings have proven very useful for this task because they can provide a visual overview of a data set in which distinct behaviors appear in different clusters. However, low-dimensional embeddings introduce errors because they fail to preserve distances, and they represent only objects of fixed dimensionality, which conflicts with vocalizations that have variable dimensions stemming from their variable durations. To mitigate these issues, we introduce a semi-supervised, analytical method for simultaneous segmentation and clustering of vocalizations. We define a given vocalization type by specifying pairs of high-density regions in the embedding plane of sound spectrograms, one region associated with vocalization onsets and the other with offsets. We demonstrate our two-neighborhood (2N) extraction method on the task of clustering adult zebra finch vocalizations embedded with UMAP. We show that 2N extraction allows the identification of short and long vocal renditions from continuous data streams without initially committing to a particular segmentation of the data. Also, 2N extraction achieves much lower false positive error rate than comparable approaches based on a single defining region. Along with our method, we present a graphical user interface (GUI) for visualizing and annotating data.
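A toy sketch of the two-neighborhood idea, with circles standing in for the user-specified high-density regions; the actual method uses interactively drawn regions, and the files, centers, and radii here are invented.

```python
import numpy as np

# Embeddings of spectrogram windows at candidate sound onsets/offsets
# (hypothetical files; in the paper these come from a UMAP of the recordings).
emb_on = np.load("onset_embedding.npy")    # shape (n, 2)
emb_off = np.load("offset_embedding.npy")  # shape (n, 2)
t_on = np.load("onset_times.npy")
t_off = np.load("offset_times.npy")

def in_region(points, center, radius):
    """Circular stand-in for one user-drawn high-density region."""
    return np.linalg.norm(points - center, axis=1) < radius

# One vocalization type = one onset region paired with one offset region.
on_sel = t_on[in_region(emb_on, np.array([2.0, -1.0]), 0.5)]
off_sel = t_off[in_region(emb_off, np.array([-3.0, 0.5]), 0.5)]

units = []
for start in on_sel:                       # pair each onset with the next offset
    later = off_sel[off_sel > start]
    if later.size:
        units.append((start, later[0]))
```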
Affiliation(s)
- Corinna Lorenz
- Institute of Neuroinformatics and Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Université Paris-Saclay, CNRS, Institut des Neurosciences Paris-Saclay, Saclay, France
- Xinyu Hao
- Institute of Neuroinformatics and Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Tomas Tomka
- Institute of Neuroinformatics and Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Linus Rüttimann
- Institute of Neuroinformatics and Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Richard H.R. Hahnloser
- Institute of Neuroinformatics and Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
17
Pranic NM, Kornbrek C, Yang C, Cleland TA, Tschida KA. Rates of ultrasonic vocalizations are more strongly related than acoustic features to non-vocal behaviors in mouse pups. Front Behav Neurosci 2022; 16:1015484. [PMID: 36600992 PMCID: PMC9805956 DOI: 10.3389/fnbeh.2022.1015484]
Abstract
Mouse pups produce ultrasonic vocalizations (USVs) in response to isolation from the nest (i.e., isolation USVs). Rates and acoustic features of isolation USVs change dramatically over the first two weeks of life, and there is also substantial variability in the rates and acoustic features of isolation USVs at a given postnatal age. The factors that contribute to within-age variability in isolation USVs remain largely unknown. Here, we explore the extent to which non-vocal behaviors of mouse pups relate to the within-age variability in rates and acoustic features of their USVs. We recorded non-vocal behaviors of isolated C57BL/6J mouse pups at four postnatal ages (postnatal days 5, 10, 15, and 20), measured rates of isolation USV production, and applied a combination of pre-defined acoustic feature measurements and an unsupervised machine learning-based vocal analysis method to examine USV acoustic features. When we considered different categories of non-vocal behavior, our analyses revealed that mice in all postnatal age groups produce higher rates of isolation USVs during active non-vocal behaviors than when lying still. Moreover, rates of isolation USVs are correlated with the intensity (i.e., magnitude) of non-vocal body and limb movements within a given trial. In contrast, USVs produced during different categories of non-vocal behaviors and during different intensities of non-vocal movement do not differ substantially in their acoustic features. Our findings suggest that levels of behavioral arousal contribute to within-age variability in rates, but not acoustic features, of mouse isolation USVs.
18
Michaud F, Sueur J, Le Cesne M, Haupert S. Unsupervised classification to improve the quality of a bird song recording dataset. Ecol Inform 2022. [DOI: 10.1016/j.ecoinf.2022.101952]
19
Abstract
Have you ever felt as happy as a lark, feathered your nest or taken someone under your wing? As we watch birds, we cannot help but be struck by their uncannily familiar behaviors - singing, nest building, caring for their young - to name just a few. Songbirds - the oscine suborder of perching birds that constitute roughly half (∼4,000) of all known avian species - are noted for the songs that males and sometimes both sexes in this group sing to court mates and defend territory from rivals. Birdsongs contain several to many acoustically distinct syllables, typically organized into a stereotyped phrase, and span the same audio bandwidth that we exploit for speech and music, making them easy for us to hear and appreciate. Consequently, eavesdropping humans long ago detected the most striking parallel between songbirds and humans: juvenile songbirds learn to sing in a manner similar to a child learning to speak.
Affiliation(s)
- Richard Mooney
- Department of Neurobiology, Duke University School of Medicine, Durham, NC 27710, USA.
20
Rookognise: Acoustic detection and identification of individual rooks in field recordings using multi-task neural networks. Ecol Inform 2022. [DOI: 10.1016/j.ecoinf.2022.101818]
21
Stoumpou V, Vargas CDM, Schade PF, Boyd JL, Giannakopoulos T, Jarvis ED. Analysis of Mouse Vocal Communication (AMVOC): a deep, unsupervised method for rapid detection, analysis and classification of ultrasonic vocalisations. Bioacoustics 2022. [DOI: 10.1080/09524622.2022.2099973]
Affiliation(s)
- Vasiliki Stoumpou
- School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- César D. M. Vargas
- Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA
- Peter F. Schade
- Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA
- Laboratory of Neural Systems, The Rockefeller University, New York, NY, USA
- J. Lomax Boyd
- Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD, USA
- Theodoros Giannakopoulos
- Computational Intelligence Lab, Institute of Informatics and Telecommunications, National Center of Scientific Research 'Demokritos', Athens, Greece
- Erich D. Jarvis
- Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA
- Howard Hughes Medical Institute, Chevy Chase, MD, USA
22
Mai L, Inada H, Kimura R, Kanno K, Matsuda T, Tachibana RO, Tucci V, Komaki F, Hiroi N, Osumi N. Advanced paternal age diversifies individual trajectories of vocalization patterns in neonatal mice. iScience 2022; 25:104834. [PMID: 36039363 PMCID: PMC9418688 DOI: 10.1016/j.isci.2022.104834]
Abstract
Infant crying is a communicative behavior impaired in neurodevelopmental disorders (NDDs). Because advanced paternal age is a risk factor for NDDs, we performed computational approaches to evaluate how paternal age affected vocal communication and body weight development in C57BL/6 mouse offspring from young and aged fathers. Analyses of ultrasonic vocalization (USV) consisting of syllables showed that advanced paternal age reduced the number and duration of syllables, altered the syllable composition, and caused lower body weight gain in pups. Pups born to young fathers had convergent vocal characteristics with a rich repertoire, whereas those born to aged fathers exhibited more divergent vocal patterns with limited repertoire. Additional analyses revealed that some pups from aged fathers displayed atypical USV trajectories. Thus, our study indicates that advanced paternal age has a significant effect on offspring's vocal development. Our computational analyses are effective in characterizing altered individual diversity.
Affiliation(s)
- Lingling Mai
- Department of Developmental Neuroscience, Tohoku University Graduate School of Medicine, Sendai 980-8575, Japan
- Hitoshi Inada
- Department of Developmental Neuroscience, Tohoku University Graduate School of Medicine, Sendai 980-8575, Japan
- Laboratory of Health and Sports Sciences, Division of Biomedical Engineering for Health and Welfare, Tohoku University Graduate School of Biomedical Engineering, Sendai 980-8575, Japan
- Ryuichi Kimura
- Department of Developmental Neuroscience, Tohoku University Graduate School of Medicine, Sendai 980-8575, Japan
- Department of Drug Discovery Medicine, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
- Kouta Kanno
- Faculty of Law, Economics and Humanities, Kagoshima University, Kagoshima 890-0065, Japan
- Takeru Matsuda
- Statistical Mathematics Unit, RIKEN Center for Brain Science, Wako 351-0198, Japan
- Ryosuke O Tachibana
- Department of Life Science, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo 153-8902, Japan
- Valter Tucci
- Genetics and Epigenetics of Behavior (GEB) Laboratory, Istituto Italiano di Tecnologia, Genova 16163, Italy
- Fumiyasu Komaki
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan
- Mathematical Informatics Collaboration Unit, RIKEN Center for Brain Science, Wako 351-0198, Japan
- Noboru Hiroi
- Department of Pharmacology, University of Texas Health Science Center at San Antonio, San Antonio 78229, USA
- Department of Cellular and Integrative Physiology, University of Texas Health Science Center at San Antonio, San Antonio 78229, USA
- Department of Cell Systems and Anatomy, University of Texas Health Science Center at San Antonio, San Antonio 78229, USA
- Noriko Osumi
- Department of Developmental Neuroscience, Tohoku University Graduate School of Medicine, Sendai 980-8575, Japan
23
Karigo T. Gaining insights into the internal states of the rodent brain through vocal communications. Neurosci Res 2022; 184:1-8. [PMID: 35908736 DOI: 10.1016/j.neures.2022.07.008]
Abstract
Animals display various behaviors during social interactions. Social behaviors have been proposed to be driven by the internal states of the animals, reflecting their emotional or motivational states. However, the internal states that drive social behaviors are complex and difficult to interpret. Many animals, including mice, use vocalizations for communication in various social contexts. This review provides an overview of current understanding of mouse vocal communication, its underlying neural circuitry, and the potential to use vocal communication as a readout for the animal's internal states during social interactions.
Affiliation(s)
- Tomomi Karigo
- Division of Biology and Biological Engineering 140-18, TianQiao and Chrissy Chen Institute for Neuroscience, California Institute of Technology, Pasadena, CA 91125, USA
- Present address: Kennedy Krieger Institute, Baltimore, MD 21205, USA; The Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
24
Matsumoto J, Kanno K, Kato M, Nishimaru H, Setogawa T, Chinzorig C, Shibata T, Nishijo H. Acoustic camera system for measuring ultrasound communication in mice. iScience 2022; 25:104812. [PMID: 35982786 PMCID: PMC9379670 DOI: 10.1016/j.isci.2022.104812]
Abstract
To investigate biological mechanisms underlying social behaviors and their deficits, social communication via ultrasonic vocalizations (USVs) in mice has received considerable attention as a powerful experimental model. The advances in sound localization technology have facilitated the analysis of vocal interactions between multiple mice. However, existing sound localization systems are built around distributed-microphone arrays, which require a special recording arena and long processing time. Here, we report a novel acoustic camera system, USVCAM, which enables simpler and faster USV localization and assignment. The system comprises recently developed USV segmentation algorithms with a modification for overlapping vocalizations that results in high accuracy. Using USVCAM, we analyzed USV communications in a conventional home cage, and demonstrated novel vocal interactions in female ICR mice under a resident-intruder paradigm. The extended applicability and usability of USVCAM may facilitate future studies investigating typical and atypical vocal communication and social behaviors, as well as the underlying mechanisms.
Highlights: A new sound localization system for ultrasonic vocalizations in mice was proposed; a simpler recording setup and faster processing were achieved by utilizing phase lag; vocal interactions in a resident-intruder paradigm were analyzed with the system; the system may facilitate future studies investigating social behaviors.
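The phase-lag localization the highlights mention is commonly implemented with generalized cross-correlation with phase transform (GCC-PHAT); below is a generic sketch of the time-difference-of-arrival step, not USVCAM's actual code.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Time difference of arrival between two microphones via GCC-PHAT."""
    n = sig.size + ref.size
    S = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(S / (np.abs(S) + 1e-12), n=n)   # phase transform whitening
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    # positive delay: `sig` arrives later than `ref`
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```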
25
Trainor BC, Falkner AL. Quantifying Sex Differences in Behavior in the Era of "Big" Data. Cold Spring Harb Perspect Biol 2022; 14:a039164. [PMID: 34607831 PMCID: PMC9159265 DOI: 10.1101/cshperspect.a039164]
Abstract
Sex differences are commonly observed in behaviors that are closely linked to adaptive function, but sex differences can also be observed in behavioral "building blocks" such as locomotor activity and reward processing, and differences in complex behaviors may result from differences in these building blocks. Modern neuroscientific inquiry, in pursuit of generalizable principles of functioning across sexes, has often ignored these more subtle sex differences. A frequent assumption is that there is a default (often male) way to perform a behavior. This approach misses fundamental drivers of individual variability within and between sexes. Incomplete behavioral descriptions of both sexes can lead to an overreliance on reduced "single-variable" readouts of complex behaviors, the design of which may be based on male-biased samples. Here, we advocate that the incorporation of new machine-learning tools for collecting and analyzing multimodal "big behavior" data allows for a more holistic and richer approach to the quantification of behavior in both sexes. These new tools make behavioral description more robust and replicable across laboratories and species, and may open up new lines of neuroscientific inquiry by facilitating the discovery of novel behavioral states. Having more accurate measures of behavioral diversity in males and females could serve as a hypothesis generator for where and when we should look in the brain for meaningful neural differences.
Affiliation(s)
- Brian C Trainor
- Department of Psychology, University of California, Davis, California 95616, USA
26
Abbasi R, Balazs P, Marconi MA, Nicolakis D, Zala SM, Penn DJ. Capturing the songs of mice with an improved detection and classification method for ultrasonic vocalizations (BootSnap). PLoS Comput Biol 2022; 18:e1010049. [PMID: 35551265 PMCID: PMC9098080 DOI: 10.1371/journal.pcbi.1010049]
Abstract
House mice communicate through ultrasonic vocalizations (USVs), which are above the range of human hearing (>20 kHz), and several automated methods have been developed for USV detection and classification. Here we evaluate their advantages and disadvantages in a full, systematic comparison, while also presenting a new approach. This study aims to 1) determine the most efficient USV detection tool among the existing methods, and 2) develop a classification model that is more generalizable than existing methods. In both cases, we aim to minimize the user intervention required for processing new data. We compared the performance of four detection methods in an out-of-the-box approach: the pretrained DeepSqueak detector, MUPET, USVSEG, and the Automatic Mouse Ultrasound Detector (A-MUD). We also compared these methods to human visual or 'manual' classification (ground truth) after assessing its reliability. A-MUD and USVSEG outperformed the other methods in terms of true positive rates using default and adjusted settings, respectively, and A-MUD outperformed USVSEG when false detection rates were also considered. For automating the classification of USVs, we developed BootSnap for supervised classification, which combines bootstrapping of convolutional neural networks trained on gammatone spectrograms with snapshot ensemble learning. It successfully classified calls into 12 types, including a new class of false positives that is useful for detection refinement. BootSnap outperformed the pretrained and retrained state-of-the-art tool, and thus it is more generalizable. BootSnap is freely available for scientific use.

House mice and many other species use ultrasonic vocalizations to communicate in various contexts, including social and sexual interactions. These vocalizations are increasingly investigated in research on animal communication and as a phenotype for studying the genetic basis of autism and speech disorders. Because manual methods for analyzing vocalizations are extremely time-consuming, automatic tools for detection and classification are needed. We evaluated the performance of the available tools for analyzing ultrasonic vocalizations, and we compared detection tools for the first time to manual methods ("ground truth") using recordings from wild-derived and laboratory mice. For the first time, the class-wise inter-observer reliability of the manual labels used for ground truth is analyzed and reported. Moreover, we developed a new classification method based on ensemble deep learning that provides more generalizability than the current state-of-the-art tool (both pretrained and retrained). Our new classification method is free for scientific use.
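BootSnap's central ingredient, snapshot ensembling, can be sketched generically: train with a cyclic cosine-annealed learning rate, save the weights at the end of each cycle, and average the snapshots' softmax outputs at test time. The toy architecture and all names below are illustrative placeholders under those assumptions, not the published model:

```python
import math
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Hypothetical toy CNN over spectrogram images; BootSnap's actual
    architecture differs."""
    def __init__(self, n_classes=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def snapshot_train(model, loader, epochs=50, n_snapshots=5, lr0=0.1):
    """Cyclic cosine annealing; save one weight snapshot per cycle."""
    opt = torch.optim.SGD(model.parameters(), lr=lr0)
    loss_fn = nn.CrossEntropyLoss()
    per_cycle = epochs // n_snapshots
    snapshots = []
    for epoch in range(epochs):
        t = (epoch % per_cycle) / per_cycle
        for g in opt.param_groups:       # anneal LR within each cycle
            g["lr"] = 0.5 * lr0 * (1 + math.cos(math.pi * t))
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        if (epoch + 1) % per_cycle == 0:  # end of cycle: keep a snapshot
            snapshots.append({k: v.clone() for k, v in model.state_dict().items()})
    return snapshots

def ensemble_predict(model, snapshots, x):
    """Average softmax outputs across the saved snapshots."""
    probs = []
    for w in snapshots:
        model.load_state_dict(w)
        model.eval()
        with torch.no_grad():
            probs.append(torch.softmax(model(x), dim=1))
    return torch.stack(probs).mean(0)
```

Each cycle ends in a different local minimum, so the snapshots disagree usefully; averaging them buys ensemble robustness for the training cost of a single model.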
Collapse
Affiliation(s)
- Reyhaneh Abbasi
- Acoustic Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
- Vienna Doctoral School of Cognition, Behaviour and Neuroscience, University of Vienna, Vienna, Austria
| | - Peter Balazs
- Acoustic Research Institute, Austrian Academy of Sciences, Vienna, Austria
| | - Maria Adelaide Marconi
- Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
| | - Doris Nicolakis
- Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
| | - Sarah M. Zala
- Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
| | - Dustin J. Penn
- Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
| |
Collapse
|
27
|
Duffy A, Latimer KW, Goldberg JH, Fairhall AL, Gadagkar V. Dopamine neurons evaluate natural fluctuations in performance quality. Cell Rep 2022; 38:110574. [PMID: 35354031 PMCID: PMC9013488 DOI: 10.1016/j.celrep.2022.110574] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Revised: 01/04/2022] [Accepted: 03/04/2022] [Indexed: 11/25/2022] Open
Abstract
Many motor skills are learned by comparing ongoing behavior to internal performance benchmarks. Dopamine neurons encode performance error in behavioral paradigms where error is externally induced, but it remains unknown whether dopamine also signals the quality of natural performance fluctuations. Here, we record dopamine neurons in singing birds and examine how spontaneous dopamine spiking activity correlates with natural fluctuations in ongoing song. Antidromically identified basal ganglia-projecting dopamine neurons correlate with recent, and not future, song variations, consistent with a role in evaluation, not production. Furthermore, maximal dopamine spiking occurs at a single vocal target, consistent with either actively maintaining the existing song or shifting the song to a nearby form. These data show that spontaneous dopamine spiking can evaluate natural behavioral fluctuations unperturbed by experimental events such as cues or rewards.

Learning and producing skilled behavior requires an internal measure of performance. Duffy et al. examine dopamine neurons' relationship to natural song in singing birds. Spontaneous dopamine activity correlates with song fluctuations in a manner consistent with evaluation of natural behavioral variations, independent of external perturbations, cues, or rewards.
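The claim that dopamine activity tracks recent, not future, song variations is the kind of statement a signed lag analysis makes concrete. The toy function below is a generic sketch of such an analysis, not the authors' code; all names are hypothetical, and the inputs are assumed to be equal-length per-rendition series:

```python
import numpy as np

def lagged_correlation(spiking, performance, max_lag):
    """Correlate a per-rendition spike-rate series with a per-rendition
    performance series at signed lags. Negative lags pair spiking with
    earlier performance (evaluation); positive lags pair it with later
    performance (production/prediction)."""
    lags = np.arange(-max_lag, max_lag + 1)
    corr = []
    for lag in lags:
        if lag < 0:
            a, b = spiking[-lag:], performance[:lag]  # spiking follows song
        elif lag > 0:
            a, b = spiking[:-lag], performance[lag:]  # spiking precedes song
        else:
            a, b = spiking, performance
        corr.append(np.corrcoef(a, b)[0, 1])
    return lags, np.array(corr)
```

An evaluation signal in this framing shows correlation concentrated at negative lags and near zero at positive ones.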
Collapse
Affiliation(s)
- Alison Duffy
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA; Computational Neuroscience Center, University of Washington, Seattle, WA 98195, USA
| | - Kenneth W Latimer
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA; Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA
| | - Jesse H Goldberg
- Department of Neurobiology and Behavior, Cornell University, Ithaca, NY 14853, USA
| | - Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA; Computational Neuroscience Center, University of Washington, Seattle, WA 98195, USA.
| | - Vikram Gadagkar
- Department of Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA.
| |
Collapse
|
28
|
Cohen Y, Nicholson DA, Sanchioni A, Mallaber EK, Skidanova V, Gardner TJ. Automated annotation of birdsong with a neural network that segments spectrograms. eLife 2022; 11:e63853. [PMID: 35050849 PMCID: PMC8860439 DOI: 10.7554/elife.63853] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2020] [Accepted: 01/19/2022] [Indexed: 11/13/2022] Open
Abstract
Songbirds provide a powerful model system for studying sensory-motor learning. However, many analyses of birdsong require time-consuming, manual annotation of its elements, called syllables. Automated methods for annotation have been proposed, but these methods assume that audio can be cleanly segmented into syllables, or they require carefully tuning multiple statistical models. Here we present TweetyNet: a single neural network model that learns how to segment spectrograms of birdsong into annotated syllables. We show that TweetyNet mitigates limitations of methods that rely on segmented audio. We also show that TweetyNet performs well across multiple individuals from two species of songbirds, Bengalese finches and canaries. Lastly, we demonstrate that using TweetyNet we can accurately annotate very large datasets containing multiple days of song, and that these predicted annotations replicate key findings from behavioral studies. In addition, we provide open-source software to assist other researchers, and a large dataset of annotated canary song that can serve as a benchmark. We conclude that TweetyNet makes it possible to address a wide range of new questions about birdsong.
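TweetyNet's core idea, assigning a syllable label to every time bin of the spectrogram and then collapsing runs of identical labels into annotated segments, can be illustrated with a minimal frame-classification model. The sketch below follows the general convolution-plus-recurrent design the paper describes, but layer sizes and helper names are hypothetical; the real implementation is in the authors' open-source software:

```python
import torch
import torch.nn as nn

class FrameLabeler(nn.Module):
    """Toy frame classifier in the spirit of TweetyNet: convolutional
    features over the spectrogram, a bidirectional LSTM over time, and
    a per-time-bin syllable label."""
    def __init__(self, n_freq_bins, n_classes):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),         # pool frequency, keep time resolution
        )
        self.rnn = nn.LSTM(8 * (n_freq_bins // 2), 64,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(128, n_classes)

    def forward(self, spect):                 # spect: (batch, 1, freq, time)
        h = self.conv(spect)                  # (batch, ch, freq', time)
        h = h.permute(0, 3, 1, 2).flatten(2)  # (batch, time, ch * freq')
        h, _ = self.rnn(h)
        return self.out(h)                    # (batch, time, n_classes)

def labels_to_segments(frame_labels, dt, silence=0):
    """Collapse per-frame labels into (onset_s, offset_s, label) segments."""
    segments, start = [], None
    labels = list(frame_labels) + [silence]   # sentinel closes a final run
    for i in range(len(labels) - 1):
        if labels[i] != silence and start is None:
            start = i
        if start is not None and labels[i + 1] != labels[i]:
            segments.append((start * dt, (i + 1) * dt, labels[i]))
            start = i + 1 if labels[i + 1] != silence else None
    return segments
```

Because segmentation falls out of the per-frame labels rather than preceding classification, noisy audio that resists clean amplitude-based segmentation can still be annotated.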
Collapse
Affiliation(s)
- Yarden Cohen
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
| | | | - Alexa Sanchioni
- Department of Biology, Boston University, Boston, United States
| | | | | | - Timothy J Gardner
- Phil and Penny Knight Campus for Accelerating Scientific Impact, University of Oregon, Eugene, United States
| |
Collapse
|
29
|
Vocal Learning and Behaviors in Birds and Human Bilinguals: Parallels, Divergences and Directions for Research. Languages 2021. [DOI: 10.3390/languages7010005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Comparisons between the communication systems of humans and animals are instrumental in contextualizing speech and language into an evolutionary and biological framework and for illuminating mechanisms of human communication. As a complement to previous work that compares developmental vocal learning and use among humans and songbirds, in this article we highlight phenomena associated with vocal learning subsequent to the development of primary vocalizations (i.e., the primary language (L1) in humans and the primary song (S1) in songbirds). By framing avian "second-song" (S2) learning and use within the human second-language (L2) context, we lay the groundwork for a scientifically rich dialogue between disciplines. We begin by summarizing basic birdsong research, focusing on how songs are learned and on constraints on learning. We then consider commonalities in vocal learning across humans and birds, in particular the timing and neural mechanisms of learning, variability of input, and variability of outcomes. For S2 and L2 learning outcomes, we address the respective roles of age, entrenchment, and social interactions. We proceed to orient current and future birdsong inquiry around foundational features of human bilingualism: L1 effects on the L2, L1 attrition, and L1↔L2 switching. Throughout, we highlight characteristics that are shared across species as well as the need for caution in interpreting birdsong research. Thus, from multiple instructive perspectives, our interdisciplinary dialogue sheds light on biological and experiential principles of L2 acquisition that are informed by birdsong research, and leverages well-studied characteristics of bilingualism in order to clarify, contextualize, and further explore S2 learning and use in songbirds.
Collapse
|
30
|
Sainburg T, Gentner TQ. Toward a Computational Neuroethology of Vocal Communication: From Bioacoustics to Neurophysiology, Emerging Tools and Future Directions. Front Behav Neurosci 2021; 15:811737. [PMID: 34987365 PMCID: PMC8721140 DOI: 10.3389/fnbeh.2021.811737] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Accepted: 11/29/2021] [Indexed: 11/23/2022] Open
Abstract
Recently developed methods in computational neuroethology have enabled increasingly detailed and comprehensive quantification of animal movements and behavioral kinematics. Vocal communication behavior is well poised for application of similar large-scale quantification methods in the service of physiological and ethological studies. This review describes emerging techniques that can be applied to acoustic and vocal communication signals with the goal of enabling study beyond a small number of model species. We review a range of modern computational methods for bioacoustics, signal processing, and brain-behavior mapping. Along with a discussion of recent advances and techniques, we include challenges and broader goals in establishing a framework for the computational neuroethology of vocal communication.
Collapse
Affiliation(s)
- Tim Sainburg
- Department of Psychology, University of California, San Diego, La Jolla, CA, United States
- Center for Academic Research & Training in Anthropogeny, University of California, San Diego, La Jolla, CA, United States
| | - Timothy Q. Gentner
- Department of Psychology, University of California, San Diego, La Jolla, CA, United States
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA, United States
- Neurobiology Section, Division of Biological Sciences, University of California, San Diego, La Jolla, CA, United States
- Kavli Institute for Brain and Mind, University of California, San Diego, La Jolla, CA, United States
| |
Collapse
|
31
|
Singh Alvarado J, Goffinet J, Michael V, Liberti W, Hatfield J, Gardner T, Pearson J, Mooney R. Neural dynamics underlying birdsong practice and performance. Nature 2021; 599:635-639. [PMID: 34671166 PMCID: PMC9118926 DOI: 10.1038/s41586-021-04004-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Accepted: 09/07/2021] [Indexed: 11/09/2022]
Abstract
Musical and athletic skills are learned and maintained through intensive practice to enable precise and reliable performance for an audience. Consequently, understanding such complex behaviours requires insight into how the brain functions during both practice and performance. Male zebra finches learn to produce courtship songs that are more varied when alone and more stereotyped in the presence of females [1]. These differences are thought to reflect song practice and performance, respectively [2,3], providing a useful system in which to explore how neurons encode and regulate motor variability in these two states. Here we show that calcium signals in ensembles of spiny neurons (SNs) in the basal ganglia are highly variable relative to their cortical afferents during song practice. By contrast, SN calcium signals are strongly suppressed during female-directed performance, and optogenetically suppressing SNs during practice strongly reduces vocal variability. Unsupervised learning methods [4,5] show that specific SN activity patterns map onto distinct song practice variants. Finally, we establish that noradrenergic signalling reduces vocal variability by directly suppressing SN activity. Thus, SN ensembles encode and drive vocal exploration during practice, and the noradrenergic suppression of SN activity promotes stereotyped and precise song performance for an audience.
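The statement that unsupervised learning maps specific SN activity patterns onto distinct practice variants can be illustrated with a deliberately simple stand-in analysis: cluster per-rendition population activity vectors, then measure agreement with independently labeled song variants. This sketch runs on synthetic data and uses plain k-means; the study's actual methods (its refs. 4,5) differ:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
activity = rng.normal(size=(200, 50))         # 200 renditions x 50 neurons (toy)
song_variant = rng.integers(0, 3, size=200)   # toy per-rendition variant labels

# Cluster neural activity without using the behavioral labels, then ask
# post hoc whether clusters align with the song variants.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(activity)
print("cluster/variant agreement:", adjusted_rand_score(song_variant, clusters))
```

On real data, agreement well above chance would support a mapping from neural activity patterns to behavioral variants; the random toy data here should score near zero.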
Collapse
Affiliation(s)
| | - Jack Goffinet
- Department of Computer Science, Duke University, Durham, NC, USA
| | - Valerie Michael
- Department of Neurobiology, Duke University, Durham, NC, USA
| | - William Liberti
- Department of Electrical Engineering and Computer Science, University of California Berkeley, Berkeley, CA, USA
| | - Jordan Hatfield
- Department of Neurobiology, Duke University, Durham, NC, USA
| | - Timothy Gardner
- Phil and Penny Knight Campus for Accelerating Scientific Impact, University of Oregon, Eugene, OR, USA
| | - John Pearson
- Department of Neurobiology, Duke University, Durham, NC, USA.
- Department of Biostatistics & Bioinformatics, Duke University, Durham, NC, USA.
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA.
| | - Richard Mooney
- Department of Neurobiology, Duke University, Durham, NC, USA.
| |
Collapse
|
32
|
Steinfath E, Palacios-Muñoz A, Rottschäfer JR, Yuezak D, Clemens J. Fast and accurate annotation of acoustic signals with deep neural networks. eLife 2021; 10:e68837. [PMID: 34723794 PMCID: PMC8560090 DOI: 10.7554/elife.68837] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Accepted: 10/04/2021] [Indexed: 01/06/2023] Open
Abstract
Acoustic signals serve communication within and across species throughout the animal kingdom. Studying the genetics, evolution, and neurobiology of acoustic communication requires annotating acoustic signals: segmenting and identifying individual acoustic elements like syllables or sound pulses. To be useful, annotations need to be accurate, robust to noise, and fast. Here we introduce Deep Audio Segmenter (DAS), a method that annotates acoustic signals across species based on a deep-learning derived hierarchical representation of sound. We demonstrate the accuracy, robustness, and speed of DAS using acoustic signals with diverse characteristics from insects, birds, and mammals. DAS comes with a graphical user interface for annotating song, training the network, and generating and proofreading annotations. The method can be trained to annotate signals from new species with little manual annotation and can be combined with unsupervised methods to discover novel signal types. DAS annotates song with high throughput and low latency, enabling real-time experimental interventions. Overall, DAS is a universal, versatile, and accessible tool for annotating acoustic communication signals.
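One common way to get the long temporal context and fast, parallel inference that low-latency frame-wise annotation requires is a stack of dilated 1-D convolutions. The module below illustrates that general idea only; the names are hypothetical and this is not DAS's published architecture:

```python
import torch
import torch.nn as nn

class DilatedAnnotator(nn.Module):
    """Minimal stack of dilated 1-D convolutions mapping an input signal
    (waveform channels or spectrogram frames) to per-frame logits.
    Illustrative sketch; DAS's actual architecture differs in detail."""
    def __init__(self, in_channels=1, hidden=32, n_classes=2, n_layers=4):
        super().__init__()
        layers, ch = [], in_channels
        for i in range(n_layers):
            d = 2 ** i                        # exponentially growing dilation
            layers += [nn.Conv1d(ch, hidden, kernel_size=3,
                                 padding=d, dilation=d), nn.ReLU()]
            ch = hidden
        self.net = nn.Sequential(*layers)
        self.out = nn.Conv1d(hidden, n_classes, kernel_size=1)

    def forward(self, x):                     # x: (batch, channels, time)
        return self.out(self.net(x))          # (batch, n_classes, time)
```

Doubling the dilation at each layer grows the receptive field exponentially with depth, so a shallow network can integrate context over many milliseconds while remaining cheap enough to run in real time.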
Collapse
Affiliation(s)
- Elsa Steinfath
- European Neuroscience Institute - A Joint Initiative of the University Medical Center Göttingen and the Max-Planck-Society, Göttingen, Germany
- International Max Planck Research School and Göttingen Graduate School for Neurosciences, Biophysics, and Molecular Biosciences (GGNB) at the University of Göttingen, Göttingen, Germany
| | - Adrian Palacios-Muñoz
- European Neuroscience Institute - A Joint Initiative of the University Medical Center Göttingen and the Max-Planck-Society, Göttingen, Germany
- International Max Planck Research School and Göttingen Graduate School for Neurosciences, Biophysics, and Molecular Biosciences (GGNB) at the University of Göttingen, Göttingen, Germany
| | - Julian R Rottschäfer
- European Neuroscience Institute - A Joint Initiative of the University Medical Center Göttingen and the Max-Planck-Society, Göttingen, Germany
- International Max Planck Research School and Göttingen Graduate School for Neurosciences, Biophysics, and Molecular Biosciences (GGNB) at the University of Göttingen, Göttingen, Germany
| | - Deniz Yuezak
- European Neuroscience Institute - A Joint Initiative of the University Medical Center Göttingen and the Max-Planck-Society, Göttingen, Germany
- International Max Planck Research School and Göttingen Graduate School for Neurosciences, Biophysics, and Molecular Biosciences (GGNB) at the University of Göttingen, Göttingen, Germany
| | - Jan Clemens
- European Neuroscience Institute - A Joint Initiative of the University Medical Center Göttingen and the Max-Planck-SocietyGöttingenGermany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
| |
Collapse
|
33
|
Goffinet J, Brudner S, Mooney R, Pearson J. Low-dimensional learned feature spaces quantify individual and group differences in vocal repertoires. eLife 2021; 10:e67855. [PMID: 33988503 PMCID: PMC8213406 DOI: 10.7554/elife.67855] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Accepted: 05/12/2021] [Indexed: 11/16/2022] Open
Abstract
Increases in the scale and complexity of behavioral data pose an increasing challenge for data analysis. A common strategy involves replacing entire behaviors with small numbers of handpicked, domain-specific features, but this approach suffers from several crucial limitations. For example, handpicked features may miss important dimensions of variability, and correlations among them complicate statistical testing. Here, by contrast, we apply the variational autoencoder (VAE), an unsupervised learning method, to learn features directly from data and quantify the vocal behavior of two model species: the laboratory mouse and the zebra finch. The VAE converges on a parsimonious representation that outperforms handpicked features on a variety of common analysis tasks, enables the measurement of moment-by-moment vocal variability on the timescale of tens of milliseconds in the zebra finch, provides strong evidence that mouse ultrasonic vocalizations do not cluster as is commonly believed, and captures the similarity of tutor and pupil birdsong with qualitatively higher fidelity than previous approaches. In all, we demonstrate the utility of modern unsupervised learning approaches to the quantification of complex and high-dimensional vocal behavior.
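As a minimal illustration of the approach, learning a low-dimensional latent feature space directly from spectrograms with a VAE, here is a small fully connected sketch; the published model is convolutional, and all names and sizes below are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectrogramVAE(nn.Module):
    """Toy fully connected VAE over flattened fixed-size spectrogram
    patches; encodes each vocalization as a low-dimensional latent z."""
    def __init__(self, n_pixels=128 * 128, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_pixels, 512), nn.ReLU())
        self.mu = nn.Linear(512, z_dim)
        self.logvar = nn.Linear(512, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 512), nn.ReLU(),
                                 nn.Linear(512, n_pixels))

    def forward(self, x):                      # x: (batch, n_pixels)
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    """Reconstruction error plus KL divergence to the unit Gaussian."""
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```

After training, the latent means serve as learned features: distances and trajectories in z-space can stand in for the handpicked acoustic features the abstract argues against.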
Collapse
Affiliation(s)
- Jack Goffinet
- Department of Computer Science, Duke University, Durham, United States
- Center for Cognitive Neurobiology, Duke University, Durham, United States
- Department of Neurobiology, Duke University, Durham, United States
| | - Samuel Brudner
- Department of Neurobiology, Duke University, Durham, United States
| | - Richard Mooney
- Department of Neurobiology, Duke University, Durham, United States
| | - John Pearson
- Center for Cognitive Neurobiology, Duke University, Durham, United States
- Department of Neurobiology, Duke University, Durham, United States
- Department of Biostatistics & Bioinformatics, Duke University, Durham, United States
- Department of Electrical and Computer Engineering, Duke University, Durham, United States
| |
Collapse
|