1. Manikandan V, Neethirajan S. Decoding Poultry Welfare from Sound: A Machine Learning Framework for Non-Invasive Acoustic Monitoring. Sensors (Basel) 2025; 25:2912. PMID: 40363349; PMCID: PMC12074417; DOI: 10.3390/s25092912.
Abstract
Acoustic monitoring presents a promising, non-invasive modality for assessing animal welfare in precision livestock farming. In poultry, vocalizations encode biologically relevant cues linked to health status, behavioral states, and environmental stress. This study proposes an integrated analytical framework that combines signal-level statistical analysis with machine learning and deep learning classifiers to interpret chicken vocalizations in a welfare assessment context. The framework was evaluated using three complementary datasets encompassing health-related vocalizations, behavioral call types, and stress-induced acoustic responses. The pipeline employs a multistage process comprising high-fidelity signal acquisition, feature extraction (e.g., mel-frequency cepstral coefficients, spectral contrast, zero-crossing rate), and classification using models including Random Forest, HistGradientBoosting, CatBoost, TabNet, and LSTM. Feature importance analysis and statistical tests (e.g., t-tests, correlation metrics) confirmed that specific MFCC bands and spectral descriptors were significantly associated with welfare indicators. LSTM-based temporal modeling revealed distinct acoustic trajectories under visual and auditory stress, supporting the presence of habituation and stressor-specific vocal adaptations over time. Model performance, validated through stratified cross-validation and multiple statistical metrics (e.g., F1-score, Matthews correlation coefficient), demonstrated high classification accuracy and generalizability. Importantly, the approach emphasizes model interpretability, facilitating alignment with known physiological and behavioral processes in poultry. The findings underscore the potential of acoustic sensing and interpretable AI as scalable, biologically grounded tools for real-time poultry welfare monitoring, contributing to the advancement of sustainable and ethical livestock production systems.
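The descriptors named in this abstract (MFCCs, spectral contrast, zero-crossing rate) are standard bioacoustic features rather than anything unique to the paper. As a rough, self-contained sketch of the simplest of them, not the authors' pipeline, the zero-crossing rate can be computed per frame with NumPy alone; the signals below are synthetic stand-ins:

```python
import numpy as np

def zero_crossing_rate(frame: np.ndarray) -> float:
    """Fraction of adjacent-sample pairs whose sign differs."""
    signs = np.signbit(frame)
    return float(np.mean(signs[1:] != signs[:-1]))

def frame_signal(x: np.ndarray, frame_len: int, hop: int) -> np.ndarray:
    """Slice a 1-D signal into overlapping frames (no padding)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

# A broadband (noisy) call crosses zero far more often than a low-frequency tone.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 100 * t)                    # 100 Hz tone, 1 s
noise = np.random.default_rng(0).standard_normal(sr)  # white-noise stand-in
zcr_tone = np.mean([zero_crossing_rate(f) for f in frame_signal(tone, 512, 256)])
zcr_noise = np.mean([zero_crossing_rate(f) for f in frame_signal(noise, 512, 256)])
```

Averaged over frames, such scalars form one row of the feature table that classifiers like Random Forest or CatBoost would then consume.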
Affiliation(s)
- Suresh Neethirajan
- Faculty of Computer Science, Dalhousie University, Halifax, NS B3H 4R2, Canada
- Faculty of Agriculture, Dalhousie University, Halifax, NS B3H 4R2, Canada
2. Malone CA, Ziobro P, Khinno J, Tschida KA. Rates of female mouse ultrasonic vocalizations are low and are not modulated by estrous state during interactions with muted males. Sci Rep 2025; 15:6841. PMID: 40000725; PMCID: PMC11862114; DOI: 10.1038/s41598-025-91479-x.
Abstract
Adult male mice produce high rates of ultrasonic vocalizations (USVs) during courtship interactions with females. It was long thought that only males produced courtship USVs, but recent studies using microphone arrays to assign USVs to individual signalers report that females produce a portion (5-18%) of total courtship USVs. The factors that regulate female courtship USV production are poorly understood. Here, we tested the idea that female courtship USV production is regulated by estrous state. To facilitate the detection of female USVs, we paired females with males that were muted for USV production via caspase-mediated ablation of midbrain neurons that are required for USV production. We report that total USVs recorded during interactions between group-housed B6 females and muted males are low and are not modulated by female estrous state. Similar results were obtained for single-housed B6 females and for single-housed outbred wild-derived female mice paired with muted males. These findings suggest either that female mice produce substantial rates of courtship USVs only when interacting with vocal male partners or that prior studies have overestimated female courtship USV production. Studies employing methods that can unambiguously assign USVs to individual signalers, regardless of inter-mouse distances, are needed to distinguish between these possibilities.
Affiliation(s)
- Cassidy A Malone
- Department of Psychology, Cornell University, Ithaca, NY, 14853, USA
- Patryk Ziobro
- Department of Psychology, Cornell University, Ithaca, NY, 14853, USA
- Julia Khinno
- Department of Psychology, Cornell University, Ithaca, NY, 14853, USA
3. Sharif A, Matsumoto J, Choijiljav C, Badarch A, Setogawa T, Nishijo H, Nishimaru H. Characterization of Ultrasonic Vocalization-Modulated Neurons in Rat Motor Cortex Based on Their Activity Modulation and Axonal Projection to the Periaqueductal Gray. eNeuro 2024; 11:ENEURO.0452-23.2024. PMID: 38490744; PMCID: PMC10988357; DOI: 10.1523/eneuro.0452-23.2024.
Abstract
Vocalization, a means of social communication, is prevalent among many species, including humans. Both rats and mice use ultrasonic vocalizations (USVs) in various social contexts and affective states. The motor cortex is hypothesized to be involved in precisely controlling USVs through connections with critical regions of the brain for vocalization, such as the periaqueductal gray matter (PAG). However, it is unclear how neurons in the motor cortex are modulated during USVs. Moreover, the relationship between USV modulation of neurons and anatomical connections from the motor cortex to PAG is also not clearly understood. In this study, we first characterized the activity patterns of neurons in the primary and secondary motor cortices during emission of USVs in rats using large-scale electrophysiological recordings. We also examined the axonal projection of the motor cortex to PAG using retrograde labeling and identified two clusters of PAG-projecting neurons in the anterior and posterior parts of the motor cortex. The neural activity patterns around the emission of USVs differed between the anterior and posterior regions, which were divided based on the distribution of PAG-projecting neurons in the motor cortex. Furthermore, using optogenetic tagging, we recorded the USV modulation of PAG-projecting neurons in the posterior part of the motor cortex and found that they showed predominantly sustained excitatory responses during USVs. These results contribute to our understanding of the involvement of the motor cortex in the generation of USV at the neuronal and circuit levels.
Affiliation(s)
- Aamir Sharif
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan
- Jumpei Matsumoto
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama 930-0194, Japan
- Chinzorig Choijiljav
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan
- Amarbayasgalant Badarch
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan
- Tsuyoshi Setogawa
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama 930-0194, Japan
- Hisao Nishijo
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama 930-0194, Japan
- Department of Sport and Health Sciences, Faculty of Human Sciences, University of East Asia, Shimonoseki 751-0807, Japan
- Hiroshi Nishimaru
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama 930-0194, Japan
4. Beck J, Wernisch B, Klaus T, Penn DJ, Zala SM. Attraction of female house mice to male ultrasonic courtship vocalizations depends on their social experience and estrous stage. PLoS One 2023; 18:e0285642. PMID: 37816035; PMCID: PMC10564145; DOI: 10.1371/journal.pone.0285642.
Abstract
Male house mice (Mus musculus) produce complex ultrasonic vocalizations (USVs), especially during courtship and mating. Playback experiments suggest that female attraction towards recordings of male USVs depends on their social experience, paternal exposure, and estrous stage. We conducted a playback experiment with wild-derived female house mice (M. musculus musculus) and compared their attraction to male USVs versus the same recording without USVs (background noise). We tested whether female attraction to USVs is influenced by the following factors: (1) social housing (two versus one female per cage); (2) neonatal paternal exposure (rearing females with versus without father); and (3) estrous stage. We found that females showed a significant attraction to male USVs but only when they were housed socially with another female. Individually housed females showed the opposite response. We found no evidence that pre-weaning exposure to a father influenced females' preferences, whereas estrous stage influenced females' attraction to male USVs: females not in estrus showed preferences towards male USVs, whereas estrous females did not. Finally, we found that individually housed females were more likely to be in sexually receptive estrous stages than those housed socially, and that attraction to male USVs was most pronounced amongst non-receptive females that were socially housed. Our findings indicate that the attraction of female mice to male USVs depends upon their social experience and estrous stage, though not paternal exposure. They contribute to the growing number of studies showing that social housing and estrous stage can influence the behavior of house mice and we show how such unreported variables can contribute to the replication crisis.
Affiliation(s)
- Jakob Beck
- Department of Interdisciplinary Life Sciences, Konrad Lorenz Institute of Ethology, University of Veterinary Medicine Vienna, Vienna, Austria
- Bettina Wernisch
- Department of Interdisciplinary Life Sciences, Konrad Lorenz Institute of Ethology, University of Veterinary Medicine Vienna, Vienna, Austria
- Teresa Klaus
- Department of Interdisciplinary Life Sciences, Konrad Lorenz Institute of Ethology, University of Veterinary Medicine Vienna, Vienna, Austria
- Dustin J. Penn
- Department of Interdisciplinary Life Sciences, Konrad Lorenz Institute of Ethology, University of Veterinary Medicine Vienna, Vienna, Austria
- Sarah M. Zala
- Department of Interdisciplinary Life Sciences, Konrad Lorenz Institute of Ethology, University of Veterinary Medicine Vienna, Vienna, Austria
5. Phaniraj N, Wierucka K, Zürcher Y, Burkart JM. Who is calling? Optimizing source identification from marmoset vocalizations with hierarchical machine learning classifiers. J R Soc Interface 2023; 20:20230399. PMID: 37848054; PMCID: PMC10581777; DOI: 10.1098/rsif.2023.0399.
Abstract
With their highly social nature and complex vocal communication system, marmosets are important models for comparative studies of vocal communication and, eventually, language evolution. However, our knowledge about marmoset vocalizations predominantly originates from playback studies or vocal interactions between dyads, and there is a need to move towards studying group-level communication dynamics. Efficient source identification from marmoset vocalizations is essential for this challenge, and machine learning algorithms (MLAs) can aid it. Here we built a pipeline capable of plentiful feature extraction, meaningful feature selection, and supervised classification of vocalizations of up to 18 marmosets. We optimized the classifier by building a hierarchical MLA that first learned to determine the sex of the source, narrowed down the possible source individuals based on their sex, and then determined the source identity. We were able to correctly identify the source individual with high precision (87.21-94.42%, depending on call type, and up to 97.79% after the removal of twins from the dataset). We also examined the robustness of identification across varying sample sizes. Our pipeline is a promising tool not only for source identification from marmoset vocalizations but also for analysing vocalizations of other species.
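The hierarchical scheme the abstract describes (predict sex first, then identity within that sex) can be sketched with a toy nearest-centroid classifier. Everything below, the two-feature data, the class names, and the centroid classifier itself, is invented for illustration and is not the paper's feature set or model:

```python
import numpy as np

class NearestCentroid:
    """Minimal nearest-centroid classifier (illustrative, not the paper's MLA)."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = np.stack([X[np.array(y) == l].mean(axis=0) for l in self.labels])
        return self
    def predict_one(self, x):
        d = np.linalg.norm(self.centroids - x, axis=1)
        return self.labels[int(np.argmin(d))]

class HierarchicalID:
    """Stage 1 predicts the caller's sex; stage 2 routes to a per-sex identity model."""
    def fit(self, X, sexes, ids):
        self.sex_clf = NearestCentroid().fit(X, sexes)
        self.id_clfs = {}
        for s in set(sexes):
            mask = np.array(sexes) == s
            self.id_clfs[s] = NearestCentroid().fit(X[mask], [i for i, m in zip(ids, mask) if m])
        return self
    def predict_one(self, x):
        return self.id_clfs[self.sex_clf.predict_one(x)].predict_one(x)

# Toy acoustic features: dimension 0 separates sex, dimension 1 separates individuals.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]] * 3)
sexes = ["f", "f", "m", "m"] * 3
ids = ["f1", "f2", "m1", "m2"] * 3
model = HierarchicalID().fit(X, sexes, ids)
```

Routing through the sex stage halves the candidate set before identity is decided, which is the intuition behind the reported gain over a flat classifier.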
Affiliation(s)
- Nikhil Phaniraj
- Institute of Evolutionary Anthropology (IEA), University of Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Department of Biology, Indian Institute of Science Education and Research (IISER) Pune, Dr. Homi Bhabha Road, Pune 411008, India
- Kaja Wierucka
- Institute of Evolutionary Anthropology (IEA), University of Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Behavioral Ecology & Sociobiology Unit, German Primate Center, Leibniz Institute for Primate Research, Kellnerweg 4, 37077 Göttingen, Germany
- Yvonne Zürcher
- Institute of Evolutionary Anthropology (IEA), University of Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Judith M. Burkart
- Institute of Evolutionary Anthropology (IEA), University of Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
6. Gan-Or B, London M. Cortical circuits modulate mouse social vocalizations. Sci Adv 2023; 9:eade6992. PMID: 37774030; PMCID: PMC10541007; DOI: 10.1126/sciadv.ade6992.
Abstract
Vocalizations provide a means of communication with high fidelity and information rate for many species. Diencephalon and brainstem neural circuits have been shown to control mouse vocal production; however, the role of cortical circuits in this process is debatable. Using electrical and optogenetic stimulation, we identified a cortical region in the anterior cingulate cortex in which stimulation elicits ultrasonic vocalizations. Moreover, fiber photometry showed an increase in Ca2+ dynamics preceding vocal initiation, whereas optogenetic suppression in this cortical area caused mice to emit fewer vocalizations. Last, electrophysiological recordings indicated a differential increase in neural activity in response to female social exposure dependent on vocal output. Together, these results indicate that the cortex is a key node in the neuronal circuits controlling vocal behavior in mice.
Affiliation(s)
- Benjamin Gan-Or
- Edmond and Lily Safra Center for Brain Sciences and Alexander Silberman Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
7. Agarwalla S, De A, Bandyopadhyay S. Predictive Mouse Ultrasonic Vocalization Sequences: Uncovering Behavioral Significance, Auditory Cortex Neuronal Preferences, and Social-Experience-Driven Plasticity. J Neurosci 2023; 43:6141-6163. PMID: 37541836; PMCID: PMC10476644; DOI: 10.1523/jneurosci.2353-22.2023.
Abstract
Mouse ultrasonic vocalizations (USVs) contain predictable sequential structures, like bird songs and speech. Neural representation of USVs in the mouse primary auditory cortex (Au1) and its plasticity with experience have largely been studied with single syllables or dyads, without using the predictability in USV sequences. Studies using playback of USV sequences have used randomly selected sequences from numerous possibilities. The current study uses mutual information to obtain context-specific natural sequences (NSeqs) of USV syllables, capturing the observed predictability in male USVs in different contexts of social interaction with females. The behavioral and physiological significance of NSeqs over random sequences (RSeqs) lacking predictability was examined. Female mice, never having had the social experience of being exposed to males, showed higher selectivity for NSeqs behaviorally and at the cellular level, probed by expression of the immediate early gene c-fos in Au1. Au1 supragranular single units also showed higher selectivity for NSeqs over RSeqs. Social-experience-driven plasticity in encoding NSeqs and RSeqs in adult females was probed by examining neural selectivities to the same sequences before and after the above social experience. Single units showed enhanced selectivity for NSeqs over RSeqs after the social experience. Further, using two-photon Ca2+ imaging, we observed social-experience-dependent changes in the sequence selectivity of excitatory and somatostatin-positive inhibitory neurons, but not parvalbumin-positive inhibitory neurons, of Au1. Using optogenetics, somatostatin-positive neurons were identified as a possible mediator of the observed social-experience-driven plasticity. Our study uncovers the importance of predictive sequences and introduces mouse USVs as a promising model for studying context-dependent, speech-like communication.

SIGNIFICANCE STATEMENT: Humans need to detect patterns in the sensory world. For instance, speech consists of meaningful sequences of acoustic tokens that are easily differentiated from randomly ordered tokens; the structure derives from the predictability of the tokens. Similarly, mouse vocalization sequences have predictability and undergo context-dependent modulation. Our work investigated whether mice differentiate such informative predictable sequences (NSeqs) of communicative significance from RSeqs at the behavioral, molecular, and neuronal levels. Following a social experience in which NSeqs occur as a crucial component, mouse auditory cortical neurons become more sensitive to differences between NSeqs and RSeqs, although preference for individual tokens is unchanged. Thus, speech-like communication and its dysfunction may be studied at the circuit, cellular, and molecular levels in mice.
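The mutual information between adjacent syllables, the quantity this study uses to separate predictable from random sequences, can be estimated from bigram counts. This minimal standard-library sketch (not the authors' code) shows the idea on toy token strings; a strictly alternating "song" carries about one bit per transition, while a constant one carries none:

```python
from collections import Counter
from math import log2

def bigram_mutual_information(seq):
    """MI (bits) between adjacent tokens, estimated from bigram frequencies."""
    pairs = list(zip(seq, seq[1:]))
    n = len(pairs)
    p_xy = Counter(pairs)           # joint counts of (token, next token)
    p_x = Counter(x for x, _ in pairs)
    p_y = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in p_xy.items():
        pxy = c / n
        # pxy * n * n / (p_x * p_y) == p(x,y) / (p(x) * p(y))
        mi += pxy * log2(pxy * n * n / (p_x[x] * p_y[y]))
    return mi

mi_alt = bigram_mutual_information(list("ABABABABABAB"))    # fully predictable
mi_const = bigram_mutual_information(list("AAAAAAAAAAAA"))  # no information
```

With real syllable labels, high-MI transitions would mark candidate natural sequences (NSeqs), while shuffling the labels would produce the low-MI random baseline (RSeqs).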
Affiliation(s)
- Swapna Agarwalla
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
- Amiyangshu De
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
- Advanced Technology Development Centre, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
- Sharba Bandyopadhyay
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
- Advanced Technology Development Centre, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
8. Sterling ML, Teunisse R, Englitz B. Rodent ultrasonic vocal interaction resolved with millimeter precision using hybrid beamforming. eLife 2023; 12:e86126. PMID: 37493217; PMCID: PMC10522333; DOI: 10.7554/elife.86126.
Abstract
Ultrasonic vocalizations (USVs) fulfill an important role in communication and navigation in many species. Because of their social and affective significance, rodent USVs are increasingly used as a behavioral measure in neurodevelopmental and neurolinguistic research. Reliably attributing USVs to their emitter during close interactions has emerged as a difficult, key challenge. If addressed, all subsequent analyses gain substantial confidence. We present a hybrid ultrasonic tracking system, Hybrid Vocalization Localizer (HyVL), that synergistically integrates a high-resolution acoustic camera with high-quality ultrasonic microphones. HyVL is the first to achieve millimeter precision (~3.4-4.8 mm, 91% assigned) in localizing USVs, ~3× better than other systems, approaching the physical limits (mouse snout ~10 mm). We analyze mouse courtship interactions and demonstrate that males and females vocalize in starkly different relative spatial positions, and that the fraction of female vocalizations has likely been overestimated previously due to imprecise localization. Further, we find that when two male mice interact with one female, one of the males takes a dominant role in the interaction both in terms of the vocalization rate and the location relative to the female. HyVL substantially improves the precision with which social communication between rodents can be studied. It is also affordable, open-source, easy to set up, can be integrated with existing setups, and reduces the required number of experiments and animals.
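A core ingredient of any microphone-array localizer like the one described here is the time difference of arrival (TDOA) between microphone pairs. The NumPy sketch below is illustrative only, far simpler than HyVL's hybrid beamforming, and recovers a known inter-microphone delay from two synthetic recordings by cross-correlation:

```python
import numpy as np

def tdoa_samples(sig_a: np.ndarray, sig_b: np.ndarray) -> int:
    """Delay of sig_b relative to sig_a, in samples, via full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

rng = np.random.default_rng(1)
call = rng.standard_normal(2000)   # broadband stand-in for a USV chirp
delay = 7                          # ground-truth delay between the two microphones
mic_a = call
mic_b = np.concatenate([np.zeros(delay), call[:-delay]])  # same call, arriving later
est = tdoa_samples(mic_a, mic_b)
```

Multiplying the estimated sample delay by the sampling period and the speed of sound converts it to a path-length difference; at ultrasonic sampling rates (250 kHz and up), single-sample resolution corresponds to roughly a millimeter, which is why such systems can approach snout-scale precision.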
Affiliation(s)
- Max L Sterling
- Computational Neuroscience Lab, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Visual Neuroscience Lab, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Department of Human Genetics, Radboudumc, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Ruben Teunisse
- Computational Neuroscience Lab, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Bernhard Englitz
- Computational Neuroscience Lab, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
9. Arnaud V, Pellegrino F, Keenan S, St-Gelais X, Mathevon N, Levréro F, Coupé C. Improving the workflow to crack Small, Unbalanced, Noisy, but Genuine (SUNG) datasets in bioacoustics: The case of bonobo calls. PLoS Comput Biol 2023; 19:e1010325. PMID: 37053268; PMCID: PMC10129004; DOI: 10.1371/journal.pcbi.1010325.
Abstract
Despite the accumulation of data and studies, deciphering animal vocal communication remains challenging. In most cases, researchers must deal with the sparse recordings composing Small, Unbalanced, Noisy, but Genuine (SUNG) datasets. SUNG datasets are characterized by a limited number of recordings, most often noisy, and unbalanced in number between the individuals or categories of vocalizations. SUNG datasets therefore offer a valuable but inevitably distorted vision of communication systems. Adopting the best practices in their analysis is essential to effectively extract the available information and draw reliable conclusions. Here we show that the most recent advances in machine learning applied to a SUNG dataset succeed in unraveling the complex vocal repertoire of the bonobo, and we propose a workflow that can be effective with other animal species. We implement acoustic parameterization in three feature spaces and run a Supervised Uniform Manifold Approximation and Projection (S-UMAP) to evaluate how call types and individual signatures cluster in the bonobo acoustic space. We then implement three classification algorithms (Support Vector Machine, xgboost, neural networks) and their combination to explore the structure and variability of bonobo calls, as well as the robustness of the individual signature they encode. We underscore how classification performance is affected by the feature set and identify the most informative features. In addition, we highlight the need to address data leakage in the evaluation of classification performance to avoid misleading interpretations. Our results lead to identifying several practical approaches that are generalizable to any other animal communication system. To improve the reliability and replicability of vocal communication studies with SUNG datasets, we thus recommend: i) comparing several acoustic parameterizations; ii) visualizing the dataset with supervised UMAP to examine the species acoustic space; iii) adopting Support Vector Machines as the baseline classification approach; iv) explicitly evaluating data leakage and possibly implementing a mitigation strategy.
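Recommendation iv), mitigating data leakage, typically means holding out whole individuals or recording sessions rather than splitting call-by-call, so that no signaler contributes calls to both train and test sets. A minimal standard-library sketch of such a group-aware split (the call and caller names are invented for illustration):

```python
import random

def group_split(samples, groups, test_frac=0.3, seed=0):
    """Hold out whole groups (e.g. individuals or sessions) so that no group
    contributes samples to both the train and test sets."""
    uniq = sorted(set(groups))
    rng = random.Random(seed)
    rng.shuffle(uniq)
    n_test = max(1, round(test_frac * len(uniq)))
    held_out = set(uniq[:n_test])
    train = [s for s, g in zip(samples, groups) if g not in held_out]
    test = [s for s, g in zip(samples, groups) if g in held_out]
    return train, test, held_out

# Toy data: 10 calls from 4 hypothetical individuals.
calls = [f"call_{i}" for i in range(10)]
callers = ["ind1"] * 3 + ["ind2"] * 3 + ["ind3"] * 2 + ["ind4"] * 2
train, test, held_out = group_split(calls, callers)
```

Evaluating on held-out individuals measures whether the classifier learned the call category rather than memorizing each signaler's voice, which is exactly the leakage failure mode the authors warn about.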
Affiliation(s)
- Vincent Arnaud
- Département des arts, des lettres et du langage, Université du Québec à Chicoutimi, Chicoutimi, Canada
- Laboratoire Dynamique Du Langage, UMR 5596, Université de Lyon, CNRS, Lyon, France
- François Pellegrino
- Laboratoire Dynamique Du Langage, UMR 5596, Université de Lyon, CNRS, Lyon, France
- Sumir Keenan
- ENES Bioacoustics Research Laboratory, University of Saint Étienne, CRNL, CNRS UMR 5292, Inserm UMR_S 1028, Saint-Étienne, France
- Xavier St-Gelais
- Département des arts, des lettres et du langage, Université du Québec à Chicoutimi, Chicoutimi, Canada
- Nicolas Mathevon
- ENES Bioacoustics Research Laboratory, University of Saint Étienne, CRNL, CNRS UMR 5292, Inserm UMR_S 1028, Saint-Étienne, France
- Florence Levréro
- ENES Bioacoustics Research Laboratory, University of Saint Étienne, CRNL, CNRS UMR 5292, Inserm UMR_S 1028, Saint-Étienne, France
- Christophe Coupé
- Laboratoire Dynamique Du Langage, UMR 5596, Université de Lyon, CNRS, Lyon, France
- Department of Linguistics, The University of Hong Kong, Hong Kong, China
10. Oliveira-Stahl G, Farboud S, Sterling ML, Heckman JJ, van Raalte B, Lenferink D, van der Stam A, Smeets CJLM, Fisher SE, Englitz B. High-precision spatial analysis of mouse courtship vocalization behavior reveals sex and strain differences. Sci Rep 2023; 13:5219. PMID: 36997591; PMCID: PMC10063627; DOI: 10.1038/s41598-023-31554-3.
Abstract
Mice display a wide repertoire of vocalizations that varies with sex, strain, and context. Especially during social interaction, including sexually motivated dyadic interaction, mice emit sequences of ultrasonic vocalizations (USVs) of high complexity. As animals of both sexes vocalize, a reliable attribution of USVs to their emitter is essential. The state-of-the-art in sound localization for USVs in 2D allows spatial localization at a resolution of multiple centimeters. However, animals interact at closer ranges, e.g. snout-to-snout. Hence, improved algorithms are required to reliably assign USVs. We present a novel algorithm, SLIM (Sound Localization via Intersecting Manifolds), that achieves a 2-3-fold improvement in accuracy (13.1-14.3 mm) using only 4 microphones and extends to many microphones and localization in 3D. This accuracy allows reliable assignment of 84.3% of all USVs in our dataset. We apply SLIM to courtship interactions between adult C57Bl/6J wildtype mice and those carrying a heterozygous Foxp2 variant (R552H). The improved spatial accuracy reveals that vocalization behavior is dependent on the spatial relation between the interacting mice. Female mice vocalized more in close snout-to-snout interaction while male mice vocalized more when the male snout was in close proximity to the female's ano-genital region. Further, we find that the acoustic properties of the ultrasonic vocalizations (duration, Wiener Entropy, and sound level) are dependent on the spatial relation between the interacting mice as well as on the genotype. In conclusion, the improved attribution of vocalizations to their emitters provides a foundation for better understanding social vocal behaviors.
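Going from inter-microphone time differences to a 2D source position can be illustrated much more naively than SLIM's intersecting-manifold approach: given TDOAs measured against a reference microphone, brute-force search for the candidate position whose predicted TDOAs best match. The microphone geometry, grid, and speed of sound below are invented for illustration:

```python
import numpy as np

mics = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])  # 4 mics (m)
C_SOUND = 343.0  # speed of sound in air, m/s

def tdoas_from(pos):
    """Predicted arrival-time differences (s) at each mic, relative to mic 0."""
    d = np.linalg.norm(mics - pos, axis=1)
    return (d - d[0]) / C_SOUND

def locate(measured, grid_step=0.005):
    """Grid-search the arena for the position minimizing TDOA mismatch."""
    xs = np.arange(0.0, 0.5 + grid_step, grid_step)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            err = np.sum((tdoas_from(np.array([x, y])) - measured) ** 2)
            if err < best_err:
                best, best_err = np.array([x, y]), err
    return best

true_pos = np.array([0.20, 0.35])        # hypothetical snout position
est = locate(tdoas_from(true_pos))       # localize from noiseless TDOAs
```

Real systems refine this with sub-sample delay estimation, measurement noise models, and smarter search, but the residual-minimization principle is the same; accuracy then hinges on how precisely the delays themselves can be measured.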
Affiliation(s)
- Gabriel Oliveira-Stahl
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Soha Farboud
- Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Max L Sterling
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Jesse J Heckman
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Bram van Raalte
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Dionne Lenferink
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Amber van der Stam
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Cleo J L M Smeets
- Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Simon E Fisher
- Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Bernhard Englitz
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
11. Wölfl S, Zala SM, Penn DJ. Male scent but not courtship vocalizations induce estrus in wild female house mice. Physiol Behav 2023; 259:114053. PMID: 36502894; DOI: 10.1016/j.physbeh.2022.114053.
Abstract
Exposure to males or male urinary scent can induce and accelerate the rate of female estrous cycling in house mice ("Whitten effect"), and this response has been replicated many times since its discovery over 60 years ago. Here, we tested whether exposing female mice to recordings of male courtship ultrasonic vocalizations (USVs) induces estrous cycling, and whether exposure to both male scent and USVs has a stronger effect than either stimulus alone. We conducted our study with 60 wild-derived female house mice (Mus musculus musculus). After singly housing females for 14 days, we monitored estrous stages via vaginal cytology for two weeks while the females were isolated from males or male stimuli. We continued monitoring estrus for two more weeks during experimental exposure to one of four different types of stimuli: (1) clean bedding and background noise playback (negative control); (2) recordings of male USVs (16 min per day) and clean bedding (male USV treatment); (3) soiled male bedding and background noise playback (male odor treatment; positive control); or (4) male USVs and soiled male bedding (male odor and USV treatment). Females were then paired with males to test whether any of the four treatments influenced female reproduction (especially latency to birth). We confirmed that exposure to male odor increased female cycling, as expected, but exposure to recordings of male USVs had no effect on estrus. Females exposed to both USVs and odor went through more cycles compared to controls, but did not differ significantly from females exposed to male odor (and background noise). After pairing females with a male, females showing male odor-induced cycling produced their first litter sooner than controls, whereas USVs had no such effect. This is, to our knowledge, the first study to show that male odor induces estrus in wild house mice and to show functional effects on reproduction. Our results do not support the hypothesis that male vocalizations induce female estrus, although we suggest other approaches that could be used to further test this hypothesis.
Affiliation(s)
- Simon Wölfl
- Konrad Lorenz Institute of Ethology, University of Veterinary Medicine Vienna, Savoyenstrasse 1a, 1160 Vienna, Austria
- Sarah M Zala
- Konrad Lorenz Institute of Ethology, University of Veterinary Medicine Vienna, Savoyenstrasse 1a, 1160 Vienna, Austria
- Dustin J Penn
- Konrad Lorenz Institute of Ethology, University of Veterinary Medicine Vienna, Savoyenstrasse 1a, 1160 Vienna, Austria
12
Hood KE, Long E, Navarro E, Hurley LM. Playback of broadband vocalizations of female mice suppresses male ultrasonic calls. PLoS One 2023; 18:e0273742. [PMID: 36603000 PMCID: PMC9815654 DOI: 10.1371/journal.pone.0273742]
Abstract
Although male vocalizations during opposite-sex interactions have been heavily studied as sexually selected signals, the roles of female vocal signals produced in this context are less well understood. During intersexual interactions between mice, males produce the majority of ultrasonic vocalizations (USVs), while females produce the majority of human-audible squeaks, also called broadband vocalizations (BBVs). BBVs may be produced in conjunction with defensive aggression, making it difficult to assess whether males respond to BBVs themselves. To assess the direct effect of BBVs on male behavior, we used a split-cage paradigm in which high rates of male USVs were elicited by female presence on the other side of a barrier, but which precluded extensive male-female contact and the spontaneous production of BBVs. In this paradigm, playback of female BBVs decreased USV production, which recovered after the playback period. A decrease in response to BBV playback was also seen in trials in which female vocalizations were prevented by using female bedding alone or anesthetized females as stimuli. No non-vocal behaviors declined during playback, although digging behavior increased. Similar to BBVs, white noise stimuli (WNs) also robustly suppressed USV production, albeit to a significantly larger extent. USV suppression had two distinct temporal components: when grouped in 5-second bins, USVs interleaved with bursts of stimulus BBVs, and suppression also adapted to BBV playback on the order of minutes. Adaptation occurred more rapidly in males that had been housed individually rather than socially for a week prior to testing, suggesting that the adaptation trajectory is sensitive to social experience. These findings suggest that vocal interaction between male and female mice, with males suppressing USVs in response to BBVs, may influence the dynamics of communicative behavior.
Affiliation(s)
- Kayleigh E. Hood
- Department of Biology, Indiana University, Bloomington, Indiana, United States of America
- Center for the Integrative Study of Animal Behavior, Indiana University, Bloomington, Indiana, United States of America
- Eden Long
- Department of Biology, Indiana University, Bloomington, Indiana, United States of America
- Eric Navarro
- Department of Biology, Indiana University, Bloomington, Indiana, United States of America
- Center for the Integrative Study of Animal Behavior, Indiana University, Bloomington, Indiana, United States of America
- Laura M. Hurley
- Department of Biology, Indiana University, Bloomington, Indiana, United States of America
- Center for the Integrative Study of Animal Behavior, Indiana University, Bloomington, Indiana, United States of America
13
Jabarin R, Netser S, Wagner S. Beyond the three-chamber test: toward a multimodal and objective assessment of social behavior in rodents. Mol Autism 2022; 13:41. [PMID: 36284353 PMCID: PMC9598038 DOI: 10.1186/s13229-022-00521-6]
Abstract
MAIN: In recent years, substantial advances in social neuroscience have been realized, including the generation of numerous rodent models of autism spectrum disorder. Still, it can be argued that the methods currently used to analyze animal social behavior create a bottleneck that significantly slows progress in this field. Indeed, the bulk of research still relies on a small number of simple behavioral paradigms, the results of which are assessed without considering behavioral dynamics. Moreover, only a few variables are examined in each paradigm, thus overlooking much of the complexity that characterizes social interaction between two conspecifics and hindering our understanding of the neural mechanisms governing different aspects of social behavior. We further demonstrate these constraints by discussing the most commonly used paradigm for assessing rodent social behavior, the three-chamber test. We also point out that although emotions greatly influence human social behavior, we lack reliable means for assessing the emotional state of animals during social tasks. As such, we also discuss current evidence supporting the existence of pro-social emotions and emotional cognition in animal models. We further suggest that adequate social behavior analysis requires a novel multimodal approach that employs automated and simultaneous measurements of multiple behavioral and physiological variables at high temporal resolution in socially interacting animals. We accordingly describe several computerized systems and computational tools for acquiring and analyzing such measurements. Finally, we address several behavioral and physiological variables that can be used to assess socio-emotional states in animal models and thus elucidate intricacies of social behavior, so as to attain deeper insight into the brain mechanisms that mediate such behaviors.
CONCLUSIONS: In summary, we suggest that combining automated multimodal measurements with machine-learning algorithms will help define socio-emotional states and determine their dynamics during various types of social tasks, thus enabling a more thorough understanding of the complexity of social behavior.
Affiliation(s)
- Renad Jabarin
- Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel
- Shai Netser
- Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel
- Shlomo Wagner
- Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel
14
Premoli M, Petroni V, Bulthuis R, Bonini SA, Pietropaolo S. Ultrasonic Vocalizations in Adult C57BL/6J Mice: The Role of Sex Differences and Repeated Testing. Front Behav Neurosci 2022; 16:883353. [PMID: 35910678 PMCID: PMC9330122 DOI: 10.3389/fnbeh.2022.883353]
Abstract
Ultrasonic vocalizations (USVs) are a major tool for assessing social communication in laboratory mice across their entire lifespan. At adulthood, male mice preferentially emit USVs toward a female conspecific, while females mostly produce ultrasonic calls when facing an adult intruder of the same sex. Recent studies have developed several sophisticated tools to analyze adult mouse USVs, especially in males, because of the increasing relevance of adult communication for behavioral phenotyping of mouse models of autism spectrum disorder (ASD). Little attention has instead been devoted to adult female USVs and to the impact of sex differences on the quantitative and qualitative characteristics of mouse USVs. Most studies have also focused on a single testing session, often without concomitant assessment of other social behaviors (e.g., sniffing), so little is known about the link between USVs and other aspects of social interaction, or about their stability and variation across multiple encounters. Here, we evaluated the USVs emitted by adult male and female mice during three repeated encounters with an unfamiliar female, with equal or different pre-testing isolation periods between sexes. We demonstrated clear sex differences in several USV characteristics and other social behaviors, and these were mostly stable across the encounters and independent of pre-testing isolation. The estrous cycle of the tested females exerted quantitative effects on their vocal and non-vocal behaviors, although it did not affect the qualitative composition of ultrasonic calls. Our findings, obtained in B6 mice, i.e., the strain most widely used for engineering transgenic mouse lines, contribute new guidelines for assessing ultrasonic communication in male and female adult mice.
Affiliation(s)
- Marika Premoli
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
- Sara Anna Bonini
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
15
Pessoa D, Petrella L, Martins P, Castelo-Branco M, Teixeira C. Automatic segmentation and classification of mice ultrasonic vocalizations. J Acoust Soc Am 2022; 152:266. [PMID: 35931540 DOI: 10.1121/10.0012350]
Abstract
This paper addresses the development of a system for classifying mouse ultrasonic vocalizations (USVs) present in audio recordings. The automatic labeling process for USVs is usually divided into two main steps: USV segmentation followed by classification of the detected segments. Three main contributions can be highlighted: (i) a new segmentation algorithm, (ii) a new set of features, and (iii) the discrimination of a higher number of classes than in similar studies. The developed segmentation algorithm is based on spectral entropy analysis. This novel segmentation approach detects USVs with 94% recall and 74% precision; compared to other methods/software, it achieves a higher recall. Regarding the classification phase, besides the traditional features from the time, frequency, and time-frequency domains, a new set of contour-based features was extracted and used as inputs to shallow machine learning classification models. The contour-based features were obtained from the time-frequency ridge representation of USVs. The classification methods can differentiate among ten different syllable types with 81.1% accuracy and an 80.5% weighted F1-score. The algorithms were developed and evaluated on a large dataset, acquired under diverse social interaction conditions between the animals to stimulate a varied vocal repertoire.
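The spectral-entropy idea behind this kind of segmentation can be illustrated with a minimal, self-contained sketch (plain Python, synthetic signals; the frame length and signals are illustrative assumptions, not the paper's actual parameters): a tonal call concentrates energy in a few frequency bins, so its spectral entropy is lower than that of broadband background noise, and thresholding entropy per frame separates the two.

```python
import cmath
import math
import random

def spectral_entropy(frame):
    """Shannon entropy (bits) of the normalized magnitude spectrum of one frame."""
    n = len(frame)
    # Naive DFT magnitude spectrum; fine for short frames in a sketch.
    mags = []
    for k in range(n // 2):
        s = sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        mags.append(abs(s))
    total = sum(mags) or 1.0
    probs = [m / total for m in mags]
    return -sum(p * math.log2(p) for p in probs if p > 0)

random.seed(0)
n = 64
tone = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]   # narrowband "call" frame
noise = [random.uniform(-1, 1) for _ in range(n)]              # broadband background frame

# Tonal frames score far lower than noise frames, so an entropy
# threshold between the two separates call from background.
print(spectral_entropy(tone) < spectral_entropy(noise))  # → True
```

A real detector would of course use an FFT over sliding windows and calibrate the threshold on labeled data; the point here is only the entropy contrast that makes the method work.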
Affiliation(s)
- Diogo Pessoa
- University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
- Lorena Petrella
- University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
- Pedro Martins
- University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
- Miguel Castelo-Branco
- University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
- César Teixeira
- University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
16
Caruso A, Marconi MA, Scattoni ML, Ricceri L. Ultrasonic vocalizations in laboratory mice: strain, age, and sex differences. Genes Brain Behav 2022; 21:e12815. [PMID: 35689354 PMCID: PMC9744514 DOI: 10.1111/gbb.12815]
Abstract
Mice produce ultrasonic vocalizations (USVs) in different social contexts across the lifespan. There is ethological evidence that pup USVs elicit maternal retrieval and that adult USVs facilitate social interaction with a conspecific. Analysis of the mouse vocal and social repertoire across strains, sexes, and contexts remains underexplored. To address these issues, in inbred (C57BL/6, FVB) and outbred (CD-1) mouse strains, we recorded and evaluated USVs in neonates and during adult social encounters (male-female and female-female social interaction). We found significant strain differences in both quantitative (call rate and USV duration) and qualitative (spectrographic characterization) vocal analyses from the early stage to adulthood, in line with specific patterns of social behaviors. Inbred C57BL/6 mice produced fewer calls with fewer internal changes and shorter duration; inbred FVB mice displayed more social behaviors and produced more syllables with repeated internal changes; outbred CD-1 mice had an intermediate profile. Our results suggest specific vocal signatures in each mouse strain, helping to better define the socio-communicative profiles of mouse strains and to guide the choice of an appropriate strain for a given experimental setting.
Affiliation(s)
- Angela Caruso
- Research Coordination and Support Service, Istituto Superiore di Sanità, Rome, Italy
- Maria Adelaide Marconi
- Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
- Laura Ricceri
- Center for Behavioral Sciences and Mental Health, Istituto Superiore di Sanità, Rome, Italy
17
Sahu PK, Campbell KA, Oprea A, Phillmore LS, Sturdy CB. Comparing methodologies for classification of zebra finch distance calls. J Acoust Soc Am 2022; 151:3305. [PMID: 35649952 DOI: 10.1121/10.0011401]
Abstract
Bioacoustic analysis has been used for a variety of purposes, including classifying vocalizations for biodiversity monitoring and understanding the mechanisms of cognitive processes. A wide range of statistical methods, including various automated methods, have been used to successfully classify vocalizations based on species, sex, geography, and individual. A comprehensive approach focusing on identifying the acoustic features putatively involved in classification is required to predict which features are necessary for discrimination in the real world. Here, we used several classification techniques, namely discriminant function analyses (DFAs), support vector machines (SVMs), and artificial neural networks (ANNs), for sex-based classification of zebra finch (Taeniopygia guttata) distance calls using acoustic features measured from spectrograms. We found that all three methods (DFAs, SVMs, and ANNs) correctly classified the calls into the respective sex-based categories with high accuracy (92-96%). Frequency modulation of the ascending frequency, total duration, and end frequency of the distance call were the most predictive features underlying this classification in all of our models. Our results corroborate evidence of the importance of total call duration and frequency modulation in the classification of male and female distance calls. Moreover, we provide a methodological approach for bioacoustic classification problems using multiple statistical analyses.
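As a toy illustration of this kind of feature-based call classification (not the paper's actual DFA/SVM/ANN pipeline), the sketch below assigns a call to a sex category by distance to class centroids over the kinds of features the authors highlight (duration, end frequency, frequency modulation). All feature values and cluster parameters are invented for illustration; a nearest-centroid rule is used as a minimal stand-in for a discriminant function.

```python
import math
import random

def centroid(rows):
    """Per-dimension mean of a list of feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def nearest_centroid_predict(x, centroids):
    """Classify x by Euclidean distance to each class centroid."""
    dists = {label: math.dist(x, c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

random.seed(1)
# Hypothetical feature vectors: [duration_ms, end_freq_kHz, FM_rate]
male = [[180 + random.gauss(0, 10), 3.0 + random.gauss(0, 0.2), 1.2 + random.gauss(0, 0.1)]
        for _ in range(20)]
female = [[120 + random.gauss(0, 10), 4.5 + random.gauss(0, 0.2), 0.6 + random.gauss(0, 0.1)]
          for _ in range(20)]

centroids = {"male": centroid(male), "female": centroid(female)}
test_call = [175, 3.1, 1.15]  # resembles the synthetic "male" cluster
print(nearest_centroid_predict(test_call, centroids))  # → male
```

In practice the features would be standardized first (duration and frequency live on very different scales), which is one reason DFA and SVM, which learn feature weights, outperform a raw distance rule.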
Affiliation(s)
- Prateek K Sahu
- Department of Psychology, University of Alberta, Edmonton, Alberta T6G 2R3, Canada
- Kimberley A Campbell
- Department of Psychology, University of Alberta, Edmonton, Alberta T6G 2R3, Canada
- Alexandra Oprea
- Department of Psychology and Neuroscience, Dalhousie University, Halifax, Nova Scotia B3H 4R2, Canada
- Leslie S Phillmore
- Department of Psychology and Neuroscience, Dalhousie University, Halifax, Nova Scotia B3H 4R2, Canada
- Christopher B Sturdy
- Department of Psychology, University of Alberta, Edmonton, Alberta T6G 2R3, Canada
18
Stowell D. Computational bioacoustics with deep learning: a review and roadmap. PeerJ 2022; 10:e13152. [PMID: 35341043 PMCID: PMC8944344 DOI: 10.7717/peerj.13152]
Abstract
Animal vocalisations and natural soundscapes are fascinating objects of study, and contain valuable evidence about animal behaviours, populations and ecosystems. They are studied in bioacoustics and ecoacoustics, with signal processing and analysis an important component. Computational bioacoustics has accelerated in recent decades due to the growth of affordable digital sound recording devices, and to huge progress in informatics such as big data, signal processing and machine learning. Methods are inherited from the wider field of deep learning, including speech and image processing. However, the tasks, demands and data characteristics are often different from those addressed in speech or music analysis. There remain unsolved problems, and tasks for which evidence is surely present in many acoustic signals, but not yet realised. In this paper I perform a review of the state of the art in deep learning for computational bioacoustics, aiming to clarify key concepts and identify and analyse knowledge gaps. Based on this, I offer a subjective but principled roadmap for computational bioacoustics with deep learning: topics that the community should aim to address, in order to make the most of future developments in AI and informatics, and to use audio data in answering zoological and ecological questions.
Affiliation(s)
- Dan Stowell
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, Tilburg, The Netherlands
- Naturalis Biodiversity Center, Leiden, The Netherlands
19
Qian K, Koike T, Tamada K, Takumi T, Schuller BW, Yamamoto Y. Sensing the Sounds of Silence: A Pilot Study on the Detection of Model Mice of Autism Spectrum Disorder from Ultrasonic Vocalisations. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:68-71. [PMID: 34891241 DOI: 10.1109/embc46164.2021.9630793]
Abstract
Studying animal models of human neuropsychiatric disorders can facilitate understanding of the mechanisms of symptoms, both physiologically and genetically. Previous studies have shown that ultrasonic vocalisations (USVs) of mice may be efficient markers for distinguishing the wild type group from a model of autism spectrum disorder (mASD). Nevertheless, in-depth analysis of these 'silent' sounds leveraging the power of advanced computer audition technologies (e.g., deep learning) is limited. To this end, we propose a pilot study using a large-scale pre-trained audio neural network to extract high-level representations from the USVs of mice for the detection of mASD. Experiments show a best result of 79.2% unweighted average recall for the binary classification task in a rigorous subject-independent scenario. To the best of our knowledge, this is the first attempt to analyse sounds inaudible to human beings for the detection of mASD mice. These novel findings may motivate future work applying similar methods to the study of animal models of human patients.
20
Steinfath E, Palacios-Muñoz A, Rottschäfer JR, Yuezak D, Clemens J. Fast and accurate annotation of acoustic signals with deep neural networks. eLife 2021; 10:e68837. [PMID: 34723794 PMCID: PMC8560090 DOI: 10.7554/elife.68837]
Abstract
Acoustic signals serve communication within and across species throughout the animal kingdom. Studying the genetics, evolution, and neurobiology of acoustic communication requires annotating acoustic signals: segmenting and identifying individual acoustic elements like syllables or sound pulses. To be useful, annotations need to be accurate, robust to noise, and fast. We here introduce Deep Audio Segmenter (DAS), a method that annotates acoustic signals across species based on a deep-learning-derived hierarchical representation of sound. We demonstrate the accuracy, robustness, and speed of DAS using acoustic signals with diverse characteristics from insects, birds, and mammals. DAS comes with a graphical user interface for annotating song, training the network, and generating and proofreading annotations. The method can be trained to annotate signals from new species with little manual annotation and can be combined with unsupervised methods to discover novel signal types. DAS annotates song with high throughput and low latency, enabling experimental interventions in real time. Overall, DAS is a universal, versatile, and accessible tool for annotating acoustic communication signals.
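Annotation tools of this kind typically reduce, at the output stage, to turning frame-wise network probabilities into labeled segments. A minimal sketch of that post-processing step (plain Python; the probabilities, threshold, and minimum-length values are hypothetical, not DAS's actual parameters) might look like:

```python
def segments_from_probs(probs, threshold=0.5, min_len=2):
    """Turn frame-wise 'syllable' probabilities into (start, end) frame segments.

    Frames at or above `threshold` are treated as part of a syllable;
    runs shorter than `min_len` frames are discarded as spurious.
    """
    segments, start = [], None
    for i, p in enumerate(probs):
        if p >= threshold and start is None:
            start = i                       # syllable onset
        elif p < threshold and start is not None:
            if i - start >= min_len:
                segments.append((start, i))  # syllable offset
            start = None
    if start is not None and len(probs) - start >= min_len:
        segments.append((start, len(probs)))  # syllable runs to end of clip
    return segments

# Hypothetical per-frame probabilities from a segmentation network:
probs = [0.1, 0.2, 0.9, 0.95, 0.8, 0.1, 0.6, 0.1, 0.7, 0.9, 0.85, 0.2]
print(segments_from_probs(probs))  # → [(2, 5), (8, 11)]
```

The single-frame blip at index 6 is dropped by the `min_len` filter; converting frame indices to seconds is then just a division by the frame rate.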
Affiliation(s)
- Elsa Steinfath
- European Neuroscience Institute, a Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Göttingen, Germany
- International Max Planck Research School and Göttingen Graduate School for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Göttingen, Göttingen, Germany
- Adrian Palacios-Muñoz
- European Neuroscience Institute, a Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Göttingen, Germany
- International Max Planck Research School and Göttingen Graduate School for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Göttingen, Göttingen, Germany
- Julian R Rottschäfer
- European Neuroscience Institute, a Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Göttingen, Germany
- International Max Planck Research School and Göttingen Graduate School for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Göttingen, Göttingen, Germany
- Deniz Yuezak
- European Neuroscience Institute, a Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Göttingen, Germany
- International Max Planck Research School and Göttingen Graduate School for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Göttingen, Göttingen, Germany
- Jan Clemens
- European Neuroscience Institute, a Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
21
Grieco F, Bernstein BJ, Biemans B, Bikovski L, Burnett CJ, Cushman JD, van Dam EA, Fry SA, Richmond-Hacham B, Homberg JR, Kas MJH, Kessels HW, Koopmans B, Krashes MJ, Krishnan V, Logan S, Loos M, McCann KE, Parduzi Q, Pick CG, Prevot TD, Riedel G, Robinson L, Sadighi M, Smit AB, Sonntag W, Roelofs RF, Tegelenbosch RAJ, Noldus LPJJ. Measuring Behavior in the Home Cage: Study Design, Applications, Challenges, and Perspectives. Front Behav Neurosci 2021; 15:735387. [PMID: 34630052 PMCID: PMC8498589 DOI: 10.3389/fnbeh.2021.735387]
Abstract
The reproducibility crisis (or replication crisis) in biomedical research is a particularly existential and under-addressed issue in behavioral neuroscience, where, in spite of efforts to standardize testing and assay protocols, several known and unknown sources of confounding environmental factors add to variance. Human interference is a major contributor to variability both within and across laboratories, as well as to novelty-induced anxiety. Attempts to reduce human interference and to measure more "natural" behaviors have led to the development of automated home-cage monitoring systems. These systems enable prolonged and longitudinal recordings, and provide continuous measures of spontaneous behavior that can be analyzed across multiple time scales. In this review, a diverse team of neuroscientists and product developers share their experiences using such an automated monitoring system, which combines Noldus PhenoTyper® home-cages with the video-based tracking software EthoVision® XT, to extract digital biomarkers of motor, emotional, social, and cognitive behavior. After presenting our working definition of a "home-cage", we compare home-cage testing with more conventional out-of-cage tests (e.g., the open field) and outline the various advantages of the former, including opportunities for within-subject analyses and assessments of circadian and ultradian activity. Next, we address technical issues pertaining to the acquisition of behavioral data, such as the fine-tuning of the tracking software and the potential for integration with biotelemetry and optogenetics. Finally, we provide guidance on which behavioral measures to emphasize; how to filter, segment, and analyze behavior; and how to use analysis scripts. We summarize how the PhenoTyper can be applied to study neuropharmacology as well as animal models of neurodegenerative and neuropsychiatric illness. Looking forward, we examine current challenges and the impact of new developments, including the automated recognition of specific behaviors, unambiguous tracking of individuals in a social context, the development of more animal-centered measures of behavior, and ways of dealing with large datasets. Together, we advocate that by embracing standardized home-cage monitoring platforms like the PhenoTyper, we are poised to directly assess issues pertaining to reproducibility and, more importantly, to measure features of rodent behavior under more ethologically relevant scenarios.
Affiliation(s)
| | - Briana J Bernstein
- Neurobiology Laboratory, National Institute of Environmental Health Sciences, National Institutes of Health, Research Triangle Park, NC, United States
| | | | - Lior Bikovski
- Myers Neuro-Behavioral Core Facility, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- School of Behavioral Sciences, Netanya Academic College, Netanya, Israel
- C Joseph Burnett
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Jesse D Cushman
- Neurobiology Laboratory, National Institute of Environmental Health Sciences, National Institutes of Health, Research Triangle Park, NC, United States
- Sydney A Fry
- Neurobiology Laboratory, National Institute of Environmental Health Sciences, National Institutes of Health, Research Triangle Park, NC, United States
- Bar Richmond-Hacham
- Department of Anatomy and Anthropology, Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Judith R Homberg
- Department of Cognitive Neuroscience, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, Netherlands
- Martien J H Kas
- Groningen Institute for Evolutionary Life Sciences, University of Groningen, Groningen, Netherlands
- Helmut W Kessels
- Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
- Michael J Krashes
- National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, United States
- Vaishnav Krishnan
- Laboratory of Epilepsy and Emotional Behavior, Baylor Comprehensive Epilepsy Center, Departments of Neurology, Neuroscience, and Psychiatry & Behavioral Sciences, Baylor College of Medicine, Houston, TX, United States
- Sreemathi Logan
- Department of Rehabilitation Sciences, College of Allied Health, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Maarten Loos
- Sylics (Synaptologics BV), Amsterdam, Netherlands
- Katharine E McCann
- Neurobiology Laboratory, National Institute of Environmental Health Sciences, National Institutes of Health, Research Triangle Park, NC, United States
- Chaim G Pick
- Department of Anatomy and Anthropology, Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- The Dr. Miriam and Sheldon G. Adelson Chair and Center for the Biology of Addictive Diseases, Tel Aviv University, Tel Aviv, Israel
- Thomas D Prevot
- Centre for Addiction and Mental Health and Department of Psychiatry, University of Toronto, Toronto, ON, Canada
- Gernot Riedel
- Institute of Medical Sciences, University of Aberdeen, Aberdeen, United Kingdom
- Lianne Robinson
- Institute of Medical Sciences, University of Aberdeen, Aberdeen, United Kingdom
- Mina Sadighi
- Department of Cognitive Neuroscience, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, Netherlands
- August B Smit
- Department of Molecular and Cellular Neurobiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, Netherlands
- William Sonntag
- Department of Biochemistry & Molecular Biology, Center for Geroscience, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Lucas P J J Noldus
- Noldus Information Technology BV, Wageningen, Netherlands
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
22
Goffinet J, Brudner S, Mooney R, Pearson J. Low-dimensional learned feature spaces quantify individual and group differences in vocal repertoires. eLife 2021; 10:e67855. [PMID: 33988503 PMCID: PMC8213406 DOI: 10.7554/elife.67855] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Accepted: 05/12/2021] [Indexed: 11/16/2022] Open
Abstract
Increases in the scale and complexity of behavioral data pose an increasing challenge for data analysis. A common strategy involves replacing entire behaviors with small numbers of handpicked, domain-specific features, but this approach suffers from several crucial limitations. For example, handpicked features may miss important dimensions of variability, and correlations among them complicate statistical testing. Here, by contrast, we apply the variational autoencoder (VAE), an unsupervised learning method, to learn features directly from data and quantify the vocal behavior of two model species: the laboratory mouse and the zebra finch. The VAE converges on a parsimonious representation that outperforms handpicked features on a variety of common analysis tasks, enables the measurement of moment-by-moment vocal variability on the timescale of tens of milliseconds in the zebra finch, provides strong evidence that mouse ultrasonic vocalizations do not cluster as is commonly believed, and captures the similarity of tutor and pupil birdsong with qualitatively higher fidelity than previous approaches. In all, we demonstrate the utility of modern unsupervised learning approaches to the quantification of complex and high-dimensional vocal behavior.
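The core idea of this entry, compressing each vocalization into a small learned latent vector instead of a handful of handpicked features, can be illustrated with a VAE forward pass. The sketch below is a minimal NumPy illustration with untrained random weights and invented layer sizes; it is not the authors' model (which was trained on spectrograms of mouse and zebra finch syllables), but it shows the encode, reparameterize, decode, and ELBO steps the method relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for syllable spectrograms: each row is a flattened
# 16x16 time-frequency patch (256 input dimensions).
X = rng.random((8, 256))

D, H, Z = 256, 64, 8  # input, hidden, latent sizes (illustrative)

# Randomly initialised weights; a real model would fit these by
# maximising the ELBO with gradient descent.
W_enc = rng.normal(0, 0.05, (D, H))
W_mu = rng.normal(0, 0.05, (H, Z))
W_logvar = rng.normal(0, 0.05, (H, Z))
W_dec = rng.normal(0, 0.05, (Z, D))

def encode(x):
    """Map inputs to the mean and log-variance of q(z|x)."""
    h = np.tanh(x @ W_enc)
    return h @ W_mu, h @ W_logvar

def reparameterize(mu, logvar):
    """z = mu + sigma * eps keeps sampling differentiable in training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Reconstruct the input from the latent code, squashed to (0, 1)."""
    return 1 / (1 + np.exp(-(z @ W_dec)))

mu, logvar = encode(X)
z = reparameterize(mu, logvar)       # each call is now an 8-dim vector
x_hat = decode(z)

# ELBO = reconstruction term minus KL(q(z|x) || N(0, I))
recon = -np.sum((X - x_hat) ** 2)
kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
elbo = recon - kl
print(z.shape, x_hat.shape)  # (8, 8) (8, 256)
```

Downstream analyses (clustering, variability, tutor-pupil similarity) then operate on the low-dimensional `z` vectors rather than on handpicked acoustic features.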
Affiliation(s)
- Jack Goffinet
- Department of Computer Science, Duke University, Durham, United States
- Center for Cognitive Neurobiology, Duke University, Durham, United States
- Department of Neurobiology, Duke University, Durham, United States
- Samuel Brudner
- Department of Neurobiology, Duke University, Durham, United States
- Richard Mooney
- Department of Neurobiology, Duke University, Durham, United States
- John Pearson
- Center for Cognitive Neurobiology, Duke University, Durham, United States
- Department of Neurobiology, Duke University, Durham, United States
- Department of Biostatistics & Bioinformatics, Duke University, Durham, United States
- Department of Electrical and Computer Engineering, Duke University, Durham, United States
23
Premoli M, Baggi D, Bianchetti M, Gnutti A, Bondaschi M, Mastinu A, Migliorati P, Signoroni A, Leonardi R, Memo M, Bonini SA. Automatic classification of mice vocalizations using Machine Learning techniques and Convolutional Neural Networks. PLoS One 2021; 16:e0244636. [PMID: 33465075 PMCID: PMC7815145 DOI: 10.1371/journal.pone.0244636] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Accepted: 12/14/2020] [Indexed: 12/03/2022] Open
Abstract
Analysis of ultrasonic vocalizations (USVs) is a well-recognized tool for investigating animal communication and can be used for behavioral phenotyping of murine models of different disorders. USVs are usually recorded with a microphone sensitive to ultrasonic frequencies and analyzed with dedicated software. Different call types exist, and each ultrasonic call can be classified manually, but this qualitative analysis is highly time-consuming. In this context, we proposed and evaluated a set of supervised learning methods for automatic USV classification, providing a standardized and scalable alternative for in-depth analysis of ultrasonic communication. We used manually built datasets obtained by segmenting USV audio tracks with the Avisoft software and labelling each segment into one of 10 representative classes. For the automatic classification task, we designed a Convolutional Neural Network trained on the spectrogram images associated with the segmented audio files. We also tested other supervised learning algorithms, such as Support Vector Machines, Random Forests and Multilayer Perceptrons, exploiting informative numerical features extracted from the spectrograms. The results showed that considering the whole time/frequency information of the spectrogram leads to significantly higher performance than considering a subset of numerical features. In the authors' opinion, the experimental results may represent a valuable benchmark for future work in this research field.
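The contrast this entry draws, classifying whole spectrograms rather than a few numerical features, can be sketched in NumPy. The example below is purely illustrative: the two frequency-sweep "call" classes, the sampling rate, and the window sizes are invented, and a nearest-centroid rule stands in for the paper's CNN; it only demonstrates the spectrogram-as-feature-vector pipeline, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
FS = 250_000  # sampling rate high enough for ultrasonic calls (Hz)

def call(f0, f1, dur=0.02):
    """Synthetic noisy frequency sweep from f0 to f1 Hz."""
    t = np.arange(int(dur * FS)) / FS
    f = f0 + (f1 - f0) * t / dur                    # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(f) / FS           # integrate to get phase
    return np.sin(phase) + 0.05 * rng.standard_normal(t.size)

def spectrogram(x, n=256, hop=128):
    """Magnitude STFT: rows = frames, columns = frequency bins."""
    frames = [x[i:i + n] * np.hanning(n) for i in range(0, x.size - n, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

# Two invented call classes: upsweeps (50->70 kHz) and downsweeps (70->50 kHz).
up = [spectrogram(call(50e3, 70e3)).ravel() for _ in range(5)]
down = [spectrogram(call(70e3, 50e3)).ravel() for _ in range(5)]

# Nearest-centroid on whole-spectrogram vectors: a crude stand-in for the
# CNN's access to the full time/frequency pattern of each call.
c_up, c_down = np.mean(up[:4], axis=0), np.mean(down[:4], axis=0)

def predict(s):
    return "up" if np.linalg.norm(s - c_up) < np.linalg.norm(s - c_down) else "down"

print(predict(up[4]), predict(down[4]))  # classify the held-out calls
```

Replacing the flattened spectrograms with a few scalar summaries (e.g. peak frequency, bandwidth) discards the temporal ordering of energy across frames, which is exactly the information that lets the full-spectrogram classifier separate up- from downsweeps.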
Affiliation(s)
- Marika Premoli
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
- Daniele Baggi
- Department of Information Engineering, University of Brescia, Brescia, Italy
- Marco Bianchetti
- Department of Information Engineering, University of Brescia, Brescia, Italy
- Alessandro Gnutti
- Department of Information Engineering, University of Brescia, Brescia, Italy
- Marco Bondaschi
- Department of Information Engineering, University of Brescia, Brescia, Italy
- Andrea Mastinu
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
- Alberto Signoroni
- Department of Information Engineering, University of Brescia, Brescia, Italy
- Riccardo Leonardi
- Department of Information Engineering, University of Brescia, Brescia, Italy
- Maurizio Memo
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
- Sara Anna Bonini
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy