1
Gencturk S, Unal G. Rodent tests of depression and anxiety: construct validity and translational relevance. Cogn Affect Behav Neurosci 2024; 24:191-224. PMID: 38413466; PMCID: PMC11039509; DOI: 10.3758/s13415-024-01171-2.
Abstract
Behavioral testing constitutes the primary method for measuring the emotional states of nonhuman animals in preclinical research. Emerging as the characteristic tool of the behaviorist school of psychology, behavioral testing of animals, particularly rodents, is employed to understand the complex cognitive and affective symptoms of neuropsychiatric disorders. Following the symptom-based diagnosis model of the DSM, rodent models and tests of depression and anxiety focus on behavioral patterns that resemble the superficial symptoms of these disorders. While these practices have provided researchers with a platform for screening novel antidepressant and anxiolytic drug candidates, their construct validity (i.e., whether they engage the relevant underlying mechanisms) has been questioned. In this review, we present the laboratory procedures used to assess depressive- and anxiety-like behaviors in rats and mice. These include constructs that rely on stress-triggered responses, such as behavioral despair, and those that emerge with nonaversive training, such as cognitive bias. We describe the specific behavioral tests used to assess these constructs and discuss the criticisms of their theoretical background. We review specific concerns about the construct validity and translational relevance of individual behavioral tests, outline the limitations of the traditional, symptom-based interpretation, and introduce novel, ethologically relevant frameworks that emphasize simple behavioral patterns. Finally, we explore behavioral monitoring and morphological analysis methods that can be integrated into behavioral testing and discuss how they can enhance the construct validity of these tests.
Affiliation(s)
- Sinem Gencturk
- Behavioral Neuroscience Laboratory, Department of Psychology, Boğaziçi University, 34342, Istanbul, Turkey
- Gunes Unal
- Behavioral Neuroscience Laboratory, Department of Psychology, Boğaziçi University, 34342, Istanbul, Turkey
2
Baggi D, Premoli M, Gnutti A, Bonini SA, Leonardi R, Memo M, Migliorati P. Extended performance analysis of deep-learning algorithms for mice vocalization segmentation. Sci Rep 2023; 13:11238. PMID: 37433808; DOI: 10.1038/s41598-023-38186-7.
Abstract
Analysis of ultrasonic vocalizations (USVs) is a fundamental tool for studying animal communication. It can be used for behavioral investigation of mice in ethological studies and in neuroscience and neuropharmacology. USVs are usually recorded with a microphone sensitive to ultrasound frequencies and then processed by specific software, which helps the operator identify and characterize different families of calls. Recently, many automated systems have been proposed for both the detection and the classification of USVs. USV segmentation is the crucial step in this framework, since the quality of all subsequent call processing depends on how accurately the calls were detected in the first place. In this paper, we investigate the performance of three supervised deep-learning methods for automated USV segmentation: an autoencoder neural network (AE), a U-Net neural network (UNET), and a recurrent neural network (RNN). The models receive as input the spectrogram of the recorded audio track and return as output the regions in which USV calls have been detected. To evaluate the models, we built a dataset by recording several audio tracks and manually segmenting the corresponding USV spectrograms generated with the Avisoft software, producing the ground truth (GT) used for training. All three architectures achieved precision and recall scores exceeding [Formula: see text], with UNET and AE achieving values above [Formula: see text], surpassing the other state-of-the-art methods considered for comparison in this study. The evaluation was also extended to an external dataset, where UNET again exhibited the highest performance. We suggest that our experimental results may represent a valuable benchmark for future work.
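The pipeline described in this abstract (spectrogram in, detected call regions out) can be illustrated with a deliberately simple baseline. The sketch below is not one of the paper's deep models; it is a naive energy-threshold segmenter over the ultrasonic band, shown only to make the input/output contract concrete. All parameter values (frame size, band limits, threshold) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def segment_usv_frames(audio, fs, band=(30_000, 90_000), thresh_db=10.0):
    """Toy energy-based segmenter: returns (times, mask) where mask[i] is True
    when the ultrasonic-band power at frame i rises thresh_db above the median."""
    freqs, times, sxx = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)
    band_idx = (freqs >= band[0]) & (freqs <= band[1])
    power_db = 10 * np.log10(sxx[band_idx].sum(axis=0) + 1e-12)
    mask = power_db > (np.median(power_db) + thresh_db)
    return times, mask

def mask_to_regions(times, mask):
    """Collapse a boolean frame mask into (start_s, end_s) call regions."""
    regions, start = [], None
    for t, on in zip(times, mask):
        if on and start is None:
            start = t
        elif not on and start is not None:
            regions.append((start, t))
            start = None
    if start is not None:
        regions.append((start, times[-1]))
    return regions

# Synthetic check: 1 s of faint noise with a 50 kHz tone burst from 0.4 to 0.6 s.
fs = 250_000
t = np.arange(int(fs * 1.0)) / fs
audio = 0.01 * np.random.default_rng(0).standard_normal(t.size)
burst = (t >= 0.4) & (t < 0.6)
audio[burst] += np.sin(2 * np.pi * 50_000 * t[burst])
times, mask = segment_usv_frames(audio, fs)
regions = mask_to_regions(times, mask)
```

A learned segmenter such as the paper's U-Net would replace the thresholding step with per-frame predictions, but would consume and produce the same shapes.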
Affiliation(s)
- Daniele Baggi
- Department of Information Engineering, University of Brescia, Brescia, Italy
- Marika Premoli
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
- Alessandro Gnutti
- Department of Information Engineering, University of Brescia, Brescia, Italy
- Sara Anna Bonini
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
- Riccardo Leonardi
- Department of Information Engineering, University of Brescia, Brescia, Italy
- Maurizio Memo
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
3
Arnaud V, Pellegrino F, Keenan S, St-Gelais X, Mathevon N, Levréro F, Coupé C. Improving the workflow to crack Small, Unbalanced, Noisy, but Genuine (SUNG) datasets in bioacoustics: the case of bonobo calls. PLoS Comput Biol 2023; 19:e1010325. PMID: 37053268; PMCID: PMC10129004; DOI: 10.1371/journal.pcbi.1010325.
Abstract
Despite the accumulation of data and studies, deciphering animal vocal communication remains challenging. In most cases, researchers must deal with the sparse recordings composing Small, Unbalanced, Noisy, but Genuine (SUNG) datasets: a limited number of recordings, most often noisy, and unbalanced across individuals or vocalization categories. SUNG datasets therefore offer a valuable but inevitably distorted vision of communication systems, and adopting best practices in their analysis is essential to extract the available information effectively and draw reliable conclusions. Here we show that recent advances in machine learning, applied to a SUNG dataset, succeed in unraveling the complex vocal repertoire of the bonobo, and we propose a workflow that can be effective for other animal species. We implement acoustic parameterization in three feature spaces and run a supervised Uniform Manifold Approximation and Projection (S-UMAP) to evaluate how call types and individual signatures cluster in the bonobo acoustic space. We then implement three classification algorithms (support vector machine, xgboost, neural networks) and their combination to explore the structure and variability of bonobo calls, as well as the robustness of the individual signature they encode. We show how classification performance is affected by the feature set and identify the most informative features. In addition, we highlight the need to address data leakage when evaluating classification performance, to avoid misleading interpretations. Our results identify several practical approaches that generalize to other animal communication systems.
To improve the reliability and replicability of vocal communication studies with SUNG datasets, we thus recommend: i) comparing several acoustic parameterizations; ii) visualizing the dataset with supervised UMAP to examine the species' acoustic space; iii) adopting support vector machines as the baseline classification approach; iv) explicitly evaluating data leakage and, where needed, implementing a mitigation strategy.
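Recommendations iii and iv above can be sketched together: an SVM baseline whose cross-validation folds are grouped by individual, so that calls from one animal never appear in both training and test sets. The snippet below uses scikit-learn on synthetic data; the feature values, group sizes, and class structure are invented for illustration and do not come from the bonobo dataset.

```python
import numpy as np
from sklearn.model_selection import GroupKFold, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Toy SUNG-like dataset: 8 "individuals", each contributing calls of 3 types,
# with a per-individual offset that mimics an individual vocal signature.
n_per = 30
X, y, groups = [], [], []
for indiv in range(8):
    offset = rng.normal(0, 2.0, size=4)          # individual signature
    for call_type in range(3):
        center = np.eye(3, 4)[call_type] * 5.0   # call-type structure
        X.append(center + offset + rng.normal(0, 1.0, size=(n_per, 4)))
        y.append(np.full(n_per, call_type))
        groups.append(np.full(n_per, indiv))
X, y, groups = np.vstack(X), np.concatenate(y), np.concatenate(groups)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# Leaky evaluation: calls from the same individual land in train and test.
leaky = cross_val_score(
    clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0)).mean()

# Leakage-aware evaluation: GroupKFold keeps each individual in a single fold.
honest = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(4)).mean()
print(f"leaky CV accuracy:   {leaky:.2f}")
print(f"grouped CV accuracy: {honest:.2f}")
```

When the grouped score is clearly lower than the naively shuffled one, the gap estimates how much of the apparent performance came from individual signatures leaking into the evaluation.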
Affiliation(s)
- Vincent Arnaud
- Département des arts, des lettres et du langage, Université du Québec à Chicoutimi, Chicoutimi, Canada
- Laboratoire Dynamique Du Langage, UMR 5596, Université de Lyon, CNRS, Lyon, France
- François Pellegrino
- Laboratoire Dynamique Du Langage, UMR 5596, Université de Lyon, CNRS, Lyon, France
- Sumir Keenan
- ENES Bioacoustics Research Laboratory, University of Saint Étienne, CRNL, CNRS UMR 5292, Inserm UMR_S 1028, Saint-Étienne, France
- Xavier St-Gelais
- Département des arts, des lettres et du langage, Université du Québec à Chicoutimi, Chicoutimi, Canada
- Nicolas Mathevon
- ENES Bioacoustics Research Laboratory, University of Saint Étienne, CRNL, CNRS UMR 5292, Inserm UMR_S 1028, Saint-Étienne, France
- Florence Levréro
- ENES Bioacoustics Research Laboratory, University of Saint Étienne, CRNL, CNRS UMR 5292, Inserm UMR_S 1028, Saint-Étienne, France
- Christophe Coupé
- Laboratoire Dynamique Du Langage, UMR 5596, Université de Lyon, CNRS, Lyon, France
- Department of Linguistics, The University of Hong Kong, Hong Kong, China
4
Stoumpou V, Vargas CDM, Schade PF, Boyd JL, Giannakopoulos T, Jarvis ED. Analysis of Mouse Vocal Communication (AMVOC): a deep, unsupervised method for rapid detection, analysis and classification of ultrasonic vocalisations. Bioacoustics 2022. DOI: 10.1080/09524622.2022.2099973.
Affiliation(s)
- Vasiliki Stoumpou
- School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- César D. M. Vargas
- Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA
- Peter F. Schade
- Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA
- Laboratory of Neural Systems, The Rockefeller University, New York, NY, USA
- J. Lomax Boyd
- Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD, USA
- Theodoros Giannakopoulos
- Computational Intelligence Lab, Institute of Informatics and Telecommunications, National Center of Scientific Research 'Demokritos', Athens, Greece
- Erich D. Jarvis
- Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA
- Howard Hughes Medical Institute, Chevy Chase, MD, USA
5
Premoli M, Petroni V, Bulthuis R, Bonini SA, Pietropaolo S. Ultrasonic vocalizations in adult C57BL/6J mice: the role of sex differences and repeated testing. Front Behav Neurosci 2022; 16:883353. PMID: 35910678; PMCID: PMC9330122; DOI: 10.3389/fnbeh.2022.883353.
Abstract
Ultrasonic vocalizations (USVs) are a major tool for assessing social communication in laboratory mice across their entire lifespan. In adulthood, male mice preferentially emit USVs toward a female conspecific, while females mostly produce ultrasonic calls when facing an adult intruder of the same sex. Recent studies have developed several sophisticated tools to analyze adult mouse USVs, especially in males, because of the increasing relevance of adult communication for the behavioral phenotyping of mouse models of autism spectrum disorder (ASD). Little attention has instead been devoted to adult female USVs and to the impact of sex differences on the quantitative and qualitative characteristics of mouse USVs. Most studies have also focused on a single testing session, often without concomitant assessment of other social behaviors (e.g., sniffing), so little is known about the link between USVs and other aspects of social interaction, or about their stability or variation across multiple encounters. Here, we evaluated the USVs emitted by adult male and female mice during three repeated encounters with an unfamiliar female, with equal or different pre-testing isolation periods between sexes. We demonstrated clear sex differences in several USV characteristics and other social behaviors, and these were mostly stable across the encounters and independent of pre-testing isolation. The estrous cycle of the tested females exerted quantitative effects on their vocal and non-vocal behaviors, although it did not affect the qualitative composition of the ultrasonic calls. Our findings in B6 mice, the strain most widely used for the engineering of transgenic mouse lines, help provide new guidelines for assessing ultrasonic communication in adult male and female mice.
Affiliation(s)
- Marika Premoli
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
- Sara Anna Bonini
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
- Susanna Pietropaolo
- Univ. Bordeaux, CNRS, INCIA, UMR 5287, Bordeaux, France
6
Abbasi R, Balazs P, Marconi MA, Nicolakis D, Zala SM, Penn DJ. Capturing the songs of mice with an improved detection and classification method for ultrasonic vocalizations (BootSnap). PLoS Comput Biol 2022; 18:e1010049. PMID: 35551265; PMCID: PMC9098080; DOI: 10.1371/journal.pcbi.1010049.
Abstract
House mice communicate through ultrasonic vocalizations (USVs), which are above the range of human hearing (>20 kHz), and several automated methods have been developed for USV detection and classification. Here we evaluate their advantages and disadvantages in a full, systematic comparison, while also presenting a new approach. This study aims to 1) determine the most efficient USV detection tool among the existing methods, and 2) develop a classification model that is more generalizable than existing methods; in both cases, we aim to minimize the user intervention required for processing new data. We compared the performance of four detection methods in an out-of-the-box approach: the pretrained DeepSqueak detector, MUPET, USVSEG, and the Automatic Mouse Ultrasound Detector (A-MUD). We also compared these methods to human visual, or 'manual', classification (ground truth) after assessing its reliability. A-MUD and USVSEG outperformed the other methods in terms of true positive rates using default and adjusted settings, respectively, and A-MUD outperformed USVSEG when false detection rates were also considered. For automating the classification of USVs, we developed BootSnap for supervised classification, which combines bootstrapping on Gammatone spectrograms and convolutional neural networks with snapshot ensemble learning. It successfully classified calls into 12 types, including a new class of false positives that is useful for detection refinement. BootSnap outperformed the pretrained and retrained state-of-the-art tool, and is thus more generalizable. BootSnap is freely available for scientific use.
House mice and many other species use ultrasonic vocalizations to communicate in various contexts, including social and sexual interactions. These vocalizations are increasingly investigated in research on animal communication and as a phenotype for studying the genetic basis of autism and speech disorders. Because manual methods for analyzing vocalizations are extremely time consuming, automatic tools for detection and classification are needed. We evaluated the performance of the available tools for analyzing ultrasonic vocalizations, and we compared detection tools for the first time to manual methods ('ground truth') using recordings from wild-derived and laboratory mice. For the first time, the class-wise inter-observer reliability of the manual labels used for ground truth is analyzed and reported. Moreover, we developed a new classification method based on ensemble deep learning that is more generalizable than the current state-of-the-art tool (both pretrained and retrained). Our new classification method is free for scientific use.
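The bootstrapping-plus-ensembling idea described in this abstract can be reduced to a minimal sketch. The code below is not BootSnap (which pairs Gammatone-spectrogram CNNs with snapshot ensemble learning); it substitutes decision trees for the CNN snapshots to show, under that simplifying assumption, how bootstrap resampling and majority voting combine into a single classifier.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bootstrap_ensemble_predict(X_train, y_train, X_test, n_models=15, seed=0):
    """Bagging sketch: train each model on a bootstrap resample of the
    training set, then take a majority vote over the ensemble's predictions."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    votes = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)  # sample with replacement
        model = DecisionTreeClassifier(random_state=0)
        model.fit(X_train[idx], y_train[idx])
        votes.append(model.predict(X_test))
    votes = np.stack(votes)               # shape: (n_models, n_test)
    # Majority vote per test example.
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Toy two-class problem: class 0 near the origin, class 1 shifted by +3.
rng = np.random.default_rng(1)
X0 = rng.normal(0, 1, size=(100, 2))
X1 = rng.normal(3, 1, size=(100, 2))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 100 + [1] * 100)
X_test = np.array([[0.0, 0.0], [3.0, 3.0]])
print(bootstrap_ensemble_predict(X_train, y_train, X_test))  # -> [0 1]
```

Averaging over resampled models reduces the variance of any single unstable learner; snapshot ensembling achieves a related effect by saving several checkpoints of one network during training instead of retraining from scratch.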
Affiliation(s)
- Reyhaneh Abbasi
- Acoustic Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
- Vienna Doctoral School of Cognition, Behaviour and Neuroscience, University of Vienna, Vienna, Austria
- Peter Balazs
- Acoustic Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Maria Adelaide Marconi
- Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
- Doris Nicolakis
- Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
- Sarah M. Zala
- Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
- Dustin J. Penn
- Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
7
Stowell D. Computational bioacoustics with deep learning: a review and roadmap. PeerJ 2022; 10:e13152. PMID: 35341043; PMCID: PMC8944344; DOI: 10.7717/peerj.13152.
Abstract
Animal vocalisations and natural soundscapes are fascinating objects of study and contain valuable evidence about animal behaviours, populations, and ecosystems. They are studied in bioacoustics and ecoacoustics, with signal processing and analysis as an important component. Computational bioacoustics has accelerated in recent decades thanks to the growth of affordable digital sound recording devices and to huge progress in informatics such as big data, signal processing, and machine learning. Methods are inherited from the wider field of deep learning, including speech and image processing. However, the tasks, demands, and data characteristics are often different from those addressed in speech or music analysis. There remain unsolved problems, and tasks for which evidence is surely present in many acoustic signals but not yet realised. In this paper I review the state of the art in deep learning for computational bioacoustics, aiming to clarify key concepts and to identify and analyse knowledge gaps. Based on this, I offer a subjective but principled roadmap for computational bioacoustics with deep learning: topics that the community should aim to address in order to make the most of future developments in AI and informatics, and to use audio data in answering zoological and ecological questions.
Affiliation(s)
- Dan Stowell
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, Tilburg, The Netherlands
- Naturalis Biodiversity Center, Leiden, The Netherlands
8
Wightman PH, Henrichs DW, Collier BA, Chamberlain MJ. Comparison of methods for automated identification of wild turkey gobbles. Wildl Soc Bull 2022. DOI: 10.1002/wsb.1246.
Affiliation(s)
- Patrick H. Wightman
- Warnell School of Forestry and Natural Resources, University of Georgia, Athens, GA 30602, USA
- Darren W. Henrichs
- Department of Oceanography, Texas A&M University, College Station, TX 77843, USA
- Bret A. Collier
- School of Renewable Natural Resources, Louisiana State University Agricultural Center, Baton Rouge, LA 70803, USA
- Michael J. Chamberlain
- Warnell School of Forestry and Natural Resources, University of Georgia, Athens, GA 30602, USA
9
Lawson KA, Flores AY, Hokenson RE, Ruiz CM, Mahler SV. Nucleus accumbens chemogenetic inhibition suppresses amphetamine-induced ultrasonic vocalizations in male and female rats. Brain Sci 2021; 11:1255. PMID: 34679320; PMCID: PMC8534195; DOI: 10.3390/brainsci11101255.
Abstract
Adult rats emit ultrasonic vocalizations (USVs) related to their affective states, potentially providing information about their subjective experiences during behavioral neuroscience experiments. If so, USVs might provide an important link between invasive preclinical animal studies and human studies in which subjective states can be readily queried. Here, we induced USVs in male and female Long Evans rats using acute amphetamine (2 mg/kg) and asked how reversibly inhibiting nucleus accumbens neurons using designer receptors exclusively activated by designer drugs (DREADDs) impacts USV production. We analyzed USV characteristics using DeepSqueak software and manually categorized detected calls into four previously defined subtypes. We found that systemic administration of the DREADD agonist clozapine-N-oxide, relative to vehicle in the same rats, suppressed the number of frequency-modulated and trill-containing USVs without impacting high-frequency, unmodulated (flat) USVs or the small number of low-frequency USVs observed. Using chemogenetics, these results thus confirm that nucleus accumbens neurons are essential for the production of amphetamine-induced frequency-modulated USVs. They also support the premise of further investigating the characteristics and subcategories of these calls as a window into the subjective effects of neural manipulations, with potential future clinical applications.
Affiliation(s)
- Stephen V. Mahler
- Department of Neurobiology & Behavior, University of California, Irvine, 1203 McGaugh Hall, Irvine, CA 92697, USA (K.A.L.; A.Y.F.; R.E.H.; C.M.R.)