1. Wei Q, Wang J, Zhai G, Pang R, Yu H, Deng Q, Liu X, Zhou Y. PursuitNet: A deep learning model for predicting competitive pursuit-like behavior in mice. Brain Res 2025; 1858:149634. PMID: 40210144; DOI: 10.1016/j.brainres.2025.149634.
Abstract
Predator-prey interactions exemplify adaptive intelligence refined by evolution, yet replicating these behaviors in artificial systems remains challenging. Here, we introduce PursuitNet, a deep learning framework specifically designed to model the competitive, real-time dynamics of pursuit-escape scenarios. Our approach is anchored by the Pursuit-Escape Confrontation (PEC) dataset, which records laboratory mice chasing a magnetically controlled robotic bait programmed to evade capture. Unlike conventional trajectory datasets, PEC emphasizes abrupt speed changes, evasive maneuvers, and continuous mutual adaptation. PursuitNet integrates a lightweight architecture that explicitly models dynamic interactions and spatial relationships using Graph Convolutional Networks, and fuses velocity and acceleration data to predict change using Temporal Convolutional Networks. In empirical evaluations, it outperforms standard models such as Social GAN and TUTR, exhibiting substantially lower displacement errors on the PEC dataset. Ablation experiments confirm that integrating spatial and temporal features is crucial for predicting the erratic turns and speed modulations inherent to pursuit-escape behavior. Beyond accurate trajectory prediction, PursuitNet simulates pursuit events that closely mirror real mouse-and-bait interactions, shedding light on how innate drives, rather than external instructions, guide adaptive decision-making. Although the framework is specialized for rapidly shifting trajectories, our findings suggest that this biologically inspired perspective can deepen understanding of predator-prey dynamics and inform the design of interactive robotics and autonomous systems.
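As an illustration of the architecture outlined in this abstract (a graph module over interacting agents feeding a temporal convolution over motion features), the sketch below pairs one graph-convolution step over a two-node mouse-and-bait graph with dilated temporal convolutions in plain PyTorch. Layer sizes, the adjacency matrix, and the output parameterization are illustrative assumptions, not the authors' released PursuitNet code.

```python
# Minimal sketch of a GCN-over-agents + TCN-over-time trajectory predictor.
# Shapes and hyperparameters are illustrative, not those of PursuitNet.
import torch
import torch.nn as nn

class TinyPursuitPredictor(nn.Module):
    def __init__(self, in_feats=4, hidden=32, horizon=12):
        super().__init__()
        self.horizon = horizon
        self.node_proj = nn.Linear(in_feats, hidden)   # per-agent features: x, y, vx, vy
        self.gcn_weight = nn.Linear(hidden, hidden)    # shared weights of one graph-convolution step
        self.tcn = nn.Sequential(                      # dilated temporal convolutions over the window
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, horizon * 2)     # future (dx, dy) offsets of the pursuer

    def forward(self, feats, adj):
        # feats: (batch, time, agents, in_feats); adj: (agents, agents) normalized adjacency
        h = torch.relu(self.node_proj(feats))
        h = torch.relu(torch.einsum("ij,btjf->btif", adj, self.gcn_weight(h)))  # graph message passing
        h = h[:, :, 0, :]                              # keep the pursuer (mouse) node
        h = self.tcn(h.transpose(1, 2)).mean(dim=2)    # pool over time -> (batch, hidden)
        return self.head(h).view(-1, self.horizon, 2)

# Toy usage: mouse and bait as two graph nodes, 8 observed frames, 4 features per node.
adj = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
model = TinyPursuitPredictor()
future = model(torch.randn(1, 8, 2, 4), adj)
print(future.shape)  # torch.Size([1, 12, 2])
```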
Affiliation(s)
- Qiaoqian Wei
- Guangxi Key Laboratory of Special Biomedicine and Advanced Institute for Brain and Intelligence, School of Medicine, Guangxi University, Nanning 530004, China; Department of Neurobiology, College of Basic Medicine, Army Medical University, Chongqing 400038, China
- Jincheng Wang
- Department of Neurobiology, College of Basic Medicine, Army Medical University, Chongqing 400038, China
- Guifeng Zhai
- Guangxi Key Laboratory of Special Biomedicine and Advanced Institute for Brain and Intelligence, School of Medicine, Guangxi University, Nanning 530004, China; Department of Neurobiology, College of Basic Medicine, Army Medical University, Chongqing 400038, China
- RuiQi Pang
- Guangxi Key Laboratory of Special Biomedicine and Advanced Institute for Brain and Intelligence, School of Medicine, Guangxi University, Nanning 530004, China; Department of Neurobiology, College of Basic Medicine, Army Medical University, Chongqing 400038, China
- Haipeng Yu
- Department of Neurobiology, College of Basic Medicine, Army Medical University, Chongqing 400038, China; Advanced Institute for Brain and Intelligence, School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Qiyue Deng
- Department of Neurobiology, College of Basic Medicine, Army Medical University, Chongqing 400038, China.
- Xue Liu
- Department of Neurobiology, College of Basic Medicine, Army Medical University, Chongqing 400038, China.
- Yi Zhou
- Department of Neurobiology, College of Basic Medicine, Army Medical University, Chongqing 400038, China.
2. Coulombe V, Rivera AM, Monfared S, Roussel DA, Leboulleux Q, Peralta MR, Gosselin B, Labonté B. The Tailtag: A multi-mouse tracking system to measure social dynamics in complex environments. Neuropsychopharmacology 2025. PMID: 40404959; DOI: 10.1038/s41386-025-02126-y.
Abstract
Despite recent advances, tracking individual movements safely and reliably over extended periods, particularly within complex social groups, remains a challenge. Traditional methods like color coding, tagging, and RFID tracking, while effective, have notable practical limitations. State-of-the-art neural network-based trackers often struggle to maintain individual identities in large groups for more than a few seconds. Fiducial tags such as ArUco codes present a potential solution to enable accurate tracking and identity management. However, their application to large groups of socially interacting mice in complex, enriched environments remains an open challenge. Here, we present the Tailtag system, a novel approach designed to address this challenge. The Tailtag is a non-invasive, safe, and ergonomic tail ring embedded with an ArUco marker that allows individual mice to be tracked in colonies of up to 20 individuals in complex environments for at least seven days without performance degradation or behavioral alteration. We provide a comprehensive parameter optimization guide and practical recommendations for marker selection to support reproducibility across diverse experimental setups. Using data collected from Tailtag-equipped mice, we revealed the formation and evolution of social groups within the colony. Our analysis identified social hub regions within the vivarium where social contacts occur at different frequencies throughout one week of recordings. We quantified interactions and avoidance patterns between specific pairs of mice within the most active social hubs. Overall, our findings indicate that while the zone preferences and peer associations among the mice change over time, certain groups and pairwise interactions consistently form within the social colony.
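The identification step described above, reading the ArUco code on each tail ring frame by frame, can be sketched with OpenCV's contrib aruco module. The dictionary, the video file name, and the use of marker centroids below are assumptions for illustration, not the Tailtag authors' settings.

```python
# Sketch: per-frame ArUco detection for individual identification.
# Assumes opencv-contrib-python >= 4.7; dictionary and video path are placeholders.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture("colony_recording.mp4")  # hypothetical file name
tracks = {}  # marker id -> list of (frame_index, x, y)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        for marker_corners, marker_id in zip(corners, ids.flatten()):
            cx, cy = marker_corners[0].mean(axis=0)  # centre of the four corner points
            tracks.setdefault(int(marker_id), []).append((frame_idx, float(cx), float(cy)))
    frame_idx += 1
cap.release()
```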
Affiliation(s)
- Vincent Coulombe
- CERVO Brain Research Centre, Université Laval, Québec, QC, Canada
- Smart Biomedical Microsystems Laboratory, Quebec City, QC, Canada
- Department of electronics, Faculty of Science and Engineering, Université Laval, Québec, QC, Canada
- Arturo Marroquin Rivera
- CERVO Brain Research Centre, Université Laval, Québec, QC, Canada
- Department of Psychiatry and Neuroscience, Faculty of Medicine, Université Laval, Québec, QC, Canada
- Sadegh Monfared
- CERVO Brain Research Centre, Université Laval, Québec, QC, Canada
- Smart Biomedical Microsystems Laboratory, Quebec City, QC, Canada
- Department of electronics, Faculty of Science and Engineering, Université Laval, Québec, QC, Canada
- David-Alexandre Roussel
- Smart Biomedical Microsystems Laboratory, Quebec City, QC, Canada
- Department of electronics, Faculty of Science and Engineering, Université Laval, Québec, QC, Canada
- Benoit Gosselin
- Smart Biomedical Microsystems Laboratory, Quebec City, QC, Canada
- Department of electronics, Faculty of Science and Engineering, Université Laval, Québec, QC, Canada
- Benoit Labonté
- CERVO Brain Research Centre, Université Laval, Québec, QC, Canada.
- Department of Psychiatry and Neuroscience, Faculty of Medicine, Université Laval, Québec, QC, Canada.
3. Tang G, Han Y, Sun X, Zhang R, Han MH, Liu Q, Wei P. Anti-drift pose tracker (ADPT), a transformer-based network for robust animal pose estimation cross-species. eLife 2025; 13:RP95709. PMID: 40326557; PMCID: PMC12055000; DOI: 10.7554/elife.95709.
Abstract
Deep learning-based methods have advanced animal pose estimation, enhancing accuracy and efficiency in quantifying animal behavior. However, these methods frequently experience tracking drift, where noise-induced jumps in body point estimates compromise reliability. Here, we present the anti-drift pose tracker (ADPT), a transformer-based tool that mitigates tracking drift in behavioral analysis. Extensive experiments across cross-species datasets, including proprietary mouse and monkey recordings and public Drosophila and macaque datasets, demonstrate that ADPT significantly reduces drift and surpasses existing models like DeepLabCut and SLEAP in accuracy. Moreover, ADPT achieved 93.16% identification accuracy for 10 unmarked mice and 90.36% accuracy for freely interacting unmarked mice, which can be further refined to 99.72%, enhancing both anti-drift performance and pose estimation accuracy in social interactions. With its end-to-end design, ADPT is computationally efficient and suitable for real-time analysis, offering a robust solution for reproducible animal behavior studies. The ADPT code is available at https://github.com/tangguoling/ADPT.
Affiliation(s)
- Guoling Tang
- University of Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Yaning Han
- University of Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Xing Sun
- University of Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Ruonan Zhang
- Guangxi University of Science and Technology, Liuzhou, China
- Ming-Hu Han
- University of Chinese Academy of Sciences, Shenzhen, China
- Shenzhen University of Advanced Technology, Shenzhen, China
- Quanying Liu
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, China
- Pengfei Wei
- University of Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
4. Wang Y, Su F, Cong R, Liu M, Shan K, Li X, Zhu D, Wei Y, Dai J, Zhang C, Tian Y. High-throughput markerless pose estimation and home-cage activity analysis of tree shrew using deep learning. Animal Model Exp Med 2025; 8:896-905. PMID: 39846430; DOI: 10.1002/ame2.12530.
Abstract
BACKGROUND Quantifying the rich home-cage activities of tree shrews provides a reliable basis for understanding their daily routines and building disease models. However, due to the lack of effective behavioral methods, most efforts on tree shrew behavior are limited to simple measures, resulting in the loss of much behavioral information. METHODS To address this issue, we present a deep learning (DL) approach to achieve markerless pose estimation and recognize multiple spontaneous behaviors of tree shrews, including drinking, eating, resting, and staying in the dark house. RESULTS This high-throughput approach can monitor the home-cage activities of 16 tree shrews simultaneously over an extended period. Additionally, we demonstrated an innovative system with reliable apparatus, paradigms, and analysis methods for investigating food grasping behavior. The median duration for each bout of grasping was 0.20 s. CONCLUSION This study provides an efficient tool for quantifying and understanding tree shrews' natural behaviors.
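A small sketch of how per-frame behavior labels, such as those produced by the classifier described above, can be collapsed into bouts to obtain statistics like the reported median grasp duration. The frame rate and the label codes are assumptions for illustration, not values from the paper.

```python
# Sketch: collapse per-frame labels into bouts and summarize their durations.
import numpy as np

def bout_durations(labels, target, fps):
    """Return durations (s) of consecutive runs of `target` in a 1-D label sequence."""
    labels = np.asarray(labels)
    is_target = np.concatenate(([0], (labels == target).astype(int), [0]))
    edges = np.diff(is_target)
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    return (ends - starts) / fps

# Toy example: 'g' marks grasping frames at an assumed 30 fps.
labels = list("....gggg...gggggg..ggg.")
durations = bout_durations(labels, "g", fps=30)
print(durations, "median:", np.median(durations))
```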
Affiliation(s)
- Yangzhen Wang
- Department of Automation, Tsinghua University, Beijing, China
- Feng Su
- College of Future Technology, Peking University, Beijing, China
- Rixu Cong
- Ministry of Education, Key Laboratory of Cell Proliferation and Differentiation, College of Life Sciences, Peking University, Beijing, China
- Mengna Liu
- School of Basic Medical Sciences, Beijing Key Laboratory of Neural Regeneration and Repair, Advanced Innovation Center for Human Brain Protection, Capital Medical University, Beijing, China
- Kaichen Shan
- Department of Automation, Tsinghua University, Beijing, China
- Xiaying Li
- Laboratory Animal Center, School of Life Sciences, Peking University, Beijing, China
- Desheng Zhu
- Laboratory Animal Center, School of Life Sciences, Peking University, Beijing, China
- Yusheng Wei
- Laboratory Animal Center, School of Life Sciences, Peking University, Beijing, China
- Jiejie Dai
- Institute of Medical Biology, Chinese Academy of Medical Sciences and Peking Union Medical College, Kunming, China
- Chen Zhang
- School of Basic Medical Sciences, Beijing Key Laboratory of Neural Regeneration and Repair, Advanced Innovation Center for Human Brain Protection, Capital Medical University, Beijing, China
- Yonglu Tian
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
5. Boneh-Shitrit T, Finka L, Mills DS, Luna SP, Dalla Costa E, Zamansky A, Bremhorst A. A segment-based framework for explainability in animal affective computing. Sci Rep 2025; 15:13670. PMID: 40258884; PMCID: PMC12012102; DOI: 10.1038/s41598-025-96634-y.
Abstract
Recent developments in animal motion tracking and pose recognition have revolutionized the study of animal behavior. More recent efforts extend beyond tracking towards affect recognition using facial and body language analysis, with far-reaching applications in animal welfare and health. Deep learning models are the most commonly used in this context. However, their "black box" nature poses a significant challenge to explainability, which is vital for building trust and encouraging adoption among researchers. Despite its importance, the field of explainability and its quantification remains under-explored. Saliency maps are among the most widely used methods for explainability, where each pixel is assigned a significance level indicating its relevance to the neural network's decision. Although these maps are frequently used in research, they are predominantly applied qualitatively, with limited methods for quantitatively analyzing them or identifying the most suitable method for a specific task. In this paper, we propose a framework aimed at enhancing explainability in the field of animal affective computing. Assuming the availability of a classifier for a specific affective state and the ability to generate saliency maps, our approach focuses on evaluating and comparing visual explanations by emphasizing the importance of meaningful semantic parts captured as segments, which are thought to be closely linked to behavioral indicators of affective states. Furthermore, our approach introduces a quantitative scoring mechanism to assess how well the saliency maps generated by a given classifier align with predefined semantic regions. This scoring system allows for systematic, measurable comparisons of different pipelines in terms of their visual explanations within animal affective computing. Such a metric can serve as a quality indicator when developing classifiers for known biologically relevant segments or help researchers assess whether a classifier is using expected meaningful regions when exploring new potential indicators. We evaluated the framework using three datasets focused on cat and horse pain and dog emotions. Across all datasets, the generated explanations consistently revealed that the eye area is the most significant feature for the classifiers. These results highlight the potential of explainability frameworks such as the one proposed here to uncover new insights into how machines 'see' animal affective states.
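The scoring mechanism described above, quantifying how well a saliency map aligns with a predefined semantic segment such as the eye region, can be sketched as a simple mass-overlap score. The normalization and the binary-mask representation are assumptions; the paper's exact metric may differ.

```python
# Sketch: score how much of a saliency map's mass falls inside a semantic segment.
import numpy as np

def segment_alignment_score(saliency, segment_mask):
    """Fraction of total saliency mass inside the segment (0..1)."""
    saliency = np.clip(saliency, 0, None).astype(float)
    total = saliency.sum()
    if total == 0:
        return 0.0
    return float(saliency[segment_mask.astype(bool)].sum() / total)

# Toy example: 4x4 saliency map with an eye-region mask in the top-left corner.
saliency = np.array([[0.9, 0.8, 0.0, 0.0],
                     [0.7, 0.6, 0.1, 0.0],
                     [0.0, 0.0, 0.1, 0.0],
                     [0.0, 0.0, 0.0, 0.1]])
eye_mask = np.zeros((4, 4), dtype=bool)
eye_mask[:2, :2] = True
print(segment_alignment_score(saliency, eye_mask))  # ~0.91
```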
Affiliation(s)
- Lauren Finka
- Cats Protection, National Cat Centre, Chelwood Gate, Sussex, UK
- Daniel S Mills
- School of Life & Environmental Sciences, Joseph Banks Laboratories, University of Lincoln, Lincoln, UK
- Stelio P Luna
- School of Veterinary Medicine and Animal Science, São Paulo State University (Unesp), São Paulo, Brazil
- Emanuella Dalla Costa
- Department of Veterinary Medicine and Animal Sciences, University of Milan, Milan, Italy
- Anna Zamansky
- Information Systems Department, University of Haifa, Haifa, Israel.
- Annika Bremhorst
- Dogs and Science, Bern, Switzerland
- Department for Clinical Veterinary Science, Vetsuisse Faculty, University of Bern, Bern, Switzerland
6. Schulz A, Vetter J, Gao R, Morales D, Lobato-Rios V, Ramdya P, Gonçalves PJ, Macke JH. Modeling conditional distributions of neural and behavioral data with masked variational autoencoders. Cell Rep 2025; 44:115338. PMID: 39985768; DOI: 10.1016/j.celrep.2025.115338.
Abstract
Extracting the relationship between high-dimensional neural recordings and complex behavior is a ubiquitous problem in neuroscience. Encoding and decoding models target the conditional distribution of neural activity given behavior and vice versa, while dimensionality reduction techniques extract low-dimensional representations thereof. Variational autoencoders (VAEs) are flexible tools for inferring such low-dimensional embeddings but struggle to accurately model arbitrary conditional distributions such as those arising in neural encoding and decoding, let alone simultaneously. Here, we present a VAE-based approach for calculating such conditional distributions. We first validate our approach on a task with known ground truth. Next, we retrieve conditional distributions over masked body parts of walking flies. Finally, we decode motor trajectories from neural activity in a monkey-reach task and query the same VAE for the encoding distribution. Our approach unifies dimensionality reduction and learning conditional distributions, allowing the scaling of common analyses in neuroscience to today's high-dimensional multi-modal datasets.
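The central trick, training one VAE whose encoder sees randomly masked inputs so that any conditional (encoding or decoding direction) can be queried at test time by masking the corresponding variables, can be roughly sketched as below. The dimensions, the Gaussian observation model, and the masking scheme are illustrative assumptions, not the authors' architecture.

```python
# Rough sketch of a masked-VAE training step: the encoder receives data with a
# random subset of dimensions zeroed out plus the mask itself, while the
# reconstruction loss is applied to all dimensions.
import torch
import torch.nn as nn

D, Z = 20, 4  # observed dimensions (e.g. pose + neural features), latent size
encoder = nn.Sequential(nn.Linear(2 * D, 64), nn.ReLU(), nn.Linear(64, 2 * Z))
decoder = nn.Sequential(nn.Linear(Z, 64), nn.ReLU(), nn.Linear(64, D))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def train_step(x):
    mask = (torch.rand_like(x) > 0.5).float()              # 1 = observed, 0 = masked out
    mu, logvar = encoder(torch.cat([x * mask, mask], dim=1)).chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
    recon = decoder(z)
    recon_loss = ((recon - x) ** 2).sum(dim=1).mean()      # reconstruct all dims, masked or not
    kl = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp())).sum(dim=1).mean()
    loss = recon_loss + kl
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

print(train_step(torch.randn(128, D)))  # toy batch
# At test time, masking e.g. the behavioral dimensions and decoding yields samples of
# behavior conditioned on the observed neural dimensions, and vice versa.
```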
Affiliation(s)
- Auguste Schulz
- Machine Learning in Science, University of Tübingen & Tübingen AI Center, Tübingen, Germany.
- Julius Vetter
- Machine Learning in Science, University of Tübingen & Tübingen AI Center, Tübingen, Germany
- Richard Gao
- Machine Learning in Science, University of Tübingen & Tübingen AI Center, Tübingen, Germany
- Daniel Morales
- Neuroengineering Laboratory, Brain Mind Institute & Interfaculty Institute of Bioengineering, EPFL, Lausanne, Switzerland
- Victor Lobato-Rios
- Neuroengineering Laboratory, Brain Mind Institute & Interfaculty Institute of Bioengineering, EPFL, Lausanne, Switzerland
- Pavan Ramdya
- Neuroengineering Laboratory, Brain Mind Institute & Interfaculty Institute of Bioengineering, EPFL, Lausanne, Switzerland
- Pedro J Gonçalves
- Machine Learning in Science, University of Tübingen & Tübingen AI Center, Tübingen, Germany; VIB-Neuroelectronics Research Flanders (NERF), Leuven, Belgium; Imec, Leuven, Belgium; Department of Computer Science, KU Leuven, Leuven, Belgium; Department of Electrical Engineering, KU Leuven, Leuven, Belgium
- Jakob H Macke
- Machine Learning in Science, University of Tübingen & Tübingen AI Center, Tübingen, Germany; Max Planck Institute for Intelligent Systems, Tübingen, Germany.
7. Ye J, Xu Y, Huang K, Wang X, Wang L, Wang F. Hierarchical behavioral analysis framework as a platform for standardized quantitative identification of behaviors. Cell Rep 2025; 44:115239. PMID: 40010299; DOI: 10.1016/j.celrep.2025.115239.
Abstract
Behavior is composed of modules that operate based on inherent logic. Understanding behavior and its neural mechanisms is facilitated by clear structural behavioral analysis. Here, we developed a hierarchical behavioral analysis framework (HBAF) that efficiently reveals the organizational logic of these modules by analyzing high-dimensional behavioral data. By creating a spontaneous behavior atlas for male and female mice, we discovered that spontaneous behavior patterns are hardwired, with sniffing serving as the hub node for movement transitions. The sniffing-to-grooming ratio accurately distinguished the spontaneous behavioral states in a high-throughput manner. These states are influenced by emotional status, circadian rhythms, and lighting conditions, leading to unique behavioral characteristics, spatiotemporal features, and dynamic patterns. By implementing the straightforward and achievable spontaneous behavior paradigm, HBAF enables swift and accurate assessment of animal behavioral states and bridges the gap between a theoretical understanding of the behavioral structure and practical analysis using comprehensive multidimensional behavioral information.
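Two of the quantities highlighted above, the transition structure between behavior modules and the sniffing-to-grooming ratio, can be computed directly from a sequence of behavior labels. The label names and the normalization in this sketch are assumptions for illustration.

```python
# Sketch: behavior-module transition matrix and sniffing-to-grooming ratio.
import numpy as np

def transition_matrix(labels):
    states = sorted(set(labels))
    index = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for a, b in zip(labels[:-1], labels[1:]):
        if a != b:                        # count transitions between different modules only
            counts[index[a], index[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    return states, probs

labels = ["sniff", "walk", "sniff", "groom", "sniff", "rear", "sniff", "groom"]
states, P = transition_matrix(labels)
ratio = labels.count("sniff") / max(labels.count("groom"), 1)  # sniffing-to-grooming ratio
print(states)
print(P.round(2))
print("sniff/groom ratio:", ratio)
```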
Affiliation(s)
- Jialin Ye
- Shenzhen Key Laboratory of Neuropsychiatric Modulation, Shenzhen-Hong Kong Institute of Brain Science, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, the Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yang Xu
- Shenzhen Key Laboratory of Neuropsychiatric Modulation, Shenzhen-Hong Kong Institute of Brain Science, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, the Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Kang Huang
- Shenzhen Key Laboratory of Neuropsychiatric Modulation, Shenzhen-Hong Kong Institute of Brain Science, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, the Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xinyu Wang
- Shenzhen Key Laboratory of Neuropsychiatric Modulation, Shenzhen-Hong Kong Institute of Brain Science, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, the Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Liping Wang
- Shenzhen Key Laboratory of Neuropsychiatric Modulation, Shenzhen-Hong Kong Institute of Brain Science, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, the Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; University of Chinese Academy of Sciences, Beijing 101408, China.
- Feng Wang
- Shenzhen Key Laboratory of Neuropsychiatric Modulation, Shenzhen-Hong Kong Institute of Brain Science, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, the Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; University of Chinese Academy of Sciences, Beijing 101408, China.
8. Cheng C, Huang Z, Zhang R, Huang G, Wang H, Tang L, Wang X. A real-time, multi-subject three-dimensional pose tracking system for the behavioral analysis of non-human primates. Cell Rep Methods 2025; 5:100986. PMID: 39965567; PMCID: PMC11955267; DOI: 10.1016/j.crmeth.2025.100986.
Abstract
The ability to track the positions and poses of multiple animals in three-dimensional (3D) space in real time is highly desired by non-human primate (NHP) researchers in behavioral and systems neuroscience. This capability enables the analysis of social behaviors involving multiple NHPs and supports closed-loop experiments. Although several animal 3D pose tracking systems have been developed, most are difficult to deploy in new environments and lack real-time analysis capabilities. To address these limitations, we developed MarmoPose, a deep-learning-based, real-time 3D pose tracking system for multiple common marmosets, an increasingly critical NHP model in neuroscience research. This system can accurately track the 3D poses of multiple marmosets freely moving in their home cage with minimal hardware requirements. By employing a marmoset skeleton model, MarmoPose can further optimize 3D poses and estimate invisible body locations. Additionally, MarmoPose achieves high inference speeds and enables real-time closed-loop experimental control based on events detected from 3D poses.
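The geometric core of a multi-camera 3D pose system like the one described above is triangulating each keypoint from calibrated views, typically by direct linear transformation (DLT). The sketch below is a generic DLT triangulation that assumes the camera projection matrices are already known; it is not MarmoPose's implementation.

```python
# Sketch: DLT triangulation of one keypoint seen by several calibrated cameras.
import numpy as np

def triangulate(projection_matrices, points_2d):
    """projection_matrices: list of 3x4 arrays; points_2d: matching (u, v) pixel coordinates."""
    rows = []
    for P, (u, v) in zip(projection_matrices, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]               # homogeneous -> Euclidean 3D point

# Toy check with two synthetic cameras observing the point (0.1, 0.2, 2.0).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])   # second camera shifted along x
point = np.array([0.1, 0.2, 2.0, 1.0])
uv1 = P1 @ point
uv1 = uv1[:2] / uv1[2]
uv2 = P2 @ point
uv2 = uv2[:2] / uv2[2]
print(triangulate([P1, P2], [uv1, uv2]))  # ~[0.1, 0.2, 2.0]
```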
Affiliation(s)
- Chaoqun Cheng
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China; School of Biomedical Engineering, Tsinghua University, Beijing, China
- Zijian Huang
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China; School of Biomedical Engineering, Tsinghua University, Beijing, China
- Ruiming Zhang
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Guozheng Huang
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China; School of Biomedical Engineering, Tsinghua University, Beijing, China
- Han Wang
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Likai Tang
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China; School of Biomedical Engineering, Tsinghua University, Beijing, China
- Xiaoqin Wang
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China; School of Biomedical Engineering, Tsinghua University, Beijing, China; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA.
9. Duporge I, Pereira T, de Obeso SC, Bright Ross JG, Lee SJ, Hindle AG. The utility of animal models to inform the next generation of human space exploration. NPJ Microgravity 2025; 11:7. PMID: 39984492; PMCID: PMC11845785; DOI: 10.1038/s41526-025-00460-5.
Abstract
Animals have played a vital role in every stage of space exploration, from early sub-orbital flights to contemporary missions. New physiological and psychological challenges arise with plans to venture deeper into the solar system. Advances in chimeric and knockout animal models, along with genetic modification techniques have enhanced our ability to study the effects of microgravity in greater detail. However, increased investment in the purposeful design of habitats and payloads, as well as in AI-enhanced behavioral monitoring in orbit can better support the ethical and effective use of animals in deep space research.
Affiliation(s)
- Isla Duporge
- Department of Ecology and Evolutionary Biology, Princeton University, Princeton, NJ, USA.
- National Academy of Sciences, Washington, DC, USA.
- Talmo Pereira
- Salk Institute for Biological Studies, La Jolla, CA, USA
- Julius G Bright Ross
- Wildlife Conservation Research Unit, Department of Biology, University of Oxford, Oxford, England, UK
- Stephen J Lee
- National Academy of Sciences, Washington, DC, USA
- U.S. Army Research Laboratory, Army Research Office, Durham, NC, USA
- Allyson G Hindle
- School of Life Sciences, University of Nevada Las Vegas, Las Vegas, NV, USA
10. Azechi H, Takahashi S. vmTracking enables highly accurate multi-animal pose tracking in crowded environments. PLoS Biol 2025; 23:e3003002. PMID: 39928646; PMCID: PMC11845028; DOI: 10.1371/journal.pbio.3003002.
Abstract
In multi-animal tracking, addressing occlusion and crowding is crucial for accurate behavioral analysis. However, in situations where occlusion and crowding generate complex interactions, achieving accurate pose tracking remains challenging. Therefore, we introduced virtual marker tracking (vmTracking), which uses virtual markers for individual identification. Virtual markers are labels derived from conventional markerless multi-animal tracking tools, such as multi-animal DeepLabCut (maDLC) and Social LEAP Estimates Animal Poses (SLEAP). Unlike physical markers, virtual markers exist only within the video and attribute features to individuals, enabling consistent identification throughout the entire video while keeping the animals markerless in reality. Using these markers as cues, annotations were applied to multi-animal videos, and tracking was conducted with single-animal DeepLabCut (saDLC) and SLEAP's single-animal method. vmTracking minimized manual corrections and annotation frames needed for training, efficiently tackling occlusion and crowding. Experiments tracking multiple mice, fish, and human dancers confirmed vmTracking's variability and applicability. These findings could enhance the precision and reliability of tracking methods used in the analysis of complex naturalistic and social behaviors in animals, providing a simpler yet more effective solution.
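The virtual-marker idea, writing identities obtained from a first-pass markerless tracker into the video itself so that annotation and single-animal training proceed on labeled footage, can be sketched as a simple overlay step. The tracker output format, text overlay, and file names below are assumptions; vmTracking itself derives the virtual markers from maDLC or SLEAP output.

```python
# Sketch: burn virtual identity markers from a first-pass tracker into the video frames.
import cv2

def write_virtual_marker_video(video_in, video_out, centroids):
    """centroids: dict frame_index -> list of (animal_id, x, y) from a markerless tracker."""
    cap = cv2.VideoCapture(video_in)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(video_out, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for animal_id, x, y in centroids.get(idx, []):
            cv2.putText(frame, str(animal_id), (int(x), int(y)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
        out.write(frame)
        idx += 1
    cap.release()
    out.release()
```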
Affiliation(s)
- Hirotsugu Azechi
- Laboratory of Cognitive and Behavioral Neuroscience, Graduate School of Brain Science, Doshisha University, Kyotanabe, Japan
- Susumu Takahashi
- Laboratory of Cognitive and Behavioral Neuroscience, Graduate School of Brain Science, Doshisha University, Kyotanabe, Japan
11. Teicher G, Riffe RM, Barnaby W, Martin G, Clayton BE, Trapani JG, Downes GB. Marigold: a machine learning-based web app for zebrafish pose tracking. BMC Bioinformatics 2025; 26:30. PMID: 39875867; PMCID: PMC11773884; DOI: 10.1186/s12859-025-06042-2.
Abstract
BACKGROUND High-throughput behavioral analysis is important for drug discovery, toxicological studies, and the modeling of neurological disorders such as autism and epilepsy. Zebrafish embryos and larvae are ideal for such applications because they are spawned in large clutches, develop rapidly, feature a relatively simple nervous system, and have orthologs to many human disease genes. However, existing software for video-based behavioral analysis can be incompatible with recordings that contain dynamic backgrounds or foreign objects, lack support for multiwell formats, require expensive hardware, and/or demand considerable programming expertise. Here, we introduce Marigold, a free and open source web app for high-throughput behavioral analysis of embryonic and larval zebrafish. RESULTS Marigold features an intuitive graphical user interface, tracks up to 10 user-defined keypoints, supports both single- and multiwell formats, and exports a range of kinematic parameters in addition to publication-quality data visualizations. By leveraging a highly efficient, custom-designed neural network architecture, Marigold achieves reasonable training and inference speeds even on modestly powered computers lacking a discrete graphics processing unit. Notably, as a web app, Marigold does not require any installation and runs within popular web browsers on ChromeOS, Linux, macOS, and Windows. To demonstrate Marigold's utility, we used two sets of biological experiments. First, we examined novel aspects of the touch-evoked escape response in techno trousers (tnt) mutant embryos, which contain a previously described loss-of-function mutation in the gene encoding Eaat2b, a glial glutamate transporter. We identified differences and interactions between touch location (head vs. tail) and genotype. Second, we investigated the effects of feeding on larval visuomotor behavior at 5 and 7 days post-fertilization (dpf). We found differences in the number and vigor of swimming bouts between fed and unfed fish at both time points, as well as interactions between developmental stage and feeding regimen. CONCLUSIONS In both biological experiments presented here, the use of Marigold facilitated novel behavioral findings. Marigold's ease of use, robust pose tracking, amenability to diverse experimental paradigms, and flexibility regarding hardware requirements make it a powerful tool for analyzing zebrafish behavior, especially in low-resource settings such as course-based undergraduate research experiences. Marigold is available at: https://downeslab.github.io/marigold/ .
Affiliation(s)
- Gregory Teicher
- Biology Department, University of Massachusetts Amherst, Amherst, MA, USA.
- Molecular and Cellular Biology Graduate Program, University of Massachusetts Amherst, Amherst, MA, USA.
- R Madison Riffe
- Biology Department, University of Massachusetts Amherst, Amherst, MA, USA
- Neuroscience and Behavior Graduate Program, University of Massachusetts Amherst, Amherst, MA, USA
- Wayne Barnaby
- Biology Department, University of Massachusetts Amherst, Amherst, MA, USA
- Neuroscience and Behavior Graduate Program, University of Massachusetts Amherst, Amherst, MA, USA
- Gabrielle Martin
- Biology Department, University of Massachusetts Amherst, Amherst, MA, USA
- Benjamin E Clayton
- Biology Department, University of Massachusetts Amherst, Amherst, MA, USA
- Josef G Trapani
- Neuroscience and Behavior Graduate Program, University of Massachusetts Amherst, Amherst, MA, USA
- Biology Department, Amherst College, Amherst, MA, USA
- Neuroscience Program, Amherst College, Amherst, MA, USA
- Gerald B Downes
- Biology Department, University of Massachusetts Amherst, Amherst, MA, USA.
- Molecular and Cellular Biology Graduate Program, University of Massachusetts Amherst, Amherst, MA, USA.
- Neuroscience and Behavior Graduate Program, University of Massachusetts Amherst, Amherst, MA, USA.
12. Zhou F, Yang X, Chen F, Chen L, Jiang Z, Zhu H, Heckel R, Wang H, Fei M, Zhou H. Cross-Skeleton Interaction Graph Aggregation Network for Representation Learning of Mouse Social Behaviour. IEEE Trans Image Process 2025; PP:623-638. PMID: 40030903; DOI: 10.1109/tip.2025.3528218.
Abstract
Automated social behaviour analysis of mice has become an increasingly popular research area in behavioural neuroscience. Recently, pose information (i.e., locations of keypoints or skeleton) has been used to interpret social behaviours of mice. Nevertheless, effective encoding and decoding of social interaction information underlying the keypoints of mice has rarely been investigated in existing methods. In particular, it is challenging to model complex social interactions between mice due to highly deformable body shapes and ambiguous movement patterns. To deal with the interaction modelling problem, we here propose a Cross-Skeleton Interaction Graph Aggregation Network (CS-IGANet) to learn abundant dynamics of freely interacting mice, where a Cross-Skeleton Node-level Interaction module (CS-NLI) is used to model multi-level interactions (i.e., intra-, inter- and cross-skeleton interactions). Furthermore, we design a novel Interaction-Aware Transformer (IAT) to dynamically learn the graph-level representation of social behaviours and update the node-level representation, guided by our proposed interaction-aware self-attention mechanism. Finally, to enhance the representation ability of our model, an auxiliary self-supervised learning task is proposed for measuring the similarity between cross-skeleton nodes. Experimental results on the standard CRIM13-Skeleton and our PDMB-Skeleton datasets show that our proposed model outperforms several other state-of-the-art approaches.
13. Kishi T, Kobayashi K, Sasagawa K, Sakimura K, Minato T, Kida M, Hata T, Kitagawa Y, Okuma C, Murata T. Automated analysis of a novel object recognition test in mice using image processing and machine learning. Behav Brain Res 2025; 476:115278. PMID: 39357746; DOI: 10.1016/j.bbr.2024.115278.
Abstract
The novel object recognition test (NORT) is one of the most commonly employed behavioral tests in experimental animals designed to evaluate an animal's interest in and recognition of novelty. However, manual procedures, which rely on researchers' observations, prevent high-throughput analysis. In this study, we developed an automated analysis method for NORT utilizing machine learning-assisted exploratory behavior detection. We recorded the exploratory behavior of the mice using a video camera. The coordinates of the mouse nose and tail base in recorded video files were detected using a pre-trained machine learning model, DeepLabCut. Each video was then segmented into frame images, which were categorized into "exploratory" or "non-exploratory" frames based on manual observation. Mouse feature vectors were calculated as vectors from the nose to the vertices of the object and were utilized for SVM training. The trained SVM effectively detected exploratory behaviors, showing a strong correlation with human observer assessments. Upon application to NORT, the duration of mouse exploratory behavior towards objects predicted by the SVM exhibited a significant correlation with the assessments made by human observers. The novelty discrimination index derived from the SVM predictions also aligned well with that from human observations.
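The classification step described above, turning DeepLabCut keypoints into nose-to-object-vertex vectors and labeling frames with an SVM, can be sketched with scikit-learn. The number of object vertices, the feature scaling, and the toy data below are assumptions rather than the authors' exact settings.

```python
# Sketch: frame-wise exploration classifier from nose-to-object-vertex vectors.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def frame_features(nose_xy, object_vertices):
    """Concatenate the vectors from the nose to each object vertex (one frame)."""
    return (np.asarray(object_vertices) - np.asarray(nose_xy)).ravel()

# Toy training data: features per frame plus manual exploratory/non-exploratory labels.
rng = np.random.default_rng(0)
vertices = np.array([[0, 0], [0, 10], [10, 10], [10, 0]])  # hypothetical object corners
near = [frame_features(rng.normal([5, 5], 2), vertices) for _ in range(100)]    # nose near object
far = [frame_features(rng.normal([40, 40], 5), vertices) for _ in range(100)]   # nose far away
X = np.vstack(near + far)
y = np.array([1] * 100 + [0] * 100)  # 1 = exploratory frame (from manual observation)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```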
Affiliation(s)
- Takuya Kishi
- Food and Animal Systemics, Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
- Koji Kobayashi
- Food and Animal Systemics, Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
- Kazuo Sasagawa
- Biological/Pharmacological Research Laboratories, Central Pharmaceutical Research Institute, Japan Tobacco Inc., 1-1 Murasaki-cho, Takatsuki, Osaka, Japan
- Katsuya Sakimura
- Biological/Pharmacological Research Laboratories, Central Pharmaceutical Research Institute, Japan Tobacco Inc., 1-1 Murasaki-cho, Takatsuki, Osaka, Japan
- Takashi Minato
- Food and Animal Systemics, Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
- Misato Kida
- Food and Animal Systemics, Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
- Takahiro Hata
- Innovation to implementation Laboratories, Central Pharmaceutical Research Institute, Japan Tobacco Inc., 1-1 Murasaki-cho, Takatsuki, Osaka, Japan
- Yoshihiro Kitagawa
- Research Planning, Central Pharmaceutical Research Institute, Japan Tobacco Inc., 1-1 Murasaki-cho, Takatsuki, Osaka, Japan
- Chihiro Okuma
- Biological/Pharmacological Research Laboratories, Central Pharmaceutical Research Institute, Japan Tobacco Inc., 1-1 Murasaki-cho, Takatsuki, Osaka, Japan
- Takahisa Murata
- Food and Animal Systemics, Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan; Animal Radiology, Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan; Veterinary Pharmacology, Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan.
14. Newman JP, Zhang J, Cuevas-López A, Miller NJ, Honda T, van der Goes MSH, Leighton AH, Carvalho F, Lopes G, Lakunina A, Siegle JH, Harnett MT, Wilson MA, Voigts J. ONIX: a unified open-source platform for multimodal neural recording and perturbation during naturalistic behavior. Nat Methods 2025; 22:187-192. PMID: 39528678; PMCID: PMC11725498; DOI: 10.1038/s41592-024-02521-1.
Abstract
Behavioral neuroscience faces two conflicting demands: long-duration recordings from large neural populations and unimpeded animal behavior. To meet this challenge we developed ONIX, an open-source data acquisition system with high data throughput (2 GB/s) and low closed-loop latencies (<1 ms) that uses a 0.3-mm thin tether to minimize behavioral impact. Head position and rotation are tracked in three dimensions and used to drive active commutation without torque measurements. ONIX can acquire data from combinations of passive electrodes, Neuropixels probes, head-mounted microscopes, cameras, three-dimensional trackers and other data sources. We performed uninterrupted, long (~7 h) neural recordings in mice as they traversed complex three-dimensional terrain, and multiday sleep-tracking recordings (~55 h). ONIX enabled exploration with similar mobility as nonimplanted animals, in contrast to conventional tethered systems, which have restricted movement. By combining long recordings with full mobility, our technology will enable progress on questions that require high-quality neural recordings during ethologically grounded behaviors.
Grants
- R44 NS127725 NINDS NIH HHS
- R21 EY028381 NEI NIH HHS
- T32 GM007753 NIGMS NIH HHS
- F32 MH107086 NIMH NIH HHS
- R01 NS106031 NINDS NIH HHS
- R01 MH118928 NIMH NIH HHS
- T32GM007753 U.S. Department of Health & Human Services | NIH | National Institute of General Medical Sciences (NIGMS)
- R21 NS103098 NINDS NIH HHS
- K99 NS118112 NINDS NIH HHS
- NIH 1K99NS118112-01 and Simons Center for the Social Brain at MIT postdoctoral fellowship. This research was partially funded by the Howard Hughes Medical Institute at the Janelia Research Campus.
- U.S. Department of Health & Human Services | NIH | National Institute of General Medical Sciences (NIGMS)
- National Institute of General Medical Sciences T32GM007753 (E.H.S.T), the Center for Brains, Minds and Machines (CBMM) at MIT, funded by NSF STC award CCF-1231216, and NIH 1R44NS127725-01 to Open Ephys Inc.
- NIH 1R21EY028381
- Picower Fellowship by JPB Foundation and MIT Picower Institute, Brain Science Foundation Research Grant Award, Kavli-Grass-MBL Fellowship by Kavli Foundation, Grass Foundation, and Marine Biological Laboratory (MBL), Osamu Hayaishi Memorial Scholarship for Study Abroad, Uehara Memorial Foundation Overseas Fellowship, and Japan Society for the Promotion of Science (JSPS) Overseas Fellowship.
- Mathworks Graduate Fellowship
- Anna Lakunina and Joshua H. Siegle would like to thank the Allen Institute founder, Paul G. Allen, for his vision, encouragement, and support.
- NIH R01NS106031 and R21NS103098
- National Science Foundation STC award CCF-1231216, and NIH TR01-GM10498, NIH R01MH118928 and Picower Institute Innovation Fund.
Affiliation(s)
- Jonathan P Newman
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- The Picower Institute for Learning and Memory, MIT, Cambridge, MA, USA
- Open Ephys, Atlanta, GA, USA
- Jie Zhang
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- The Picower Institute for Learning and Memory, MIT, Cambridge, MA, USA
- Aarón Cuevas-López
- Open Ephys, Atlanta, GA, USA
- Department of Electrical Engineering, Polytechnic University of Valencia, Valencia, Spain
- Open Ephys Production Site, Lisbon, Portugal
- Nicholas J Miller
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Takato Honda
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- The Picower Institute for Learning and Memory, MIT, Cambridge, MA, USA
- Marie-Sophie H van der Goes
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Anna Lakunina
- Allen Institute for Neural Dynamics, Seattle, WA, USA
- Joshua H Siegle
- Open Ephys, Atlanta, GA, USA
- Allen Institute for Neural Dynamics, Seattle, WA, USA
- Mark T Harnett
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Matthew A Wilson
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- The Picower Institute for Learning and Memory, MIT, Cambridge, MA, USA
- Jakob Voigts
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA.
- Open Ephys, Atlanta, GA, USA.
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA.
- HHMI Janelia Research Campus, Ashburn, VA, USA.
15. Blau A, Schaffer ES, Mishra N, Miska NJ, International Brain Laboratory, Paninski L, Whiteway MR. A study of animal action segmentation algorithms across supervised, unsupervised, and semi-supervised learning paradigms. arXiv 2024; arXiv:2407.16727v2. PMID: 39108294; PMCID: PMC11302674.
Abstract
Action segmentation of behavioral videos is the process of labeling each frame as belonging to one or more discrete classes, and is a crucial component of many studies that investigate animal behavior. A wide range of algorithms exist to automatically parse discrete animal behavior, encompassing supervised, unsupervised, and semi-supervised learning paradigms. These algorithms - which include tree-based models, deep neural networks, and graphical models - differ widely in their structure and assumptions on the data. Using four datasets spanning multiple species - fly, mouse, and human - we systematically study how the outputs of these various algorithms align with manually annotated behaviors of interest. Along the way, we introduce a semi-supervised action segmentation model that bridges the gap between supervised deep neural networks and unsupervised graphical models. We find that fully supervised temporal convolutional networks with the addition of temporal information in the observations perform the best on our supervised metrics across all datasets.
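The best-performing family in this comparison, a fully supervised temporal convolutional network that assigns a behavior class to every frame, can be sketched in a few lines of PyTorch. Layer count, dilation schedule, and feature dimensionality are illustrative assumptions.

```python
# Sketch: dilated temporal convolutional network for frame-wise action segmentation.
import torch
import torch.nn as nn

class FrameTCN(nn.Module):
    def __init__(self, in_feats=16, hidden=64, n_classes=5, n_layers=4):
        super().__init__()
        layers = [nn.Conv1d(in_feats, hidden, kernel_size=1)]
        for i in range(n_layers):
            d = 2 ** i                                   # exponentially growing dilation
            layers += [nn.Conv1d(hidden, hidden, kernel_size=3, padding=d, dilation=d),
                       nn.ReLU()]
        self.backbone = nn.Sequential(*layers)
        self.classifier = nn.Conv1d(hidden, n_classes, kernel_size=1)

    def forward(self, x):            # x: (batch, time, features), e.g. pose keypoints per frame
        h = self.backbone(x.transpose(1, 2))
        return self.classifier(h).transpose(1, 2)        # per-frame class logits

model = FrameTCN()
logits = model(torch.randn(2, 500, 16))   # 2 videos, 500 frames, 16 pose features each
labels = logits.argmax(dim=-1)            # predicted behavior class per frame
print(logits.shape, labels.shape)         # torch.Size([2, 500, 5]) torch.Size([2, 500])
```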
Affiliation(s)
- Ari Blau
- Department of Statistics, Columbia University
- Liam Paninski
- Department of Statistics, Columbia University
- Zuckerman Institute, Columbia University
16. Liang Y, Sun Z, Chiu K, Hu Y. Effective identification of Alzheimer's disease in mouse models via deep learning and motion analysis. Heliyon 2024; 10:e39353. PMID: 39687151; PMCID: PMC11647830; DOI: 10.1016/j.heliyon.2024.e39353.
Abstract
Spatial disorientation is an early symptom of Alzheimer's disease (AD). Detecting this impairment effectively in animal models can provide valuable insights into the disease and reduce experimental burdens. We have developed a markerless motion analysis system (MMAS) using deep learning techniques for the Morris water maze test. This system allows for precise analysis of behaviors and body movements from video recordings. Using the MMAS, we identified unilateral head-turning and tail-wagging preferences in AD mice, which distinguished them from wild-type mice with greater accuracy than traditional behavioral parameters. Furthermore, the cumulative turning and wagging angles were linearly correlated with escape latency and cognitive scores, demonstrating comparable effectiveness in differentiating AD mice. These findings underscore the potential of motion analysis as an advanced method for improving the effectiveness, sensitivity, and interpretability of AD mouse identification, ultimately aiding in disease diagnosis and drug development.
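The cumulative turning angle used above as a discriminative readout can be computed from per-frame head-direction vectors. The keypoint convention and sign handling in this sketch are assumptions for illustration, not the MMAS implementation; the per-animal totals could then be correlated with escape latency or cognitive scores.

```python
# Sketch: cumulative turning angle from per-frame head-direction vectors.
import numpy as np

def cumulative_turning_angle(headings_xy):
    """headings_xy: (n_frames, 2) head-direction vectors; returns summed |delta angle| in degrees."""
    angles = np.arctan2(headings_xy[:, 1], headings_xy[:, 0])
    d = np.diff(angles)
    d = (d + np.pi) % (2 * np.pi) - np.pi   # wrap each step into (-pi, pi]
    return np.degrees(np.abs(d).sum())

# Toy trajectory: a head vector rotating smoothly through 90 degrees.
t = np.linspace(0, np.pi / 2, 50)
headings = np.column_stack([np.cos(t), np.sin(t)])
print(cumulative_turning_angle(headings))  # ~90
```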
Affiliation(s)
- Yuanhao Liang
- Department of Orthopaedics & Traumatology, School of Clinical Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Orthopedics Center, The University of Hong Kong-Shenzhen Hospital, Shenzhen, 518053, China
- AI and Big Data Lab, The University of Hong Kong-Shenzhen Hospital, Shenzhen, G.D, 518053, China
- Zhongqing Sun
- Department of Neurology, Xijing Hospital, Fourth Military Medical University, Xi'an, 710032, China
- Department of Ophthalmology, School of Clinical Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Kin Chiu
- Department of Ophthalmology, School of Clinical Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- State Key Lab of Brain and Cognitive Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Department of Psychology, The University of Hong Kong, Hong Kong SAR, China
- Yong Hu
- Department of Orthopaedics & Traumatology, School of Clinical Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Orthopedics Center, The University of Hong Kong-Shenzhen Hospital, Shenzhen, 518053, China
- AI and Big Data Lab, The University of Hong Kong-Shenzhen Hospital, Shenzhen, G.D, 518053, China
17. Battivelli D, Fan Z, Hu H, Gross CT. How can ethology inform the neuroscience of fear, aggression and dominance? Nat Rev Neurosci 2024; 25:809-819. PMID: 39402310; DOI: 10.1038/s41583-024-00858-2.
Abstract
The study of behaviour is dominated by two approaches. On the one hand, ethologists aim to understand how behaviour promotes adaptation to natural contexts. On the other, neuroscientists aim to understand the molecular, cellular, circuit and psychological origins of behaviour. These two complementary approaches must be combined to arrive at a full understanding of behaviour in its natural setting. However, methodological limitations have restricted most neuroscientific research to the study of how discrete sensory stimuli elicit simple behavioural responses under controlled laboratory conditions that are only distantly related to those encountered in real life. Fortunately, the recent advent of neural monitoring and manipulation tools adapted for use in freely behaving animals has enabled neuroscientists to incorporate naturalistic behaviours into their studies and to begin to consider ethological questions. Here, we examine the promises and pitfalls of this trend by describing how investigations of rodent fear, aggression and dominance behaviours are changing to take advantage of an ethological appreciation of behaviour. We lay out current impediments to this approach and propose a framework for the evolution of the field that will allow us to take maximal advantage of an ethological approach to neuroscience and to increase its relevance for understanding human behaviour.
Affiliation(s)
- Dorian Battivelli
- Epigenetics & Neurobiology Unit, EMBL Rome, European Molecular Biology Laboratory, Monterotondo, Italy
- Zhengxiao Fan
- School of Brain Science and Brain Medicine, New Cornerstone Science Laboratory, Zhejiang University School of Medicine, Hangzhou, China
- Hailan Hu
- School of Brain Science and Brain Medicine, New Cornerstone Science Laboratory, Zhejiang University School of Medicine, Hangzhou, China.
- Cornelius T Gross
- Epigenetics & Neurobiology Unit, EMBL Rome, European Molecular Biology Laboratory, Monterotondo, Italy.
18. Becchio C, Pullar K, Scaliti E, Panzeri S. Kinematic coding: Measuring information in naturalistic behaviour. Phys Life Rev 2024; 51:442-458. PMID: 39603216; DOI: 10.1016/j.plrev.2024.11.009.
Abstract
Recent years have seen an explosion of interest in naturalistic behaviour and in machine learning tools for automatically tracking it. However, questions about what to measure, how to measure it, and how to relate naturalistic behaviour to neural activity and cognitive processes remain unresolved. In this Perspective, we propose a general experimental and computational framework - kinematic coding - for measuring how information about cognitive states is encoded in structured patterns of behaviour and how this information is read out by others during social interactions. This framework enables the design of new experiments and the generation of testable hypotheses that link behaviour, cognition, and neural activity at the single-trial level. Researchers can employ this framework to identify single-subject, single-trial encoding and readout computations and address meaningful questions about how information encoded in bodily motion is transmitted and communicated.
Affiliation(s)
- Cristina Becchio
- Department of Neurology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany.
| | - Kiri Pullar
- Department of Neurology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany; Institute for Neural Information Processing, Center for Molecular Neurobiology Hamburg, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
| | - Eugenio Scaliti
- Department of Neurology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany; Department of Management "Valter Cantino", University of Turin, Turin, Italy; Human Science and Technologies, University of Turin, Turin, Italy
| | - Stefano Panzeri
- Institute for Neural Information Processing, Center for Molecular Neurobiology Hamburg, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany.
| |
Collapse
|
19
|
Hsieh CM, Hsu CH, Chen JK, Liao LD. AI-powered home cage system for real-time tracking and analysis of rodent behavior. iScience 2024; 27:111223. [PMID: 39605925 PMCID: PMC11600061 DOI: 10.1016/j.isci.2024.111223] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2024] [Revised: 06/13/2024] [Accepted: 10/18/2024] [Indexed: 11/29/2024] Open
Abstract
Researchers in animal behavior and neuroscience devote considerable time to observing rodent behavior and physiological responses, and AI monitoring systems can reduce this personnel workload. This study presents the RodentWatch (RW) system, which leverages deep learning to automatically identify experimental animal behaviors in home cage environments. A single multifunctional camera and edge device are installed inside the animal's home cage, allowing continuous real-time monitoring of the animal's behavior, position, and body temperature for extended periods. We investigated the identification of drinking and resting behaviors in rats, with recognition accuracy enhanced through contextual object labeling and modified non-maximum suppression (NMS) schemes. Two tests-a light cycle change test and a sucrose preference test-were conducted to evaluate the usability of this system in rat behavioral experiments. This system enables notable advancements in image-based behavior recognition for living rodents.
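The abstract names a modified non-maximum suppression (NMS) scheme but does not describe the modification; the standard greedy NMS that such schemes build on can be sketched as follows (a generic illustration, not the RodentWatch code).

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, each given as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) < iou_thresh]
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # the two overlapping boxes collapse to one detection
```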
Collapse
Affiliation(s)
- Chia-Ming Hsieh
- Laboratory Animal Center, National Health Research Institutes, 35, Keyan Road, Zhunan Town, Miaoli County 350401, Taiwan
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu City 300044, Taiwan
| | - Ching-Han Hsu
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu City 300044, Taiwan
| | - Jen-Kun Chen
- Laboratory Animal Center, National Health Research Institutes, 35, Keyan Road, Zhunan Town, Miaoli County 350401, Taiwan
- Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, 35, Keyan Road, Zhunan Town, Miaoli County 350401, Taiwan
| | - Lun-De Liao
- Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, 35, Keyan Road, Zhunan Town, Miaoli County 350401, Taiwan
| |
Collapse
|
20
|
Kazmierska-Grebowska P, Żakowski W, Myślińska D, Sahu R, Jankowski MM. Revisiting serotonin's role in spatial memory: A call for sensitive analytical approaches. Int J Biochem Cell Biol 2024; 176:106663. [PMID: 39321568 DOI: 10.1016/j.biocel.2024.106663] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2024] [Revised: 09/17/2024] [Accepted: 09/17/2024] [Indexed: 09/27/2024]
Abstract
The serotonergic system is involved in various psychiatric and neurological conditions, with serotonergic drugs often used in treatment. These conditions frequently affect spatial memory, which can serve as a model of declarative memory due to well-known cellular components and advanced methods that track neural activity and behavior with high temporal resolution. However, most findings on serotonin's effects on spatial learning and memory come from studies lacking refined analytical techniques and modern approaches needed to uncover the underlying neuronal mechanisms. This In Focus review critically investigates available studies to identify areas for further exploration. It finds that well-established behavioral models could yield more insights with modern tracking and data analysis approaches, while the cellular aspects of spatial memory remain underexplored. The review highlights the complex role of serotonin in spatial memory, which holds the potential for better understanding and treating memory-related disorders.
Collapse
Affiliation(s)
| | - Witold Żakowski
- Department of Animal and Human Physiology, Faculty of Biology, University of Gdansk, Gdansk, Poland
| | - Dorota Myślińska
- Department of Animal and Human Physiology, Faculty of Biology, University of Gdansk, Gdansk, Poland
| | - Ravindra Sahu
- BioTechMed Center, Multimedia Systems Department, Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Gdansk, Poland
| | - Maciej M Jankowski
- BioTechMed Center, Multimedia Systems Department, Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Gdansk, Poland.
| |
Collapse
|
21
|
Lin S, Gillis WF, Weinreb C, Zeine A, Jones SC, Robinson EM, Markowitz J, Datta SR. Characterizing the structure of mouse behavior using Motion Sequencing. Nat Protoc 2024; 19:3242-3291. [PMID: 38926589 PMCID: PMC11552546 DOI: 10.1038/s41596-024-01015-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Accepted: 04/12/2024] [Indexed: 06/28/2024]
Abstract
Spontaneous mouse behavior is composed from repeatedly used modules of movement (e.g., rearing, running or grooming) that are flexibly placed into sequences whose content evolves over time. By identifying behavioral modules and the order in which they are expressed, researchers can gain insight into the effect of drugs, genes, context, sensory stimuli and neural activity on natural behavior. Here we present a protocol for performing Motion Sequencing (MoSeq), an ethologically inspired method that uses three-dimensional machine vision and unsupervised machine learning to decompose spontaneous mouse behavior into a series of elemental modules called 'syllables'. This protocol is based upon a MoSeq pipeline that includes modules for depth video acquisition, data preprocessing and modeling, as well as a standardized set of visualization tools. Users are provided with instructions and code for building a MoSeq imaging rig and acquiring three-dimensional video of spontaneous mouse behavior for submission to the modeling framework; the outputs of this protocol include syllable labels for each frame of the video data as well as summary plots describing how often each syllable was used and how syllables transitioned from one to the other. In addition, we provide instructions for analyzing and visualizing the outputs of keypoint-MoSeq, a recently developed variant of MoSeq that can identify behavioral motifs from keypoints identified from standard (rather than depth) video. This protocol and the accompanying pipeline significantly lower the bar for users without extensive computational ethology experience to adopt this unsupervised, data-driven approach to characterize mouse behavior.
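The protocol's outputs are per-frame syllable labels together with summaries of syllable usage and syllable-to-syllable transitions. A minimal sketch of how such summaries can be computed from a label sequence (illustrative only, not the MoSeq pipeline itself):

```python
import numpy as np

def syllable_summary(labels, n_syllables=None):
    """Usage frequencies and row-normalized transition matrix from per-frame syllable labels."""
    labels = np.asarray(labels)
    n = n_syllables or labels.max() + 1
    usage = np.bincount(labels, minlength=n) / len(labels)
    # Count transitions only where the syllable actually changes between frames.
    change = labels[1:] != labels[:-1]
    trans = np.zeros((n, n))
    np.add.at(trans, (labels[:-1][change], labels[1:][change]), 1)
    row_sums = trans.sum(axis=1, keepdims=True)
    trans = np.divide(trans, row_sums, out=np.zeros_like(trans), where=row_sums > 0)
    return usage, trans

labels = np.array([0, 0, 0, 2, 2, 1, 1, 1, 0, 0])   # toy per-frame syllable labels
usage, trans = syllable_summary(labels)
print(usage)    # fraction of frames spent in each syllable
print(trans)    # probability of transitioning from syllable i to syllable j
```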
Collapse
Affiliation(s)
- Sherry Lin
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | | | - Caleb Weinreb
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | - Ayman Zeine
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | - Samuel C Jones
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | - Emma M Robinson
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | - Jeffrey Markowitz
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
| | | |
Collapse
|
22
|
McKeown CR, Ta AC, Marshall CL, McLain NJ, Archuleta KJ, Cline HT. X-Tracker: Automated Analysis of Xenopus Tadpole Visual Avoidance Behavior. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.10.10.617688. [PMID: 39416226 PMCID: PMC11482948 DOI: 10.1101/2024.10.10.617688] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 10/19/2024]
Abstract
Xenopus laevis tadpoles exhibit an avoidance behavior when they encounter a moving visual stimulus. A visual avoidance event occurs when a moving object approaches the eye of a free-swimming animal at an approximately 90-degree angle and the animal turns in response to the encounter. Analysis of this behavior requires tracking both the free-swimming animal and the moving visual stimulus, prior to and after the encounter. Previous automated tracking software does not discriminate the moving animal from the moving stimulus, requiring time-consuming manual analysis. Here we present X-Tracker, an automated behavior tracking code that can detect and discriminate moving visual stimuli and free-swimming animals and score encounters and avoidance events. X-Tracker is as accurate as human analysis without the human time commitment. We also present software improvements to our previous visual stimulus presentation and image capture that optimize videos for automated analysis, and hardware improvements that increase the number of animal-stimulus encounters. X-Tracker is a high-throughput, unbiased, and significantly time-saving analysis system that will greatly facilitate visual avoidance behavior analysis of Xenopus laevis tadpoles, and potentially other free-swimming organisms. The tool is available at https://github.com/ClineLab/Tadpole-Behavior-Automation.
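The released tool's detection logic is not described in the abstract; the underlying problem, segmenting two moving objects and telling the animal apart from the stimulus, can be sketched with standard OpenCV operations. The file name, thresholds, and size-based discrimination rule below are assumptions for illustration, not the X-Tracker code.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("tadpole_trial.avi")        # hypothetical input video
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
kernel = np.ones((3, 3), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                                        # foreground (moving) pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)         # remove speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > 20]
    if not blobs:
        continue
    areas = [cv2.contourArea(c) for c in blobs]
    for c, a in zip(blobs, areas):
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # Toy rule: treat the larger blob as the tadpole, the smaller one as the dot stimulus.
        label = "animal" if a == max(areas) else "stimulus"
        print(label, round(cx, 1), round(cy, 1))
cap.release()
```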
Collapse
Affiliation(s)
| | - Aaron C Ta
- Department of Neuroscience, Scripps Research, La Jolla, CA, USA
| | | | | | | | - Hollis T Cline
- Department of Neuroscience, Scripps Research, La Jolla, CA, USA
| |
Collapse
|
23
|
Pratt BG, Lee SYJ, Chou GM, Tuthill JC. Miniature linear and split-belt treadmills reveal mechanisms of adaptive motor control in walking Drosophila. Curr Biol 2024; 34:4368-4381.e5. [PMID: 39216486 PMCID: PMC11461123 DOI: 10.1016/j.cub.2024.08.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2024] [Revised: 07/08/2024] [Accepted: 08/05/2024] [Indexed: 09/04/2024]
Abstract
To navigate complex environments, walking animals must detect and overcome unexpected perturbations. One technical challenge when investigating adaptive locomotion is measuring behavioral responses to precise perturbations during naturalistic walking; another is that manipulating neural activity in sensorimotor circuits often reduces spontaneous locomotion. To overcome these obstacles, we introduce miniature treadmill systems for coercing locomotion and tracking 3D kinematics of walking Drosophila. By systematically comparing walking in three experimental setups, we show that flies compelled to walk on the linear treadmill have similar stepping kinematics to freely walking flies, while kinematics of tethered walking flies are subtly different. Genetically silencing mechanosensory neurons altered step kinematics of flies walking on the linear treadmill across all speeds. We also discovered that flies can maintain a forward heading on a split-belt treadmill by specifically adapting the step distance of their middle legs. These findings suggest that proprioceptive feedback contributes to leg motor control irrespective of walking speed and that the fly's middle legs play a specialized role in stabilizing locomotion.
Collapse
Affiliation(s)
- Brandon G Pratt
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA
| | - Su-Yee J Lee
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA
| | - Grant M Chou
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA
| | - John C Tuthill
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA.
| |
Collapse
|
24
|
Marx V. 20 years of Nature Methods: how some papers shaped science and careers. Nat Methods 2024; 21:1786-1791. [PMID: 39384985 DOI: 10.1038/s41592-024-02452-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/11/2024]
|
25
|
Nagy M, Davidson JD, Vásárhelyi G, Ábel D, Kubinyi E, El Hady A, Vicsek T. Long-term tracking of social structure in groups of rats. Sci Rep 2024; 14:22857. [PMID: 39353967 PMCID: PMC11445254 DOI: 10.1038/s41598-024-72437-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2024] [Accepted: 09/06/2024] [Indexed: 10/03/2024] Open
Abstract
Rodents serve as an important model for examining both individual and collective behavior. Dominance within rodent social structures can determine access to critical resources, such as food and mating opportunities. Yet, many aspects of the intricate interplay between individual behaviors and the resulting group social hierarchy, especially its evolution over time, remain unexplored. In this study, we utilized an automated tracking system that continuously monitored groups of male rats for over 250 days to enable an in-depth analysis of individual behavior and the overarching group dynamic. We describe the evolution of social structures within a group and additionally investigate how past behaviors influence the emergence of new social hierarchies when group composition and experimental area change. Notably, we find that conventional individual and pairwise tests exhibit a weak correlation with group behavior, highlighting their limited accuracy in predicting behavioral outcomes in a collective context. These results emphasize the context-dependence of social behavior as an emergent property of interactions within a group and highlight the need to measure and quantify social behavior in more naturalistic environments.
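The abstract does not name the hierarchy metric used; one common way to turn a long stream of pairwise dominance interactions into a dynamic ranking is an Elo-style update, sketched here with hypothetical animals and outcomes rather than the study's data.

```python
def elo_update(ratings, winner, loser, k=16.0):
    """Update Elo-style dominance scores after one interaction (winner displaces loser)."""
    expected_win = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400.0))
    ratings[winner] += k * (1.0 - expected_win)
    ratings[loser] -= k * (1.0 - expected_win)
    return ratings

rats = {r: 1000.0 for r in ["A", "B", "C", "D"]}                  # starting scores
interactions = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "A")]   # (winner, loser) events
for winner, loser in interactions:
    rats = elo_update(rats, winner, loser)

ranking = sorted(rats, key=rats.get, reverse=True)
print(ranking, {r: round(s, 1) for r, s in rats.items()})
```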
Collapse
Affiliation(s)
- Máté Nagy
- Department of Biological Physics, Eötvös Loránd University, Budapest, Hungary.
- MTA-ELTE 'Lendület' Collective Behaviour Research Group, Hungarian Academy of Sciences, Budapest, Hungary.
- MTA-ELTE Statistical and Biological Physics Research Group, Hungarian Academy of Sciences, Budapest, Hungary.
- Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Constance, Germany.
- Department of Biology, University of Konstanz, Constance, Germany.
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Constance, Germany.
| | - Jacob D Davidson
- Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Constance, Germany.
- Department of Biology, University of Konstanz, Constance, Germany.
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Constance, Germany.
| | - Gábor Vásárhelyi
- Department of Biological Physics, Eötvös Loránd University, Budapest, Hungary
- MTA-ELTE Statistical and Biological Physics Research Group, Hungarian Academy of Sciences, Budapest, Hungary
| | - Dániel Ábel
- Department of Biological Physics, Eötvös Loránd University, Budapest, Hungary
| | - Enikő Kubinyi
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary
- ELTE NAP Canine Brain Research Group, Budapest, Hungary
- MTA-ELTE Lendület 'Momentum' Companion Animal Research Group, Budapest, Hungary
| | - Ahmed El Hady
- Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Constance, Germany
- Department of Biology, University of Konstanz, Constance, Germany
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Constance, Germany
| | - Tamás Vicsek
- Department of Biological Physics, Eötvös Loránd University, Budapest, Hungary
- MTA-ELTE Statistical and Biological Physics Research Group, Hungarian Academy of Sciences, Budapest, Hungary
| |
Collapse
|
26
|
Vijatovic D, Toma FA, Harrington ZPM, Sommer C, Hauschild R, Trevisan AJ, Chapman P, Julseth MJ, Brenner-Morton S, Gabitto MI, Dasen JS, Bikoff JB, Sweeney LB. Spinal neuron diversity scales exponentially with swim-to-limb transformation during frog metamorphosis. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.09.20.614050. [PMID: 39345366 PMCID: PMC11430061 DOI: 10.1101/2024.09.20.614050] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 10/01/2024]
Abstract
Vertebrates exhibit a wide range of motor behaviors, ranging from swimming to complex limb-based movements. Here we take advantage of frog metamorphosis, which captures a swim-to-limb-based movement transformation during the development of a single organism, to explore changes in the underlying spinal circuits. We find that the tadpole spinal cord contains small and largely homogeneous populations of motor neurons (MNs) and V1 interneurons (V1s) at early escape swimming stages. These neuronal populations only modestly increase in number and subtype heterogeneity with the emergence of free swimming. In contrast, during frog metamorphosis and the emergence of limb movement, there is a dramatic expansion of MN and V1 interneuron number and transcriptional heterogeneity, culminating in cohorts of neurons that exhibit striking molecular similarity to mammalian motor circuits. CRISPR/Cas9-mediated gene disruption of the limb MN and V1 determinants FoxP1 and Engrailed-1, respectively, results in severe but selective deficits in tail and limb function. Our work thus demonstrates that neural diversity scales exponentially with increasing behavioral complexity and illustrates striking evolutionary conservation in the molecular organization and function of motor circuits across species.
Collapse
Affiliation(s)
- David Vijatovic
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | | | | | | | - Robert Hauschild
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - Alexandra J. Trevisan
- Department of Developmental Neurobiology, St. Jude Children’s Research Hospital, Memphis, TN, USA
| | - Phillip Chapman
- Department of Developmental Neurobiology, St. Jude Children’s Research Hospital, Memphis, TN, USA
| | - Mara J. Julseth
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | | | - Mariano I. Gabitto
- Allen Institute for Brain Science, Seattle, WA, USA
- Department of Statistics, University of Washington, Seattle, WA, 98109, USA
| | - Jeremy S. Dasen
- NYU Neuroscience Institute, Department of Neuroscience and Physiology, NYU School of Medicine, New York, NY, USA
| | - Jay B. Bikoff
- Department of Developmental Neurobiology, St. Jude Children’s Research Hospital, Memphis, TN, USA
| | - Lora B. Sweeney
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| |
Collapse
|
27
|
Pan Y, Lauder GV. Combining Computational Fluid Dynamics and Experimental Data to Understand Fish Schooling Behavior. Integr Comp Biol 2024; 64:753-768. [PMID: 38760887 DOI: 10.1093/icb/icae044] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2024] [Revised: 05/12/2024] [Accepted: 05/14/2024] [Indexed: 05/20/2024] Open
Abstract
Understanding the flow physics behind fish schooling poses significant challenges due to the difficulties in directly measuring hydrodynamic performance and the three-dimensional, chaotic, and complex flow structures generated by collective moving organisms. Numerous previous simulations and experiments have utilized computational, mechanical, or robotic models to represent live fish. And existing studies of live fish schools have contributed significantly to dissecting the complexities of fish schooling. But the scarcity of combined approaches that include both computational and experimental studies, ideally of the same fish schools, has limited our ability to understand the physical factors that are involved in fish collective behavior. This underscores the necessity of developing new approaches to working directly with live fish schools. An integrated method that combines experiments on live fish schools with computational fluid dynamics (CFD) simulations represents an innovative method of studying the hydrodynamics of fish schooling. CFD techniques can deliver accurate performance measurements and high-fidelity flow characteristics for comprehensive analysis. Concurrently, experimental approaches can capture the precise locomotor kinematics of fish and offer additional flow information through particle image velocimetry (PIV) measurements, potentially enhancing the accuracy and efficiency of CFD studies via advanced data assimilation techniques. The flow patterns observed in PIV experiments with fish schools and the complex hydrodynamic interactions revealed by integrated analyses highlight the complexity of fish schooling, prompting a reevaluation of the classic Weihs model of school dynamics. The synergy between CFD models and experimental data grants us comprehensive insights into the flow dynamics of fish schools, facilitating the evaluation of their functional significance and enabling comparative studies of schooling behavior. In addition, we consider the challenges in developing integrated analytical methods and suggest promising directions for future research.
Collapse
Affiliation(s)
- Yu Pan
- Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA
- Museum of Comparative Zoology, Harvard University, Cambridge, MA 02138, USA
| | - George V Lauder
- Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA
- Museum of Comparative Zoology, Harvard University, Cambridge, MA 02138, USA
| |
Collapse
|
28
|
He Y, Mulqueeney JM, Watt EC, Salili-James A, Barber NS, Camaiti M, Hunt ESE, Kippax-Chui O, Knapp A, Lanzetti A, Rangel-de Lázaro G, McMinn JK, Minus J, Mohan AV, Roberts LE, Adhami D, Grisan E, Gu Q, Herridge V, Poon STS, West T, Goswami A. Opportunities and Challenges in Applying AI to Evolutionary Morphology. Integr Org Biol 2024; 6:obae036. [PMID: 40433986 PMCID: PMC12082097 DOI: 10.1093/iob/obae036] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2024] [Revised: 08/07/2024] [Accepted: 09/20/2024] [Indexed: 05/29/2025] Open
Abstract
Artificial intelligence (AI) is poised to revolutionize many aspects of science, including the study of evolutionary morphology. While classical AI methods such as principal component analysis and cluster analysis have been commonplace in the study of evolutionary morphology for decades, recent years have seen increasing application of deep learning to ecology and evolutionary biology. As digitized specimen databases become increasingly prevalent and openly available, AI is offering vast new potential to circumvent long-standing barriers to rapid, big data analysis of phenotypes. Here, we review the current state of AI methods available for the study of evolutionary morphology, which are most developed in the area of data acquisition and processing. We introduce the main available AI techniques, categorizing them into 3 stages based on their order of appearance: (1) machine learning, (2) deep learning, and (3) the most recent advancements in large-scale models and multimodal learning. Next, we present case studies of existing approaches using AI for evolutionary morphology, including image capture and segmentation, feature recognition, morphometrics, and phylogenetics. We then discuss the prospectus for near-term advances in specific areas of inquiry within this field, including the potential of new AI methods that have not yet been applied to the study of morphological evolution. In particular, we note key areas where AI remains underutilized and could be used to enhance studies of evolutionary morphology. This combination of current methods and potential developments has the capacity to transform the evolutionary analysis of the organismal phenotype into evolutionary phenomics, leading to an era of "big data" that aligns the study of phenotypes with genomics and other areas of bioinformatics.
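As an illustration of the "classical" end of the toolkit the review describes, principal component analysis of landmark coordinates remains a workhorse of morphometrics. The sketch below uses toy landmarks and omits the Procrustes alignment that would normally precede PCA; it is not code from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy dataset: 30 specimens, 10 two-dimensional landmarks each, flattened to 20 features.
rng = np.random.default_rng(1)
landmarks = rng.normal(size=(30, 10, 2))
X = landmarks.reshape(30, -1)

pca = PCA(n_components=3)
scores = pca.fit_transform(X)              # specimen positions in a low-dimensional morphospace
print(pca.explained_variance_ratio_)       # variance captured by each shape axis
```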
Collapse
Affiliation(s)
- Y He
- Life Sciences, Natural History Museum, London, UK
| | - J M Mulqueeney
- Life Sciences, Natural History Museum, London, UK
- Department of Ocean & Earth Science, National Oceanography Centre Southampton, University of Southampton, Southampton, UK
| | - E C Watt
- Life Sciences, Natural History Museum, London, UK
- Division of Biosciences, University College London, London, UK
| | - A Salili-James
- AI and Innovation, Natural History Museum, London, UK
- Digital, Data and Informatics, Natural History Museum, London, UK
| | - N S Barber
- Life Sciences, Natural History Museum, London, UK
- Department of Anthropology, University College London, London, UK
| | - M Camaiti
- Life Sciences, Natural History Museum, London, UK
| | - E S E Hunt
- Life Sciences, Natural History Museum, London, UK
- Department of Life Sciences, Imperial College London, London, UK
- Grantham Institute, Imperial College London, London, UK
| | - O Kippax-Chui
- Life Sciences, Natural History Museum, London, UK
- Grantham Institute, Imperial College London, London, UK
- Department of Earth Science and Engineering, Imperial College London, London, UK
| | - A Knapp
- Life Sciences, Natural History Museum, London, UK
- Centre for Integrative Anatomy, University College London, London, UK
| | - A Lanzetti
- Life Sciences, Natural History Museum, London, UK
- School of Geography, Earth and Environmental Sciences, University of Birmingham, Birmingham, UK
| | - G Rangel-de Lázaro
- Life Sciences, Natural History Museum, London, UK
- School of Oriental and African Studies, London, UK
| | - J K McMinn
- Life Sciences, Natural History Museum, London, UK
- Department of Earth Sciences, University of Oxford, Oxford, UK
| | - J Minus
- Life Sciences, Natural History Museum, London, UK
- School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
| | - A V Mohan
- Life Sciences, Natural History Museum, London, UK
- Biodiversity Genomics Laboratory, Institute of Biology, University of Neuchâtel, Neuchâtel, Switzerland
| | - L E Roberts
- Life Sciences, Natural History Museum, London, UK
| | - D Adhami
- Life Sciences, Natural History Museum, London, UK
- Department of Life Sciences, Imperial College London, London, UK
- Imaging and Analysis Centre, Natural History Museum, London, UK
| | - E Grisan
- School of Engineering, London South Bank University, London, UK
| | - Q Gu
- AI and Innovation, Natural History Museum, London, UK
- Digital, Data and Informatics, Natural History Museum, London, UK
| | - V Herridge
- Life Sciences, Natural History Museum, London, UK
- School of Biosciences, University of Sheffield, Sheffield, UK
| | - S T S Poon
- AI and Innovation, Natural History Museum, London, UK
- Digital, Data and Informatics, Natural History Museum, London, UK
| | - T West
- Centre for Integrative Anatomy, University College London, London, UK
- Imaging and Analysis Centre, Natural History Museum, London, UK
| | - A Goswami
- Life Sciences, Natural History Museum, London, UK
| |
Collapse
|
29
|
Costa AC, Ahamed T, Jordan D, Stephens GJ. A Markovian dynamics for Caenorhabditis elegans behavior across scales. Proc Natl Acad Sci U S A 2024; 121:e2318805121. [PMID: 39083417 PMCID: PMC11317559 DOI: 10.1073/pnas.2318805121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2023] [Accepted: 07/01/2024] [Indexed: 08/02/2024] Open
Abstract
How do we capture the breadth of behavior in animal movement, from rapid body twitches to aging? Using high-resolution videos of the nematode worm Caenorhabditis elegans, we show that a single dynamics connects posture-scale fluctuations with trajectory diffusion and longer-lived behavioral states. We take short posture sequences as an instantaneous behavioral measure, fixing the sequence length for maximal prediction. Within the space of posture sequences, we construct a fine-scale, maximum entropy partition so that transitions among microstates define a high-fidelity Markov model, which we also use as a means of principled coarse-graining. We translate these dynamics into movement using resistive force theory, capturing the statistical properties of foraging trajectories. Predictive across scales, we leverage the longest-lived eigenvectors of the inferred Markov chain to perform a top-down subdivision of the worm's foraging behavior, revealing both "runs-and-pirouettes" as well as previously uncharacterized finer-scale behaviors. We use our model to investigate the relevance of these fine-scale behaviors for foraging success, recovering a trade-off between local and global search strategies.
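Two of the central objects here, a transition matrix over behavioral microstates and its slowest eigenvectors used for principled coarse-graining, can be sketched in a few lines. The maximum-entropy partition and sequence-length selection described in the paper are omitted, and the microstate sequence below is simulated.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-stochastic transition matrix estimated from a discrete microstate sequence."""
    T = np.zeros((n_states, n_states))
    np.add.at(T, (states[:-1], states[1:]), 1)
    return T / T.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
states = rng.integers(0, 6, size=5000)            # toy microstate sequence
P = transition_matrix(states, 6)

# Slowest dynamical mode: eigenvector associated with the second-largest eigenvalue.
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
slow_mode = vecs[:, order[1]].real

# Coarse-grain microstates into two macrostates by the sign of the slow mode.
macrostate = (slow_mode > 0).astype(int)
print(vals.real[order][:3], macrostate)
```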
Collapse
Affiliation(s)
- Antonio C. Costa
- Department of Physics and Astronomy, Vrije Universiteit Amsterdam, Amsterdam1081HV, The Netherlands
| | | | - David Jordan
- Department of Biochemistry, University of Cambridge, CambridgeCB2 1GA, United Kingdom
| | - Greg J. Stephens
- Department of Physics and Astronomy, Vrije Universiteit Amsterdam, Amsterdam1081HV, The Netherlands
- Biological Physics Theory Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa904-0495, Japan
| |
Collapse
|
30
|
Albrecht B, Schatz A, Frei K, Winter Y. KineWheel-DeepLabCut Automated Paw Annotation Using Alternating Stroboscopic UV and White Light Illumination. eNeuro 2024; 11:ENEURO.0304-23.2024. [PMID: 39209542 PMCID: PMC11363514 DOI: 10.1523/eneuro.0304-23.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2023] [Revised: 05/27/2024] [Accepted: 06/26/2024] [Indexed: 09/04/2024] Open
Abstract
Uncovering the relationships between neural circuits, behavior, and neural dysfunction may require rodent pose tracking. While open-source toolkits such as DeepLabCut have revolutionized markerless pose estimation using deep neural networks, the training process still requires human intervention for annotating key points of interest in video data. To further reduce human labor for neural network training, we developed a method that automatically generates annotated image datasets of rodent paw placement in a laboratory setting. It uses invisible but fluorescent markers that become temporarily visible under UV light. Through stroboscopic alternating illumination, adjacent video frames taken at 720 Hz are either UV or white light illuminated. After color filtering the UV-exposed video frames, the UV markings are identified and the paw locations are deterministically mapped. This paw information is then transferred to automatically annotate paw positions in the next white light-exposed frame that is later used for training the neural network. We demonstrate the effectiveness of our method using a KineWheel-DeepLabCut setup for the markerless tracking of the four paws of a harness-fixed mouse running on top of a transparent wheel with a mirror. Our automated approach, made available open-source, achieves high-quality position annotations and significantly reduces the need for human involvement in the neural network training process, paving the way for more efficient and streamlined rodent pose tracking in neuroscience research.
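The core trick, finding fluorescent markers in the UV-illuminated frames by color filtering and transferring their coordinates as labels for the adjacent white-light frame, can be sketched with OpenCV. The HSV thresholds, file names, and area cutoff below are placeholders, not the authors' values.

```python
import cv2
import numpy as np

def paw_centers(uv_frame, lower=(140, 80, 80), upper=(179, 255, 255), min_area=15):
    """Centroids of fluorescent markers in a UV-illuminated frame (placeholder HSV range)."""
    hsv = cv2.cvtColor(uv_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            m = cv2.moments(c)
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers

# Frames alternate UV / white light at 720 Hz, so UV frame i yields labels for frame i+1.
uv = cv2.imread("frame_0000_uv.png")            # hypothetical file names
annotations = {"frame_0001_white.png": paw_centers(uv)}
print(annotations)
```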
Collapse
Affiliation(s)
| | | | - Katja Frei
- Humboldt Universität, Berlin 10117, Germany
| | | |
Collapse
|
31
|
Aldarondo D, Merel J, Marshall JD, Hasenclever L, Klibaite U, Gellis A, Tassa Y, Wayne G, Botvinick M, Ölveczky BP. A virtual rodent predicts the structure of neural activity across behaviours. Nature 2024; 632:594-602. [PMID: 38862024 PMCID: PMC12080270 DOI: 10.1038/s41586-024-07633-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Accepted: 05/30/2024] [Indexed: 06/13/2024]
Abstract
Animals have exquisite control of their bodies, allowing them to perform a diverse range of behaviours. How such control is implemented by the brain, however, remains unclear. Advancing our understanding requires models that can relate principles of control to the structure of neural activity in behaving animals. Here, to facilitate this, we built a 'virtual rodent', in which an artificial neural network actuates a biomechanically realistic model of the rat [1] in a physics simulator [2]. We used deep reinforcement learning [3-5] to train the virtual agent to imitate the behaviour of freely moving rats, thus allowing us to compare neural activity recorded in real rats to the network activity of a virtual rodent mimicking their behaviour. We found that neural activity in the sensorimotor striatum and motor cortex was better predicted by the virtual rodent's network activity than by any features of the real rat's movements, consistent with both regions implementing inverse dynamics [6]. Furthermore, the network's latent variability predicted the structure of neural variability across behaviours and afforded robustness in a way consistent with the minimal intervention principle of optimal feedback control [7]. These results demonstrate how physical simulation of biomechanically realistic virtual animals can help interpret the structure of neural activity across behaviour and relate it to theoretical principles of motor control.
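The comparison at the heart of the result, whether the virtual agent's network activity predicts recorded neural activity better than movement features do, can be sketched as two cross-validated encoding models. The arrays below are simulated and the ridge regression is a stand-in; the paper's actual encoding-model details are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_timepoints = 2000
network_latents = rng.normal(size=(n_timepoints, 64))   # virtual rodent's hidden units (toy)
movement_feats = rng.normal(size=(n_timepoints, 20))    # kinematic features of the real rat (toy)
neural_activity = network_latents[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n_timepoints)

def predictivity(X, y):
    """Cross-validated R^2 of a ridge encoding model."""
    return cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean()

print("network latents -> neuron:", predictivity(network_latents, neural_activity))
print("movement features -> neuron:", predictivity(movement_feats, neural_activity))
```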
Collapse
Affiliation(s)
- Diego Aldarondo
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA.
- Fauna Robotics, New York, NY, USA.
| | - Josh Merel
- DeepMind, Google, London, UK
- Fauna Robotics, New York, NY, USA
| | - Jesse D Marshall
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Reality Labs, Meta, New York, NY, USA
| | | | - Ugne Klibaite
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
| | - Amanda Gellis
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
| | | | | | - Matthew Botvinick
- DeepMind, Google, London, UK
- Gatsby Computational Neuroscience Unit, University College London, London, UK
| | - Bence P Ölveczky
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA.
| |
Collapse
|
32
|
de Paula TMCG, de Sousa RV, Sarmiento MP, Kramer T, de Souza Sardinha EJ, Sabei L, Machado JS, Vilioti M, Zanella AJ. Deep learning pose detection model for sow locomotion. Sci Rep 2024; 14:16401. [PMID: 39013897 PMCID: PMC11252330 DOI: 10.1038/s41598-024-62151-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2023] [Accepted: 05/14/2024] [Indexed: 07/18/2024] Open
Abstract
Lameness affects animal mobility, causing pain and discomfort. Lameness in early stages often goes undetected due to a lack of observation, precision, and reliability. Automated and non-invasive systems offer precision and detection ease and may improve animal welfare. This study was conducted to create a repository of images and videos of sows with different locomotion scores. Our goal is to develop a computer vision model for automatically identifying specific points on the sow's body. The automatic identification and tracking of specific body areas will allow us to conduct kinematic studies with the aim of facilitating the detection of lameness using deep learning. The video database was collected on a pig farm with a scenario built to allow filming of sows in locomotion with different lameness scores. Two stereo cameras were used to record 2D video images. Thirteen locomotion experts assessed the videos using the Locomotion Score System developed by Zinpro Corporation. From this annotated repository, computational models were trained and tested using the open-source deep learning-based animal pose tracking framework SLEAP (Social LEAP Estimates Animal Poses). The top-performing models were constructed using the LEAP architecture to accurately track 6 (lateral view) and 10 (dorsal view) skeleton keypoints. The architecture achieved average precision values of 0.90 and 0.72, average distances of 6.83 and 11.37 pixels, and similarities of 0.94 and 0.86 for the lateral and dorsal views, respectively. These computational models are proposed as a Precision Livestock Farming tool and method for identifying and estimating postures in pigs automatically and objectively. The 2D video image repository with different pig locomotion scores can be used as a tool for teaching and research. Based on our skeleton keypoint classification results, an automatic system could be developed. This could contribute to the objective assessment of locomotion scores in sows, improving their welfare.
Collapse
Affiliation(s)
- Tauana Maria Carlos Guimarães de Paula
- Department of Preventive Veterinary Medicine and Animal Health, School of Veterinary Medicine and Animal Science, Center for Comparative Studies in Sustainability, Health and Welfare, University of São Paulo, Pirassununga, SP, 13635-900, Brazil.
| | - Rafael Vieira de Sousa
- Robotics and Automation Group for Biosystems Engineering, Department of Biosystems Engineering, Faculty of Animal Science and Food Engineering (FZEA), University of São Paulo (USP), Pirassununga, SP, 13635-900, Brazil
| | - Marisol Parada Sarmiento
- Department of Preventive Veterinary Medicine and Animal Health, School of Veterinary Medicine and Animal Science, Center for Comparative Studies in Sustainability, Health and Welfare, University of São Paulo, Pirassununga, SP, 13635-900, Brazil
| | - Ton Kramer
- Zinpro Corporation, Piracicaba, SP, Brazil
| | - Edson José de Souza Sardinha
- Robotics and Automation Group for Biosystems Engineering, Department of Biosystems Engineering, Faculty of Animal Science and Food Engineering (FZEA), University of São Paulo (USP), Pirassununga, SP, 13635-900, Brazil
| | - Leandro Sabei
- Department of Preventive Veterinary Medicine and Animal Health, School of Veterinary Medicine and Animal Science, Center for Comparative Studies in Sustainability, Health and Welfare, University of São Paulo, Pirassununga, SP, 13635-900, Brazil
| | - Júlia Silvestrini Machado
- Department of Preventive Veterinary Medicine and Animal Health, School of Veterinary Medicine and Animal Science, Center for Comparative Studies in Sustainability, Health and Welfare, University of São Paulo, Pirassununga, SP, 13635-900, Brazil
| | - Mirela Vilioti
- Department of Preventive Veterinary Medicine and Animal Health, School of Veterinary Medicine and Animal Science, Center for Comparative Studies in Sustainability, Health and Welfare, University of São Paulo, Pirassununga, SP, 13635-900, Brazil
| | - Adroaldo José Zanella
- Department of Preventive Veterinary Medicine and Animal Health, School of Veterinary Medicine and Animal Science, Center for Comparative Studies in Sustainability, Health and Welfare, University of São Paulo, Pirassununga, SP, 13635-900, Brazil.
| |
Collapse
|
33
|
Ibáñez Alcalá RJ, Beck DW, Salcido AA, Davila LD, Giri A, Heaton CN, Villarreal Rodriguez K, Rakocevic LI, Hossain SB, Reyes NF, Batson SA, Macias AY, Drammis SM, Negishi K, Zhang Q, Umashankar Beck S, Vara P, Joshi A, Franco AJ, Hernandez Carbajal BJ, Ordonez MM, Ramirez FY, Lopez JD, Lozano N, Ramirez A, Legaspy L, Cruz PL, Armenta AA, Viel SN, Aguirre JI, Quintanar O, Medina F, Ordonez PM, Munoz AE, Martínez Gaudier GE, Naime GM, Powers RE, O'Dell LE, Moschak TM, Goosens KA, Friedman A. RECORD, a high-throughput, customizable system that unveils behavioral strategies leveraged by rodents during foraging-like decision-making. Commun Biol 2024; 7:822. [PMID: 38971889 PMCID: PMC11227549 DOI: 10.1038/s42003-024-06489-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2023] [Accepted: 06/21/2024] [Indexed: 07/08/2024] Open
Abstract
Translational studies benefit from experimental designs where laboratory organisms use human-relevant behaviors. One such behavior is decision-making; however, studying complex decision-making in rodents is labor-intensive and typically restricted to two levels of cost/reward. We design a fully automated, inexpensive, high-throughput framework to study decision-making across multiple levels of rewards and costs: the REward-COst in Rodent Decision-making (RECORD) system. RECORD integrates three components: 1) 3D-printed arenas, 2) custom electronic hardware, and 3) software. We validated four behavioral protocols without employing any food or water restriction, highlighting the versatility of our system. RECORD data exposes heterogeneity in decision-making both within and across individuals that is quantifiably constrained. Using oxycodone self-administration and alcohol consumption as test cases, we reveal how analytic approaches that incorporate behavioral heterogeneity are sensitive to detecting perturbations in decision-making. RECORD is a powerful approach to studying decision-making in rodents, with features that facilitate translational studies of decision-making in psychiatric disorders.
Collapse
Affiliation(s)
| | - Dirk W Beck
- Computational Science Program, University of Texas at El Paso, El Paso, TX, USA
| | - Alexis A Salcido
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Luis D Davila
- Computational Science Program, University of Texas at El Paso, El Paso, TX, USA
| | - Atanu Giri
- Computational Science Program, University of Texas at El Paso, El Paso, TX, USA
| | - Cory N Heaton
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | | | - Lara I Rakocevic
- Computational Science Program, University of Texas at El Paso, El Paso, TX, USA
| | - Safa B Hossain
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Neftali F Reyes
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Serina A Batson
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Andrea Y Macias
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Sabrina M Drammis
- Artificial Intelligence Laboratory, Department of Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | | | - Qingyang Zhang
- Department of Biomedical Informatics, Harvard Medical School, Cambridge, MA, USA
| | | | - Paulina Vara
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Arnav Joshi
- Computational Science Program, University of Texas at El Paso, El Paso, TX, USA
| | - Austin J Franco
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | | | - Miguel M Ordonez
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Felix Y Ramirez
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Jonathan D Lopez
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Nayeli Lozano
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Abigail Ramirez
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Linnete Legaspy
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Paulina L Cruz
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Abril A Armenta
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Stephanie N Viel
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Jessica I Aguirre
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Odalys Quintanar
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Fernanda Medina
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Pablo M Ordonez
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Alfonzo E Munoz
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | | | - Gabriela M Naime
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Rosalie E Powers
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Laura E O'Dell
- Department of Psychology, University of Texas at El Paso, El Paso, TX, USA
| | - Travis M Moschak
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA
| | - Ki A Goosens
- Department of Psychiatry, Center for Translational Medicine and Pharmacology, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
| | - Alexander Friedman
- Department of Biological Sciences, University of Texas at El Paso, El Paso, TX, USA.
- Computational Science Program, University of Texas at El Paso, El Paso, TX, USA.
| |
Collapse
|
34
|
Weinreb C, Pearl JE, Lin S, Osman MAM, Zhang L, Annapragada S, Conlin E, Hoffmann R, Makowska S, Gillis WF, Jay M, Ye S, Mathis A, Mathis MW, Pereira T, Linderman SW, Datta SR. Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics. Nat Methods 2024; 21:1329-1339. [PMID: 38997595 PMCID: PMC11245396 DOI: 10.1038/s41592-024-02318-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Accepted: 05/22/2024] [Indexed: 07/14/2024]
Abstract
Keypoint tracking algorithms can flexibly quantify animal movement from videos obtained in a wide variety of settings. However, it remains unclear how to parse continuous keypoint data into discrete actions. This challenge is particularly acute because keypoint data are susceptible to high-frequency jitter that clustering algorithms can mistake for transitions between actions. Here we present keypoint-MoSeq, a machine learning-based platform for identifying behavioral modules ('syllables') from keypoint data without human supervision. Keypoint-MoSeq uses a generative model to distinguish keypoint noise from behavior, enabling it to identify syllables whose boundaries correspond to natural sub-second discontinuities in pose dynamics. Keypoint-MoSeq outperforms commonly used alternative clustering methods at identifying these transitions, at capturing correlations between neural activity and behavior and at classifying either solitary or social behaviors in accordance with human annotations. Keypoint-MoSeq also works in multiple species and generalizes beyond the syllable timescale, identifying fast sniff-aligned movements in mice and a spectrum of oscillatory behaviors in fruit flies. Keypoint-MoSeq, therefore, renders accessible the modular structure of behavior through standard video recordings.
Collapse
Affiliation(s)
- Caleb Weinreb
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | - Jonah E Pearl
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | - Sherry Lin
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | | | - Libby Zhang
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
| | | | - Eli Conlin
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | - Red Hoffmann
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | - Sofia Makowska
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | | | - Maya Jay
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | - Shaokai Ye
- Brain Mind and Neuro-X Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Alexander Mathis
- Brain Mind and Neuro-X Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Mackenzie W Mathis
- Brain Mind and Neuro-X Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Talmo Pereira
- Salk Institute for Biological Studies, La Jolla, CA, USA
| | - Scott W Linderman
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA.
- Department of Statistics, Stanford University, Stanford, CA, USA.
| | | |
Collapse
|
35
|
Biderman D, Whiteway MR, Hurwitz C, Greenspan N, Lee RS, Vishnubhotla A, Warren R, Pedraja F, Noone D, Schartner MM, Huntenburg JM, Khanal A, Meijer GT, Noel JP, Pan-Vazquez A, Socha KZ, Urai AE, Cunningham JP, Sawtell NB, Paninski L. Lightning Pose: improved animal pose estimation via semi-supervised learning, Bayesian ensembling and cloud-native open-source tools. Nat Methods 2024; 21:1316-1328. [PMID: 38918605 DOI: 10.1038/s41592-024-02319-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Accepted: 05/17/2024] [Indexed: 06/27/2024]
Abstract
Contemporary pose estimation methods enable precise measurements of behavior via supervised deep learning with hand-labeled video frames. Although effective in many cases, the supervised approach requires extensive labeling and often produces outputs that are unreliable for downstream analyses. Here, we introduce 'Lightning Pose', an efficient pose estimation package with three algorithmic contributions. First, in addition to training on a few labeled video frames, we use many unlabeled videos and penalize the network whenever its predictions violate motion continuity, multiple-view geometry and posture plausibility (semi-supervised learning). Second, we introduce a network architecture that resolves occlusions by predicting pose on any given frame using surrounding unlabeled frames. Third, we refine the pose predictions post hoc by combining ensembling and Kalman smoothing. Together, these components render pose trajectories more accurate and scientifically usable. We released a cloud application that allows users to label data, train networks and process new videos directly from the browser.
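The post hoc refinement step, averaging an ensemble of pose predictions and then Kalman smoothing the result, can be sketched with a constant-velocity model for one keypoint coordinate. The model choice, noise parameters, and toy data below are assumptions for illustration, not the package's implementation.

```python
import numpy as np

def kalman_smooth_1d(z, dt=1.0, q=1e-2, r=1.0):
    """Constant-velocity Kalman filter + RTS smoother for one keypoint coordinate."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                    # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                               # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])   # process noise
    R = np.array([[r]])                                      # measurement noise
    n = len(z)
    x = np.zeros((n, 2)); P = np.zeros((n, 2, 2))            # filtered estimates
    xp = np.zeros((n, 2)); Pp = np.zeros((n, 2, 2))          # one-step predictions
    x_prev, P_prev = np.array([z[0], 0.0]), np.eye(2)
    for t in range(n):
        xp[t] = F @ x_prev
        Pp[t] = F @ P_prev @ F.T + Q
        y = z[t] - H @ xp[t]                                 # innovation
        S = H @ Pp[t] @ H.T + R
        K = Pp[t] @ H.T @ np.linalg.inv(S)                   # Kalman gain
        x[t] = xp[t] + (K @ y).ravel()
        P[t] = (np.eye(2) - K @ H) @ Pp[t]
        x_prev, P_prev = x[t], P[t]
    xs, Ps = x.copy(), P.copy()                              # Rauch-Tung-Striebel backward pass
    for t in range(n - 2, -1, -1):
        C = P[t] @ F.T @ np.linalg.inv(Pp[t + 1])
        xs[t] = x[t] + C @ (xs[t + 1] - xp[t + 1])
        Ps[t] = P[t] + C @ (Ps[t + 1] - Pp[t + 1]) @ C.T
    return xs[:, 0]

# Ensemble several networks' x-coordinate predictions, then smooth their mean (toy data).
rng = np.random.default_rng(0)
frames = np.linspace(0, 50, 200)
preds = frames + rng.normal(scale=2.0, size=(5, 200))        # 5 ensemble members x 200 frames
smoothed_x = kalman_smooth_1d(preds.mean(axis=0))
```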
Collapse
Affiliation(s)
| | | | | | | | | | | | | | | | | | | | | | - Anup Khanal
- University of California, Los Angeles, Los Angeles, CA, USA
| | | | | | | | | | | | | | | | | |
Collapse
|
36
|
Goodwin NL, Choong JJ, Hwang S, Pitts K, Bloom L, Islam A, Zhang YY, Szelenyi ER, Tong X, Newman EL, Miczek K, Wright HR, McLaughlin RJ, Norville ZC, Eshel N, Heshmati M, Nilsson SRO, Golden SA. Simple Behavioral Analysis (SimBA) as a platform for explainable machine learning in behavioral neuroscience. Nat Neurosci 2024; 27:1411-1424. [PMID: 38778146 PMCID: PMC11268425 DOI: 10.1038/s41593-024-01649-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2023] [Accepted: 04/12/2024] [Indexed: 05/25/2024]
Abstract
The study of complex behaviors is often challenging when using manual annotation due to the absence of quantifiable behavioral definitions and the subjective nature of behavioral annotation. Integration of supervised machine learning approaches mitigates some of these issues through the inclusion of accessible and explainable model interpretation. To decrease barriers to access, and with an emphasis on accessible model explainability, we developed the open-source Simple Behavioral Analysis (SimBA) platform for behavioral neuroscientists. SimBA introduces several machine learning interpretability tools, including SHapley Additive exPlanation (SHAP) scores, that aid in creating explainable and transparent behavioral classifiers. Here we show how the addition of explainability metrics allows for quantifiable comparisons of aggressive social behavior across research groups and species, reconceptualizing behavior as a sharable reagent and providing an open-source framework. We provide an open-source, graphical user interface (GUI)-driven, well-documented package to facilitate the movement toward improved automation and sharing of behavioral classification tools across laboratories.
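A minimal illustration of the kind of explainability described, training a behavior classifier on pose-derived features and inspecting per-feature SHAP attributions, is sketched below. It assumes the shap package and simulated features; it is not SimBA's own code.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy frame-level features (e.g., inter-animal distance, nose-to-tail angle, speed)
# with a binary "attack" annotation; all values are simulated.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

sv = shap.TreeExplainer(clf).shap_values(X)
if isinstance(sv, list):        # older SHAP versions: one array per class
    sv = sv[1]
elif sv.ndim == 3:              # newer versions: (samples, features, classes)
    sv = sv[:, :, 1]
print("mean |SHAP| per feature:", np.abs(sv).mean(axis=0))
```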
Collapse
Affiliation(s)
- Nastacia L Goodwin
- Department of Biological Structure, University of Washington, Seattle, WA, USA
- Graduate Program in Neuroscience, University of Washington, Seattle, WA, USA
- Center of Excellence in Neurobiology of Addiction, Pain and Emotion (NAPE), University of Washington, Seattle, WA, USA
| | - Jia J Choong
- Department of Biological Structure, University of Washington, Seattle, WA, USA
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, USA
| | - Sophia Hwang
- Department of Biological Structure, University of Washington, Seattle, WA, USA
| | - Kayla Pitts
- Department of Biological Structure, University of Washington, Seattle, WA, USA
| | - Liana Bloom
- Department of Biological Structure, University of Washington, Seattle, WA, USA
| | - Aasiya Islam
- Department of Biological Structure, University of Washington, Seattle, WA, USA
| | - Yizhe Y Zhang
- Department of Biological Structure, University of Washington, Seattle, WA, USA
- Graduate Program in Neuroscience, University of Washington, Seattle, WA, USA
- Center of Excellence in Neurobiology of Addiction, Pain and Emotion (NAPE), University of Washington, Seattle, WA, USA
| | - Eric R Szelenyi
- Department of Biological Structure, University of Washington, Seattle, WA, USA
- Center of Excellence in Neurobiology of Addiction, Pain and Emotion (NAPE), University of Washington, Seattle, WA, USA
| | - Xiaoyu Tong
- New York University Neuroscience Institute, New York, NY, USA
| | - Emily L Newman
- Department of Psychiatry, Harvard Medical School McLean Hospital, Belmont, MA, USA
| | - Klaus Miczek
- Department of Psychology, Tufts University, Medford, MA, USA
| | - Hayden R Wright
- Department of Integrative Physiology and Neuroscience, Washington State University, Pullman, WA, USA
- Graduate Program in Neuroscience, Washington State University, Pullman, WA, USA
| | - Ryan J McLaughlin
- Department of Integrative Physiology and Neuroscience, Washington State University, Pullman, WA, USA
- Graduate Program in Neuroscience, Washington State University, Pullman, WA, USA
| | | | - Neir Eshel
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
| | - Mitra Heshmati
- Department of Biological Structure, University of Washington, Seattle, WA, USA
- Graduate Program in Neuroscience, University of Washington, Seattle, WA, USA
- Center of Excellence in Neurobiology of Addiction, Pain and Emotion (NAPE), University of Washington, Seattle, WA, USA
- Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, WA, USA
| | - Simon R O Nilsson
- Department of Biological Structure, University of Washington, Seattle, WA, USA.
| | - Sam A Golden
- Department of Biological Structure, University of Washington, Seattle, WA, USA.
- Graduate Program in Neuroscience, University of Washington, Seattle, WA, USA.
- Center of Excellence in Neurobiology of Addiction, Pain and Emotion (NAPE), University of Washington, Seattle, WA, USA.
| |
Collapse
|
37
|
Farhat N, van der Linden D, Zamansky A, Assif T. Automation in canine science: enhancing human capabilities and overcoming adoption barriers. Front Vet Sci 2024; 11:1394620. [PMID: 38948674 PMCID: PMC11212470 DOI: 10.3389/fvets.2024.1394620] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2024] [Accepted: 04/29/2024] [Indexed: 07/02/2024] Open
Abstract
The emerging field of canine science has been slow to adopt automated approaches to data analysis. However, with the dramatic increase in the volume and complexity of collected behavioral data, this is now beginning to change. This paper aims to systematize the field of automation in canine science. We examine current automation processes and pipelines through a literature review of state-of-the-art studies applying automation in this field. In addition, via an empirical study with researchers in animal behavior, we explore their perceptions of and attitudes toward automated approaches, to better understand the barriers to wider adoption of automation. The insights derived from this research could facilitate more effective and widespread use of automation within canine science, addressing current challenges and enhancing the analysis of increasingly complex and voluminous behavioral data. This could potentially revolutionize the field, allowing for more objective and quantifiable assessments of dog behavior, which would ultimately contribute to our understanding of dog-human interactions and canine welfare.
Collapse
Affiliation(s)
- Nareed Farhat
- Department of Information Systems, University of Haifa, Haifa, Israel
| | - Dirk van der Linden
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, United Kingdom
| | - Anna Zamansky
- Department of Information Systems, University of Haifa, Haifa, Israel
| | - Tal Assif
- Department of Information Systems, University of Haifa, Haifa, Israel
- Lod Municipal Shelter, Lod, Israel
| |
Collapse
|
38
|
Fang C, Wu Z, Zheng H, Yang J, Ma C, Zhang T. MCP: Multi-Chicken Pose Estimation Based on Transfer Learning. Animals (Basel) 2024; 14:1774. [PMID: 38929393 PMCID: PMC11200378 DOI: 10.3390/ani14121774] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2024] [Revised: 06/07/2024] [Accepted: 06/10/2024] [Indexed: 06/28/2024] Open
Abstract
Poultry managers can better understand the state of their birds through poultry behavior analysis, and accurate estimation of poultry posture is a key step in such analysis and the focus of this research. This study analyzes a top-down pose estimation method for multiple chickens, and we propose "multi-chicken pose" (MCP), a deep-learning pose estimation system for multiple chickens. First, a chicken detector locates each chicken in the image; then a pose estimation network based on transfer learning estimates the pose of each detected chicken. On this basis, the pixel error (PE), root mean square error (RMSE), and image quantity distribution of key points are analyzed according to an improved chicken keypoint similarity (CKS). The experimental results show that the algorithm achieves a mean average precision (mAP) of 0.652, a mean average recall (mAR) of 0.742, a percentage of correct keypoints (PCK) of 0.789, and an RMSE of 17.30 pixels. To the best of our knowledge, this is the first time that transfer learning has been used for pose estimation of multiple chickens. The method can provide a new path for future poultry behavior analysis.
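For readers unfamiliar with keypoint-similarity metrics, the sketch below shows a generic object-keypoint-similarity (OKS) style computation of the kind CKS adapts; the per-keypoint tolerance constants and coordinates are illustrative, and the paper's exact CKS definition may differ.

```python
# Illustrative OKS-style keypoint similarity; not the paper's exact CKS formula.
import numpy as np

def keypoint_similarity(pred, gt, visible, scale, kappa):
    """pred, gt: (K, 2) arrays of (x, y) key-points; visible: (K,) bool;
    scale: object scale (e.g., square root of the bounding-box area);
    kappa: (K,) per-keypoint tolerance constants."""
    d2 = np.sum((pred - gt) ** 2, axis=1)                        # squared pixel distances
    sim = np.exp(-d2 / (2.0 * scale**2 * kappa**2 + 1e-9))       # per-keypoint similarity in (0, 1]
    return float(sim[visible].mean()) if visible.any() else 0.0

pred = np.array([[10.0, 12.0], [30.0, 31.0]])
gt = np.array([[11.0, 12.5], [29.0, 30.0]])
score = keypoint_similarity(pred, gt, np.array([True, True]),
                            scale=50.0, kappa=np.array([0.05, 0.05]))
```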
Collapse
Affiliation(s)
- Cheng Fang
- College of Engineering, South China Agricultural University, 483 Wushan Road, Guangzhou 510642, China; (C.F.)
| | - Zhenlong Wu
- College of Engineering, South China Agricultural University, 483 Wushan Road, Guangzhou 510642, China; (C.F.)
| | - Haikun Zheng
- College of Engineering, South China Agricultural University, 483 Wushan Road, Guangzhou 510642, China; (C.F.)
| | - Jikang Yang
- College of Engineering, South China Agricultural University, 483 Wushan Road, Guangzhou 510642, China; (C.F.)
| | - Chuang Ma
- College of Engineering, South China Agricultural University, 483 Wushan Road, Guangzhou 510642, China; (C.F.)
| | - Tiemin Zhang
- College of Engineering, South China Agricultural University, 483 Wushan Road, Guangzhou 510642, China; (C.F.)
- National Engineering Research Center for Breeding Swine Industry, Guangzhou 510642, China
- Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou 510642, China
| |
Collapse
|
39
|
Burchardt LS, van de Sande Y, Kehy M, Gamba M, Ravignani A, Pouw W. A toolkit for the dynamic study of air sacs in siamang and other elastic circular structures. PLoS Comput Biol 2024; 20:e1012222. [PMID: 38913743 PMCID: PMC11226135 DOI: 10.1371/journal.pcbi.1012222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2023] [Revised: 07/05/2024] [Accepted: 06/03/2024] [Indexed: 06/26/2024] Open
Abstract
Biological structures are defined by rigid elements, such as bones, and elastic elements, like muscles and membranes. Advances in computer vision have enabled automatic tracking of moving animal skeletal poses. Such developments provide insights into the complex, time-varying dynamics of biological motion. Conversely, the elastic soft tissues of organisms, like the nose of elephant seals or the buccal sac of frogs, are poorly studied, and no computer vision methods have been proposed for them. This leaves major gaps in different areas of biology. In primatology, most critically, the function of air sacs is widely debated; many open questions about the role of air sacs in the evolution of animal communication, including human speech, remain unanswered. To support the dynamic study of soft-tissue structures, we present a toolkit for the automated tracking of semi-circular elastic structures in biological video data. The toolkit contains unsupervised computer vision tools (using the Hough transform) and supervised deep learning methodology (by adapting DeepLabCut) to track inflation of laryngeal air sacs or other biological spherical objects (e.g., gular cavities). Confirming the value of elastic kinematic analysis, we show that air sac inflation correlates with acoustic markers that likely inform about body size. Finally, we present a pre-processed audiovisual-kinematic dataset of more than 7 hours of closeup audiovisual recordings of siamang (Symphalangus syndactylus) singing. This toolkit (https://github.com/WimPouw/AirSacTracker) aims to revitalize the study of non-skeletal morphological structures across multiple species.
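As a rough sketch of the unsupervised side of such a toolkit, the snippet below detects a circular outline per video frame with OpenCV's Hough transform. The video filename, blur size, and radius/threshold parameters are placeholders rather than the toolkit's actual settings; see the repository linked above for the real implementation.

```python
# Sketch only: per-frame circular-structure detection with the Hough transform.
import cv2
import numpy as np

cap = cv2.VideoCapture("siamang_closeup.mp4")      # hypothetical input video
radii = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=200,
                               param1=100, param2=40, minRadius=20, maxRadius=200)
    # keep the largest detected circle as this frame's air-sac estimate
    radii.append(float(circles[0, :, 2].max()) if circles is not None else np.nan)
cap.release()
# "radii" is a per-frame inflation time series that can be aligned with the audio track
```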
Collapse
Affiliation(s)
- Lara S. Burchardt
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Leibniz-Zentrum Allgemeine Sprachwissenschaft, Berlin, Germany
| | - Yana van de Sande
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
| | - Mounia Kehy
- Equipe de Neuro-Ethologie Sensorielle, Université Jean Monnet, France
| | - Marco Gamba
- Department of Life Sciences and Systems Biology, University of Turin, Turin, Italy
| | - Andrea Ravignani
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus, Denmark
- Department of Human Neurosciences, Sapienza University of Rome, Rome, Italy
| | - Wim Pouw
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
| |
Collapse
|
40
|
Oesch LT, Ryan MB, Churchland AK. From innate to instructed: A new look at perceptual decision-making. Curr Opin Neurobiol 2024; 86:102871. [PMID: 38569230 PMCID: PMC11162954 DOI: 10.1016/j.conb.2024.102871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2023] [Revised: 03/07/2024] [Accepted: 03/08/2024] [Indexed: 04/05/2024]
Abstract
Understanding how subjects perceive sensory stimuli in their environment and use this information to guide appropriate actions is a major challenge in neuroscience. To study perceptual decision-making in animals, researchers use tasks that either probe spontaneous responses to stimuli (often described as "naturalistic") or train animals to associate stimuli with experimenter-defined responses. Spontaneous decisions rely on animals' pre-existing knowledge, while trained tasks offer greater versatility, albeit often at the cost of extensive training. Here, we review emerging approaches to investigate perceptual decision-making using both spontaneous and trained behaviors, highlighting their strengths and limitations. Additionally, we propose how trained decision-making tasks could be improved to achieve faster learning and a more generalizable understanding of task rules.
Collapse
Affiliation(s)
- Lukas T Oesch
- Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, United States
| | - Michael B Ryan
- Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, United States. https://twitter.com/NeuroMikeRyan
| | - Anne K Churchland
- Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, United States.
| |
Collapse
|
41
|
Li H, Deng Z, Yu X, Lin J, Xie Y, Liao W, Ma Y, Zheng Q. Combining dual-view fusion pose estimation and multi-type motion feature extraction to assess arthritis pain in mice. Biomed Signal Process Control 2024; 92:106080. [DOI: 10.1016/j.bspc.2024.106080] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/07/2024]
|
42
|
Yu JH, Napoli JL, Lovett-Barron M. Understanding collective behavior through neurobiology. Curr Opin Neurobiol 2024; 86:102866. [PMID: 38852986 PMCID: PMC11439442 DOI: 10.1016/j.conb.2024.102866] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2023] [Revised: 02/16/2024] [Accepted: 03/07/2024] [Indexed: 06/11/2024]
Abstract
A variety of organisms exhibit collective movement, including schooling fish and flocking birds, where coordinated behavior emerges from the interactions between group members. Despite the prevalence of collective movement in nature, little is known about the neural mechanisms producing each individual's behavior within the group. Here we discuss how a neurobiological approach can enrich our understanding of collective behavior by determining the mechanisms by which individuals interact. We provide examples of sensory systems for social communication during collective movement, highlight recent discoveries about neural systems for detecting the position and actions of social partners, and discuss opportunities for future research. Understanding the neurobiology of collective behavior can provide insight into how nervous systems function in a dynamic social world.
Collapse
Affiliation(s)
- Jo-Hsien Yu
- Department of Neurobiology, School of Biological Sciences, University of California, San Diego, La Jolla, CA, 92093, USA. https://twitter.com/anitajhyu
| | - Julia L Napoli
- Department of Neurobiology, School of Biological Sciences, University of California, San Diego, La Jolla, CA, 92093, USA. https://twitter.com/juliadoingneuro
| | - Matthew Lovett-Barron
- Department of Neurobiology, School of Biological Sciences, University of California, San Diego, La Jolla, CA, 92093, USA.
| |
Collapse
|
43
|
Lv S, Wang J, Chen X, Liao X. STPoseNet: A real-time spatiotemporal network model for robust mouse pose estimation. iScience 2024; 27:109772. [PMID: 38711440 PMCID: PMC11070338 DOI: 10.1016/j.isci.2024.109772] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2023] [Revised: 03/15/2024] [Accepted: 04/15/2024] [Indexed: 05/08/2024] Open
Abstract
Animal behavior analysis plays a crucial role in contemporary neuroscience research. However, the performance of frame-by-frame approaches may degrade in scenarios with occlusion or motion blur. In this study, we propose a spatiotemporal network model based on YOLOv8 to enhance the accuracy of key-point detection in videos of mouse behavioral experiments. The model integrates a time-domain tracking strategy comprising two components: the first uses key-point detection results from the previous frame to locate potential target positions in the subsequent frame; the second employs Kalman filtering to analyze key-point changes prior to detection, allowing missing key-points to be estimated. In a comparison of pose estimation results among our approach, YOLOv8, DeepLabCut, and SLEAP on videos from three mouse behavioral experiments, our approach demonstrated significantly superior performance. This suggests that our method offers a new and effective means of accurately tracking and estimating mouse pose through spatiotemporal processing.
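A minimal sketch of the second component's idea, assuming a constant-velocity motion model: predict every frame with a Kalman filter, update only when the detector returns a key-point, and fall back on the prediction when it does not. The noise values are illustrative, not the model's.

```python
# Sketch, not the STPoseNet code: carrying one key-point through missed detections
# with a constant-velocity Kalman filter.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)  # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)                                 # observe (x, y)
Q = np.eye(4) * 1e-2   # process noise (illustrative)
R = np.eye(2) * 1.0    # measurement noise (illustrative)

x = np.zeros(4)        # state: x, y, vx, vy
P = np.eye(4) * 10.0

def kf_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:                       # detector produced a key-point this frame
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x, P                             # x[:2] is the estimated key-point

detections = [np.array([10.0, 5.0]), None, None, np.array([13.0, 6.5])]  # None = missed frame
track = []
for z in detections:
    x, P = kf_step(x, P, z)
    track.append(x[:2].copy())              # occluded frames get the predicted position
```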
Collapse
Affiliation(s)
- Songyan Lv
- Guangxi Key Laboratory of Special Biomedicine & Advanced Institute for Brain and Intelligence, School of Medicine, Guangxi University, Nanning 530004, China
| | - Jincheng Wang
- Guangxi Key Laboratory of Special Biomedicine & Advanced Institute for Brain and Intelligence, School of Medicine, Guangxi University, Nanning 530004, China
| | - Xiaowei Chen
- Guangxi Key Laboratory of Special Biomedicine & Advanced Institute for Brain and Intelligence, School of Medicine, Guangxi University, Nanning 530004, China
| | - Xiang Liao
- Center for Neurointelligence, School of Medicine, Chongqing University, Chongqing 400030, China
| |
Collapse
|
44
|
Luo W, Zhang G, Shao Q, Zhao Y, Wang D, Zhang X, Liu K, Li X, Liu J, Wang P, Li L, Wang G, Wang F, Yu Z. An efficient visual servo tracker for herd monitoring by UAV. Sci Rep 2024; 14:10463. [PMID: 38714785 PMCID: PMC11582714 DOI: 10.1038/s41598-024-60445-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2023] [Accepted: 04/23/2024] [Indexed: 05/10/2024] Open
Abstract
UAV-based livestock monitoring in the high-altitude (above 4,500 m on average), cold (annual average −4 °C) regions of the Qinghai-Tibet Plateau is a challenging and meaningful task. The purpose of artificial intelligence (AI) is to execute automated tasks and solve practical problems by combining software with a hardware carrier to create integrated, advanced devices; only then can the full value of AI be realized. In this paper, a real-time tracking system with dynamic target tracking ability is proposed. It is built on a tracking-by-detection architecture, using YOLOv7 for target detection and Deep SORT for tracking. To address the problems encountered when tracking in complex and dense scenes, our work (1) uses optical flow to compensate the Kalman filter, resolving the mismatch between the bounding box predicted by the Kalman filter (KF) and the input when detection in the current frame is difficult, thereby improving prediction accuracy; (2) applies a low-confidence trajectory filtering method to reduce false-positive trajectories generated by Deep SORT, mitigating the impact of unreliable detections on target tracking; and (3) designs a visual servo controller for the unmanned aerial vehicle (UAV) to reduce the impact of rapid movement on tracking and keep the target within the camera's field of view, enabling automatic tracking. Finally, the system was tested on Tibetan yaks on the Qinghai-Tibet Plateau, and the results show that it achieves real-time multi-target tracking and effective visual servoing in complex and dense scenes.
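The low-confidence trajectory filtering step can be pictured with the sketch below, in which a new track is confirmed only once its supporting detections are confident enough on average. The `Track` class, thresholds, and hit counts are illustrative assumptions, not the paper's implementation.

```python
# Illustrative low-confidence trajectory filter for a tracking-by-detection pipeline.
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: int
    confidences: list = field(default_factory=list)
    confirmed: bool = False

def update_track(track: Track, det_conf: float,
                 min_hits: int = 3, min_mean_conf: float = 0.5) -> Track:
    """Append the latest detection confidence; promote the track only when it has
    accumulated enough sufficiently confident detections, otherwise it remains a
    candidate that can be dropped as a likely false positive."""
    track.confidences.append(det_conf)
    mean_conf = sum(track.confidences) / len(track.confidences)
    if not track.confirmed and len(track.confidences) >= min_hits and mean_conf >= min_mean_conf:
        track.confirmed = True
    return track
```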
Collapse
Affiliation(s)
- Wei Luo
- North China Institute of Aerospace Engineering, Langfang, 065000, China
- Key Laboratory of Land Surface Pattern and Simulation, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, 100101, China
- Aerospace Remote Sensing Information Processing and Application Collaborative Innovation Center of Hebei Province, Langfang, 065000, China
- National Joint Engineering Research Center of Space Remote Sensing Information Application Technology, Langfang, 065000, China
| | - Guoqing Zhang
- North China Institute of Aerospace Engineering, Langfang, 065000, China
| | - Quanqin Shao
- Key Laboratory of Land Surface Pattern and Simulation, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, 100101, China
- University of Chinese Academy of Sciences, Beijing, 101407, China
| | - Yongxiang Zhao
- North China Institute of Aerospace Engineering, Langfang, 065000, China
| | - Dongliang Wang
- Key Laboratory of Land Surface Pattern and Simulation, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, 100101, China.
| | - Xiongyi Zhang
- Key Laboratory of Land Surface Pattern and Simulation, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, 100101, China
| | - Ke Liu
- North China Institute of Aerospace Engineering, Langfang, 065000, China
| | - Xiaoliang Li
- North China Institute of Aerospace Engineering, Langfang, 065000, China
| | - Jiandong Liu
- North China Institute of Aerospace Engineering, Langfang, 065000, China
| | - Penggang Wang
- North China Institute of Aerospace Engineering, Langfang, 065000, China
| | - Lin Li
- North China Institute of Aerospace Engineering, Langfang, 065000, China
| | - Guanwu Wang
- North China Institute of Aerospace Engineering, Langfang, 065000, China
| | - Fulong Wang
- North China Institute of Aerospace Engineering, Langfang, 065000, China
| | - Zhongde Yu
- North China Institute of Aerospace Engineering, Langfang, 065000, China
| |
Collapse
|
45
|
Kastner DB, Williams G, Holobetz C, Romano JP, Dayan P. The choice-wide behavioral association study: data-driven identification of interpretable behavioral components. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.02.26.582115. [PMID: 38464037 PMCID: PMC10925091 DOI: 10.1101/2024.02.26.582115] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/12/2024]
Abstract
Behavior contains rich structure across many timescales, but there is a dearth of methods to identify relevant components, especially over the longer periods required for learning and decision-making. Inspired by the goals and techniques of genome-wide association studies, we present a data-driven method, the choice-wide behavioral association study (CBAS), that systematically identifies such behavioral features. CBAS uses a powerful, resampling-based method of multiple-comparisons correction to identify sequences of actions or choices that either differ significantly between groups or correlate significantly with a covariate of interest. We apply CBAS to different tasks and species (flies, rats, and humans) and find, in all instances, that it provides interpretable information about each behavioral task.
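As one concrete member of the resampling family CBAS draws on, the sketch below runs a max-statistic permutation test across many candidate action sequences and returns family-wise-error-corrected p-values. The count matrix, group labels, and test statistic are illustrative, not the paper's exact procedure.

```python
# Illustrative max-statistic permutation test over many behavioral features.
import numpy as np

def max_t_permutation_test(counts, groups, n_perm=10_000, seed=0):
    """counts: (animals, features) usage of each candidate action sequence;
    groups: (animals,) array of 0/1 labels. Returns FWER-corrected p-values per feature."""
    rng = np.random.default_rng(seed)

    def t_stats(labels):
        a, b = counts[labels == 0], counts[labels == 1]
        return np.abs(a.mean(0) - b.mean(0)) / np.sqrt(
            a.var(0) / len(a) + b.var(0) / len(b) + 1e-12)

    observed = t_stats(groups)
    # null distribution of the maximum statistic across all features
    null_max = np.array([t_stats(rng.permutation(groups)).max() for _ in range(n_perm)])
    return (null_max[None, :] >= observed[:, None]).mean(axis=1)   # one p-value per feature

rng = np.random.default_rng(1)
counts = rng.poisson(3.0, size=(20, 50)).astype(float)   # 20 animals, 50 candidate sequences
groups = np.repeat([0, 1], 10)
p_corrected = max_t_permutation_test(counts, groups, n_perm=2_000)
```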
Collapse
Affiliation(s)
- David B. Kastner
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA 94143, USA
- Lead Contact
| | - Greer Williams
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA 94143, USA
| | - Cristofer Holobetz
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA 94143, USA
| | - Joseph P. Romano
- Department of Statistics, Stanford University, Stanford, CA 94305, USA
| | - Peter Dayan
- Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
| |
Collapse
|
46
|
Biderman D, Whiteway MR, Hurwitz C, Greenspan N, Lee RS, Vishnubhotla A, Warren R, Pedraja F, Noone D, Schartner M, Huntenburg JM, Khanal A, Meijer GT, Noel JP, Pan-Vazquez A, Socha KZ, Urai AE, Cunningham JP, Sawtell NB, Paninski L. Lightning Pose: improved animal pose estimation via semi-supervised learning, Bayesian ensembling, and cloud-native open-source tools. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.04.28.538703. [PMID: 37162966 PMCID: PMC10168383 DOI: 10.1101/2023.04.28.538703] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Contemporary pose estimation methods enable precise measurements of behavior via supervised deep learning with hand-labeled video frames. Although effective in many cases, the supervised approach requires extensive labeling and often produces outputs that are unreliable for downstream analyses. Here, we introduce "Lightning Pose," an efficient pose estimation package with three algorithmic contributions. First, in addition to training on a few labeled video frames, we use many unlabeled videos and penalize the network whenever its predictions violate motion continuity, multiple-view geometry, and posture plausibility (semi-supervised learning). Second, we introduce a network architecture that resolves occlusions by predicting pose on any given frame using surrounding unlabeled frames. Third, we refine the pose predictions post-hoc by combining ensembling and Kalman smoothing. Together, these components render pose trajectories more accurate and scientifically usable. We release a cloud application that allows users to label data, train networks, and predict new videos directly from the browser.
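One of the unsupervised penalties described, motion (temporal) continuity, can be sketched as follows: predicted key-points on unlabeled frames are penalized whenever they jump farther between frames than a plausibility threshold. This is an illustration of the idea, not the package's exact loss; the threshold is an assumption.

```python
# Illustrative temporal-continuity penalty on unlabeled video.
import torch

def temporal_continuity_loss(pred_kpts: torch.Tensor, max_jump: float = 5.0) -> torch.Tensor:
    """pred_kpts: (T, K, 2) predicted key-point coordinates for T consecutive frames."""
    vel = pred_kpts[1:] - pred_kpts[:-1]        # per-frame displacement
    jump = vel.norm(dim=-1)                     # (T-1, K) displacement magnitudes
    # hinge: only displacements beyond the plausibility threshold contribute
    return torch.clamp(jump - max_jump, min=0.0).mean()

loss = temporal_continuity_loss(torch.randn(16, 10, 2) * 3.0)   # 16 frames, 10 key-points
```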
Collapse
Affiliation(s)
| | | | | | | | | | | | | | | | | | | | | | - Anup Khanal
- University of California Los Angeles, Los Angeles, USA
| | | | | | | | | | | | | | | | | |
Collapse
|
47
|
Mykins M, Bridges B, Jo A, Krishnan K. Multidimensional Analysis of a Social Behavior Identifies Regression and Phenotypic Heterogeneity in a Female Mouse Model for Rett Syndrome. J Neurosci 2024; 44:e1078232023. [PMID: 38199865 PMCID: PMC10957218 DOI: 10.1523/jneurosci.1078-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Revised: 11/01/2023] [Accepted: 11/17/2023] [Indexed: 01/12/2024] Open
Abstract
Regression is a key feature of neurodevelopmental disorders such as autism spectrum disorder, Fragile X syndrome, and Rett syndrome (RTT). RTT is caused by mutations in the X-linked gene methyl-CpG-binding protein 2 (MECP2). It is characterized by an early period of typical development with subsequent regression of previously acquired motor and speech skills in girls. The syndromic phenotypes are individualistic and dynamic over time. Thus far, it has been difficult to capture these dynamics and syndromic heterogeneity in the preclinical Mecp2-heterozygous female mouse model (Het). The emergence of computational neuroethology tools allows robust analysis of complex and dynamic behaviors to model endophenotypes in preclinical models. As a first step, we used DeepLabCut, marker-less pose estimation software, to quantify trajectory kinematics, and multidimensional analysis to characterize behavioral heterogeneity in Het in the previously benchmarked, ethologically relevant social cognition task of pup retrieval. We identify two distinct phenotypes of adult Het: Het that are initially less efficient but improve over days, like wild-type mice, and Het that regress and perform worse on later days. Furthermore, regression depends on age and behavioral context and can be detected in the initial days of retrieval. Together, the novel identification of these two populations of Het suggests differential effects on neural circuitry, opens new avenues to investigate the underlying molecular and cellular mechanisms of heterogeneity, and informs the design of better studies for stratifying therapeutics.
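For context, trajectory kinematics of the kind extracted here from pose-estimation output can be computed in a few lines. The column names, body part, and frame rate below are placeholders, not the study's actual settings, and assume the pose table has been flattened to one column per coordinate.

```python
# Illustrative kinematics from tracked body-part coordinates.
import numpy as np
import pandas as pd

def kinematics(df: pd.DataFrame, bodypart: str = "snout", fps: float = 30.0):
    """Per-frame speed and total path length of one tracked body part."""
    xy = df[[f"{bodypart}_x", f"{bodypart}_y"]].to_numpy()
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)    # pixels moved between frames
    speed = step * fps                                     # pixels per second
    return speed, step.sum()
```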
Collapse
Affiliation(s)
- Michael Mykins
- Department of Biochemistry & Cellular and Molecular Biology, University of Tennessee, Knoxville, Tennessee
| | - Benjamin Bridges
- Department of Biochemistry & Cellular and Molecular Biology, University of Tennessee, Knoxville, Tennessee
| | - Angela Jo
- Department of Biochemistry & Cellular and Molecular Biology, University of Tennessee, Knoxville, Tennessee
| | - Keerthi Krishnan
- Department of Biochemistry & Cellular and Molecular Biology, University of Tennessee, Knoxville, Tennessee
| |
Collapse
|
48
|
Catto A, O’Connor R, Braunscheidel KM, Kenny PJ, Shen L. FABEL: Forecasting Animal Behavioral Events with Deep Learning-Based Computer Vision. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.03.15.584610. [PMID: 38559273 PMCID: PMC10980057 DOI: 10.1101/2024.03.15.584610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/04/2024]
Abstract
Behavioral neuroscience aims to provide a connection between neural phenomena and emergent organism-level behaviors. This requires perturbing the nervous system, observing behavioral outcomes, and comparing observed post-perturbation behavior with predicted counterfactual behavior, which in turn requires accurate behavioral forecasts. In this study we present FABEL, a deep learning method for forecasting future animal behaviors and locomotion trajectories from historical locomotion alone. We train an offline pose estimation network to predict animal body-part locations in behavioral video; sequences of pose vectors are then input to deep learning time-series forecasting models. Specifically, we train an LSTM network that predicts a future food-interaction event within a specified time window, and a Temporal Fusion Transformer that predicts future trajectories of animal body parts, which are then converted into probabilistic label forecasts. Importantly, accurate prediction of food interaction provides a basis for neurobehavioral intervention in the context of compulsive eating. We show promising results on forecasting tasks at timescales between 100 milliseconds and 5 seconds. Because the model takes only behavioral video as input, it can be adapted to any behavioral task and does not require specific physiological readouts. At the same time, these deep learning models may serve as extensible modules that can accommodate diverse signals, such as in-vivo fluorescence imaging and electrophysiology, which may improve behavior forecasts and elucidate intervention targets for desired behavioral change.
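A minimal sketch of the event-forecasting component described, an LSTM mapping a window of pose vectors to the probability of a food-interaction event within the forecast horizon, is shown below. The key-point count, hidden size, and window length are illustrative assumptions, not the paper's architecture.

```python
# Illustrative LSTM event forecaster over pose sequences.
import torch
import torch.nn as nn

class EventForecaster(nn.Module):
    def __init__(self, n_keypoints: int = 8, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_keypoints * 2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, pose_seq: torch.Tensor) -> torch.Tensor:
        # pose_seq: (batch, frames, n_keypoints * 2) flattened (x, y) coordinates
        _, (h_n, _) = self.lstm(pose_seq)
        return torch.sigmoid(self.head(h_n[-1]))   # P(event within the forecast window)

model = EventForecaster()
probs = model(torch.randn(4, 30, 16))              # 4 clips of 30 frames each
```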
Collapse
Affiliation(s)
- Adam Catto
- Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
| | - Richard O’Connor
- Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
| | - Kevin M. Braunscheidel
- Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
| | - Paul J. Kenny
- Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
| | - Li Shen
- Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Windreich Department of Artificial Intelligence and Human Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
| |
Collapse
|
49
|
Ding SS, Fox JL, Gordus A, Joshi A, Liao JC, Scholz M. Fantastic beasts and how to study them: rethinking experimental animal behavior. J Exp Biol 2024; 227:jeb247003. [PMID: 38372042 PMCID: PMC10911175 DOI: 10.1242/jeb.247003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/20/2024]
Abstract
Humans have been trying to understand animal behavior at least since recorded history. Recent rapid development of new technologies has allowed us to make significant progress in understanding the physiological and molecular mechanisms underlying behavior, a key goal of neuroethology. However, there is a tradeoff when studying animal behavior and its underlying biological mechanisms: common behavior protocols in the laboratory are designed to be replicable and controlled, but they often fail to encompass the variability and breadth of natural behavior. This Commentary proposes a framework of 10 key questions that aim to guide researchers in incorporating a rich natural context into their experimental design or in choosing a new animal study system. The 10 questions cover overarching experimental considerations that can provide a template for interspecies comparisons, enable us to develop studies in new model organisms and unlock new experiments in our quest to understand behavior.
Collapse
Affiliation(s)
- Siyu Serena Ding
- Max Planck Institute of Animal Behavior, 78464 Konstanz, Germany
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, 78464 Konstanz, Germany
| | - Jessica L. Fox
- Department of Biology, Case Western Reserve University, Cleveland, OH 44106, USA
| | - Andrew Gordus
- Department of Biology, Johns Hopkins University, Baltimore, MD 21218, USA
| | - Abhilasha Joshi
- Departments of Physiology and Psychiatry, University of California, San Francisco, CA 94158, USA
| | - James C. Liao
- Department of Biology, The Whitney Laboratory for Marine Bioscience, University of Florida, St. Augustine, FL 32080, USA
| | - Monika Scholz
- Max Planck Research Group Neural Information Flow, Max Planck Institute for Neurobiology of Behavior – caesar, 53175 Bonn, Germany
| |
Collapse
|
50
|
Le VA, Sterley TL, Cheng N, Bains JS, Murari K. Markerless Mouse Tracking for Social Experiments. eNeuro 2024; 11:ENEURO.0154-22.2023. [PMID: 38233144 PMCID: PMC10901195 DOI: 10.1523/eneuro.0154-22.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 09/18/2023] [Accepted: 10/31/2023] [Indexed: 01/19/2024] Open
Abstract
Automated behavior quantification in socially interacting animals requires accurate tracking. While many methods have been very successful and generalize well to different settings, mistaken identities and loss of information about key anatomical features remain common, although they can be alleviated by increased human effort in training or post-processing. We propose a markerless, video-based tool to simultaneously track two interacting mice of identical appearance in controlled settings, quantifying behaviors such as different types of sniffing, touching, and locomotion, and improving tracking accuracy under these settings without increased human effort. It combines conventional handcrafted tracking with deep-learning-based techniques. The tool is trained on a small number of manually annotated images from a basic experimental setup and outputs body masks and coordinates of the snout and tail-base for each mouse. The method was tested under several commonly used experimental conditions, including bedding in the cage and fiberoptic or headstage implants on the mice. Results obtained without any human corrections after the automated analysis showed a near elimination of identity switches and a ∼15% improvement in tracking accuracy over pure deep-learning-based pose estimation tracking approaches. Our approach can optionally be ensembled with such techniques for further improvement. Finally, we demonstrate an application of this approach in studies of social behavior by quantifying and comparing interactions between pairs of mice in which some lack olfaction. Together, these results suggest that our approach could be valuable for studying group behaviors in rodents, such as social interactions.
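One standard ingredient for keeping identities straight between two look-alike animals is optimal assignment of current detections to existing tracks. The sketch below does this with the Hungarian algorithm on centroid distance; it is an illustration of the general technique, not the tool's actual pipeline.

```python
# Illustrative identity assignment between consecutive frames via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_identities(track_centroids: np.ndarray, det_centroids: np.ndarray) -> dict:
    """track_centroids, det_centroids: (N, 2) centroids from the previous and current frame.
    Returns a mapping from track index to detection index minimizing total distance."""
    cost = np.linalg.norm(track_centroids[:, None, :] - det_centroids[None, :, :], axis=-1)
    track_idx, det_idx = linear_sum_assignment(cost)
    return dict(zip(track_idx.tolist(), det_idx.tolist()))

mapping = assign_identities(np.array([[10.0, 12.0], [40.0, 38.0]]),
                            np.array([[41.0, 37.0], [11.0, 13.0]]))
# {0: 1, 1: 0} -> detection order flipped between frames, identities preserved
```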
Collapse
Affiliation(s)
- Van Anh Le
- Electrical and Software Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
| | - Toni-Lee Sterley
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB T2N 1N4, Canada
| | - Ning Cheng
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB T2N 1N4, Canada
- Faculty of Veterinary Medicine, University of Calgary, Calgary, AB T2N 1N4, Canada
- Alberta Children's Hospital Research Institute, University of Calgary, Calgary, AB T2N 1N4, Canada
| | - Jaideep S Bains
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB T2N 1N4, Canada
| | - Kartikeya Murari
- Electrical and Software Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB T2N 1N4, Canada
- Biomedical Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
| |
Collapse
|