1. Dang KM, Zhang YJ, Zhang T, Wang C, Sinner A, Coronica P, Poon JKS. NeuroQuantify - An image analysis software for detection and quantification of neuron cells and neurite lengths using deep learning. J Neurosci Methods 2024; 411:110273. PMID: 39197681. DOI: 10.1016/j.jneumeth.2024.110273.
Abstract
BACKGROUND: The segmentation of cells and neurites in microscopy images of neuronal networks provides valuable quantitative information about neuron growth and neuronal differentiation, including the number of cells and neurites, neurite length and neurite orientation. This information is essential for assessing the development of neuronal networks in response to extracellular stimuli, which is useful for studying neuronal structures, for example in the study of neurodegenerative diseases and pharmaceuticals. NEW METHOD: We have developed NeuroQuantify, an open-source software package that uses deep learning to quickly and efficiently segment cells and neurites in phase contrast microscopy images. RESULTS: NeuroQuantify offers several key features: (i) automatic detection of cells and neurites; (ii) post-processing of the images for quantitative neurite length measurement based on segmentation of phase contrast microscopy images; and (iii) identification of neurite orientations. COMPARISON WITH EXISTING METHODS: NeuroQuantify overcomes some of the limitations of existing methods in the automatic and accurate analysis of neuronal structures. It has been developed for phase contrast images rather than fluorescence images. In addition to the typical functionality of cell counting, NeuroQuantify also detects and counts neurites, measures neurite lengths, and produces the neurite orientation distribution. CONCLUSIONS: We offer a valuable tool to assess network development rapidly and effectively. The user-friendly NeuroQuantify software can be freely downloaded from GitHub at https://github.com/StanleyZ0528/neural-image-segmentation and installed.
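As a rough illustration of the kind of post-processing this abstract describes (turning a neurite segmentation into a length measurement), the sketch below skeletonizes a binary mask and sums pixel-to-pixel distances; the function name, toy mask and the 0.65 µm pixel size are assumptions for the example, not values taken from NeuroQuantify.

```python
# Minimal sketch (not NeuroQuantify's code): estimate total neurite length
# from a 2D binary segmentation by skeletonizing it and summing the physical
# distances between neighbouring skeleton pixels.
import numpy as np
from skimage.morphology import skeletonize

def neurite_length_um(neurite_mask: np.ndarray, um_per_px: float = 0.65) -> float:
    """Estimate total neurite length (micrometres) from a 2D binary mask."""
    skel = skeletonize(neurite_mask.astype(bool))
    ys, xs = np.nonzero(skel)
    coords = set(zip(ys.tolist(), xs.tolist()))
    length_px = 0.0
    # Count each 8-connected link once, weighting diagonal steps by sqrt(2).
    for y, x in coords:
        for dy, dx in ((0, 1), (1, 0), (1, 1), (1, -1)):  # forward neighbours only
            if (y + dy, x + dx) in coords:
                length_px += np.hypot(dy, dx)
    return length_px * um_per_px

# Example: a straight 100-pixel segment gives roughly 99 px * 0.65 um/px.
mask = np.zeros((64, 128), dtype=bool)
mask[32, 10:110] = True
print(round(neurite_length_um(mask), 1))
```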
Affiliation(s)
- Ka My Dang: Max Planck Institute of Microstructure Physics, Weinberg 2, Halle D-06120, Germany; Max Planck-University of Toronto Centre for Neural Science and Technology, Canada
- Yi Jia Zhang: Department of Electrical and Computer Engineering, University of Toronto, 10 King's College Rd., Toronto, Ontario M5S 3G4, Canada
- Tianchen Zhang: Department of Electrical and Computer Engineering, University of Toronto, 10 King's College Rd., Toronto, Ontario M5S 3G4, Canada
- Chao Wang: Department of Electrical and Computer Engineering, University of Toronto, 10 King's College Rd., Toronto, Ontario M5S 3G4, Canada
- Anton Sinner: Max Planck Institute of Microstructure Physics, Weinberg 2, Halle D-06120, Germany
- Piero Coronica: Max Planck Computing and Data Facility, Gießenbachstraße 2, Garching 85748, Germany
- Joyce K S Poon: Max Planck Institute of Microstructure Physics, Weinberg 2, Halle D-06120, Germany; Max Planck-University of Toronto Centre for Neural Science and Technology, Canada; Department of Electrical and Computer Engineering, University of Toronto, 10 King's College Rd., Toronto, Ontario M5S 3G4, Canada
2. Chen J, Yuan Z, Xi J, Gao Z, Li Y, Zhu X, Shi YS, Guan F, Wang Y. Efficient and Accurate Semi-Automatic Neuron Tracing with Extended Reality. IEEE Trans Vis Comput Graph 2024; 30:7299-7309. PMID: 39255163. DOI: 10.1109/tvcg.2024.3456197.
Abstract
Neuron tracing, also referred to as neuron reconstruction, is the procedure for extracting the digital representation of three-dimensional neuronal morphology from stacks of microscopic images. Achieving accurate neuron tracing is critical for profiling the neuroanatomical structure at the single-cell level and analyzing neuronal circuits and projections at the whole-brain scale. However, the process often demands substantial human involvement and represents a nontrivial task. Conventional solutions for neuron tracing often contend with challenges such as non-intuitive user interactions, suboptimal data generation throughput, and ambiguous visualization. In this paper, we introduce a novel method that leverages the power of extended reality (XR) for intuitive and progressive semi-automatic neuron tracing in real time. In our method, we have defined a set of interactors for controllable and efficient interactions for neuron tracing in an immersive environment. We have also developed a GPU-accelerated automatic tracing algorithm that can generate updated neuron reconstructions in real time. In addition, we have built a visualizer for a fast and improved visual experience, particularly when working with both volumetric images and 3D objects. Our method has been successfully implemented with one virtual reality (VR) headset and one augmented reality (AR) headset, with satisfactory results. We also conducted two user studies, which demonstrated the effectiveness of the interactors and the efficiency of our method in comparison with other approaches for neuron tracing.
3. Gou L, Wang Y, Gao L, Zhong Y, Xie L, Wang H, Zha X, Shao Y, Xu H, Xu X, Yan J. Gapr for large-scale collaborative single-neuron reconstruction. Nat Methods 2024; 21:1926-1935. PMID: 38961277. DOI: 10.1038/s41592-024-02345-z.
Abstract
Whole-brain analysis of single-neuron morphology is crucial for unraveling the complex structure of the brain. However, large-scale neuron reconstruction from terabyte and even petabyte data of mammalian brains generated by state-of-the-art light microscopy is a daunting task. Here, we developed 'Gapr' (Gapr accelerates projectome reconstruction) that streamlines deep learning-based automatic reconstruction, 'automatic proofreading' that reduces human workloads at high-confidence sites, and high-throughput collaborative proofreading by crowd users through the Internet. Furthermore, Gapr offers a seamless user interface that ensures high proofreading speed per annotator, on-demand conversion for handling large datasets, flexible workflows tailored to diverse datasets and rigorous error tracking for quality control. Finally, we demonstrated Gapr's efficacy by reconstructing over 4,000 neurons in mouse brains, revealing the morphological diversity in cortical interneurons and hypothalamic neurons. Here, we present Gapr as a solution for large-scale single-neuron reconstruction projects.
Affiliation(s)
- Lingfeng Gou: Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Yanzhi Wang: Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Shanghai, China
- Le Gao: Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Yiting Zhong: Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Shanghai, China
- Lucheng Xie: Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Haifang Wang: Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Xi Zha: Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Yinqi Shao: Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Huatai Xu: Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai, China
- Xiaohong Xu: Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai, China
- Jun Yan: Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai, China; School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
4. Gliko O, Mallory M, Dalley R, Gala R, Gornet J, Zeng H, Sorensen SA, Sümbül U. High-throughput analysis of dendrite and axonal arbors reveals transcriptomic correlates of neuroanatomy. Nat Commun 2024; 15:6337. PMID: 39068160. PMCID: PMC11283452. DOI: 10.1038/s41467-024-50728-9.
Abstract
Neuronal anatomy is central to the organization and function of brain cell types. However, anatomical variability within apparently homogeneous populations of cells can obscure such insights. Here, we report large-scale automation of neuronal morphology reconstruction and analysis on a dataset of 813 inhibitory neurons characterized using the Patch-seq method, which enables measurement of multiple properties from individual neurons, including local morphology and transcriptional signature. We demonstrate that these automated reconstructions can be used in the same manner as manual reconstructions to understand the relationship between some, but not all, cellular properties used to define cell types. We uncover gene expression correlates of laminar innervation on multiple transcriptomically defined neuronal subclasses and types. In particular, our results reveal correlates of the variability in Layer 1 (L1) axonal innervation in a transcriptomically defined subpopulation of Martinotti cells in the adult mouse neocortex.
Affiliation(s)
- James Gornet: California Institute of Technology, Pasadena, CA, USA
5. Choi YK, Feng L, Jeong WK, Kim J. Connecto-informatics at the mesoscale: current advances in image processing and analysis for mapping the brain connectivity. Brain Inform 2024; 11:15. PMID: 38833195. DOI: 10.1186/s40708-024-00228-9.
Abstract
Mapping neural connections within the brain has been a fundamental goal in neuroscience, aimed at better understanding its functions and the changes that follow aging and disease. Developments in imaging technology, such as microscopy and labeling tools, have allowed researchers to visualize this connectivity through high-resolution brain-wide imaging. With this, image processing and analysis have become more crucial. However, despite the wealth of neural images generated, access to an integrated image processing and analysis pipeline to process these data is challenging due to scattered information on available tools and methods. To map the neural connections, registration to atlases and feature extraction through segmentation and signal detection are necessary. In this review, our goal is to provide an updated overview of recent advances in these image-processing methods, with a particular focus on fluorescent images of the mouse brain, and to outline a pathway toward an integrated image-processing pipeline tailored for connecto-informatics. An integrated workflow of these image-processing steps will help researchers map brain connectivity and better understand complex brain networks and their underlying functions. By highlighting the image-processing tools available for fluorescent imaging of the mouse brain, this review will contribute to a deeper grasp of connecto-informatics, paving the way for better comprehension of brain connectivity and its implications.
Affiliation(s)
- Yoon Kyoung Choi: Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea; Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Won-Ki Jeong: Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Jinhyun Kim: Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea; Department of Computer Science and Engineering, Korea University, Seoul, South Korea; KIST-SKKU Brain Research Center, SKKU Institute for Convergence, Sungkyunkwan University, Suwon, South Korea
6. Clark AS, Kalmanson Z, Morton K, Hartman J, Meyer J, San-Miguel A. An unbiased, automated platform for scoring dopaminergic neurodegeneration in C. elegans. PLoS One 2023; 18:e0281797. PMID: 37418455. PMCID: PMC10328331. DOI: 10.1371/journal.pone.0281797.
Abstract
Caenorhabditis elegans (C. elegans) has served as a simple model organism to study dopaminergic neurodegeneration, as it enables quantitative analysis of cellular and sub-cellular morphologies in live animals. These isogenic nematodes have a rapid life cycle and transparent body, making high-throughput imaging and evaluation of fluorescently tagged neurons possible. However, the current state-of-the-art method for quantifying dopaminergic degeneration requires researchers to manually examine images and score dendrites into groups of varying levels of neurodegeneration severity, which is time consuming, subject to bias, and limited in data sensitivity. We aim to overcome the pitfalls of manual neuron scoring by developing an automated, unbiased image processing algorithm to quantify dopaminergic neurodegeneration in C. elegans. The algorithm can be used on images acquired with different microscopy setups and only requires two inputs: a maximum projection image of the four cephalic neurons in the C. elegans head and the pixel size of the user's camera. We validate the platform by detecting and quantifying neurodegeneration in nematodes exposed to rotenone, cold shock, and 6-hydroxydopamine using 63x epifluorescence, 63x confocal, and 40x epifluorescence microscopy, respectively. Analysis of tubby mutant worms with altered fat storage showed that, contrary to our hypothesis, increased adiposity did not sensitize to stressor-induced neurodegeneration. We further verify the accuracy of the algorithm by comparing code-generated, categorical degeneration results with manually scored dendrites of the same experiments. The platform, which detects 20 different metrics of neurodegeneration, can provide comparative insight into how each exposure affects dopaminergic neurodegeneration patterns.
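The abstract notes that the algorithm needs only a maximum-projection image and the camera pixel size; a minimal sketch of preparing those two inputs (illustrative only, not the authors' released code; the threshold and pixel size are placeholder values) might look like this:

```python
# Sketch: build a maximum-intensity projection from a z-stack and convert a
# segmented area from pixels to physical units using the camera pixel size.
import numpy as np

def max_projection(zstack: np.ndarray) -> np.ndarray:
    """Collapse a (z, y, x) fluorescence stack into a single 2D image."""
    return zstack.max(axis=0)

def area_um2(binary_mask: np.ndarray, um_per_px: float) -> float:
    """Convert a segmented area from pixels to square micrometres."""
    return float(binary_mask.sum()) * um_per_px ** 2

stack = np.random.rand(20, 256, 256)   # placeholder z-stack
mip = max_projection(stack)
neuron_px = mip > 0.98                  # toy threshold, not from the paper
print(mip.shape, round(area_um2(neuron_px, um_per_px=0.1), 2))
```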
Affiliation(s)
- Andrew S. Clark: Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, North Carolina, United States of America
- Zachary Kalmanson: Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, North Carolina, United States of America
- Katherine Morton: Nicholas School of the Environment, Duke University, Durham, North Carolina, United States of America
- Jessica Hartman: Nicholas School of the Environment, Duke University, Durham, North Carolina, United States of America; Biochemistry and Molecular Biology, Medical University of South Carolina, Charleston, South Carolina, United States of America
- Joel Meyer: Nicholas School of the Environment, Duke University, Durham, North Carolina, United States of America
- Adriana San-Miguel: Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, North Carolina, United States of America
7. Ma R, Hao L, Tao Y, Mendoza X, Khodeiry M, Liu Y, Shyu ML, Lee RK. RGC-Net: An Automatic Reconstruction and Quantification Algorithm for Retinal Ganglion Cells Based on Deep Learning. Transl Vis Sci Technol 2023; 12:7. PMID: 37140906. PMCID: PMC10166122. DOI: 10.1167/tvst.12.5.7.
Abstract
Purpose: The purpose of this study was to develop a deep learning-based, fully automated reconstruction and quantification algorithm that automatically delineates the neurites and somas of retinal ganglion cells (RGCs). Methods: We trained a deep learning-based multi-task image segmentation model, RGC-Net, that automatically segments the neurites and somas in RGC images. A total of 166 RGC scans with manual annotations from human experts were used to develop this model, of which 132 scans were used for training and the remaining 34 were reserved as testing data. Post-processing techniques removed speckles or dead cells from the soma segmentation results to further improve the robustness of the model. Quantification analyses were also conducted to compare five different metrics obtained by our automated algorithm and by manual annotation. Results: Quantitatively, our segmentation model achieves an average foreground accuracy, background accuracy, overall accuracy, and Dice similarity coefficient of 0.692, 0.999, 0.997, and 0.691 for the neurite segmentation task, and 0.865, 0.999, 0.997, and 0.850 for the soma segmentation task, respectively. Conclusions: The experimental results demonstrate that RGC-Net can accurately and reliably reconstruct neurites and somas in RGC images. We also demonstrate that our algorithm is comparable to manually curated human annotations in quantification analyses. Translational Relevance: Our deep learning model provides a new tool that can trace and analyze RGC neurites and somas more efficiently and faster than manual analysis.
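For readers unfamiliar with the Dice similarity coefficient reported above, a minimal reference computation (a generic sketch, not taken from RGC-Net) is:

```python
# Dice similarity coefficient between a predicted mask and a manual annotation.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two half-overlapping rectangles give Dice = 0.5.
a = np.zeros((10, 10), bool); a[:, :4] = True
b = np.zeros((10, 10), bool); b[:, 2:6] = True
print(round(dice(a, b), 2))  # 0.5
```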
Affiliation(s)
- Rui Ma: Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Lili Hao: Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA; Department of Ophthalmology, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Yudong Tao: Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Ximena Mendoza: Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Mohamed Khodeiry: Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Yuan Liu: Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Mei-Ling Shyu: School of Science and Engineering, University of Missouri-Kansas City, Kansas City, MO, USA
- Richard K. Lee: Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA; Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
8. Boorboor S, Mathew S, Ananth M, Talmage D, Role LW, Kaufman AE. NeuRegenerate: A Framework for Visualizing Neurodegeneration. IEEE Trans Vis Comput Graph 2023; 29:1625-1637. PMID: 34757909. PMCID: PMC10070008. DOI: 10.1109/tvcg.2021.3127132.
Abstract
Recent advances in high-resolution microscopy have allowed scientists to better understand the underlying brain connectivity. However, due to the limitation that biological specimens can only be imaged at a single timepoint, studying changes to neural projections over time is limited to observations gathered using population analysis. In this article, we introduce NeuRegenerate, a novel end-to-end framework for the prediction and visualization of changes in neural fiber morphology within a subject across specified age-timepoints. To predict projections, we present neuReGANerator, a deep-learning network based on cycle-consistent generative adversarial network (GAN) that translates features of neuronal structures across age-timepoints for large brain microscopy volumes. We improve the reconstruction quality of the predicted neuronal structures by implementing a density multiplier and a new loss function, called the hallucination loss. Moreover, to alleviate artifacts that occur due to tiling of large input volumes, we introduce a spatial-consistency module in the training pipeline of neuReGANerator. Finally, to visualize the change in projections, predicted using neuReGANerator, NeuRegenerate offers two modes: (i) neuroCompare to simultaneously visualize the difference in the structures of the neuronal projections, from two age domains (using structural view and bounded view), and (ii) neuroMorph, a vesselness-based morphing technique to interactively visualize the transformation of the structures from one age-timepoint to the other. Our framework is designed specifically for volumes acquired using wide-field microscopy. We demonstrate our framework by visualizing the structural changes within the cholinergic system of the mouse brain between a young and old specimen.
9. Cudic M, Diamond JS, Noble JA. Unpaired mesh-to-image translation for 3D fluorescent microscopy images of neurons. Med Image Anal 2023; 86:102768. PMID: 36857945. DOI: 10.1016/j.media.2023.102768.
Abstract
While Generative Adversarial Networks (GANs) can now reliably produce realistic images in a multitude of imaging domains, they are ill-equipped to model thin, stochastic textures present in many large 3D fluorescent microscopy (FM) images acquired in biological research. This is especially problematic in neuroscience where the lack of ground truth data impedes the development of automated image analysis algorithms for neurons and neural populations. We therefore propose an unpaired mesh-to-image translation methodology for generating volumetric FM images of neurons from paired ground truths. We start by learning unique FM styles efficiently through a Gramian-based discriminator. Then, we stylize 3D voxelized meshes of previously reconstructed neurons by successively generating slices. As a result, we effectively create a synthetic microscope and can acquire realistic FM images of neurons with control over the image content and imaging configurations. We demonstrate the feasibility of our architecture and its superior performance compared to state-of-the-art image translation architectures through a variety of texture-based metrics, unsupervised segmentation accuracy, and an expert opinion test. In this study, we use 2 synthetic FM datasets and 2 newly acquired FM datasets of retinal neurons.
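The "Gramian-based discriminator" mentioned here relies on Gram matrices of feature maps to capture texture style; the small sketch below shows that core computation as a generic illustration of the idea, not this paper's implementation.

```python
# Gram matrix of a convolutional feature map: channel-by-channel texture
# correlations, commonly used to compare image "styles".
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """features: (channels, height, width) activations -> (C, C) Gram matrix."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

feat = np.random.rand(8, 32, 32)   # placeholder feature map
print(gram_matrix(feat).shape)     # (8, 8)
```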
Affiliation(s)
- Mihael Cudic: National Institutes of Health Oxford-Cambridge Scholars Program, USA; National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA; Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
- Jeffrey S Diamond: National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA
- J Alison Noble: Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
10. Clark AS, Kalmanson Z, Morton K, Hartman J, Meyer J, San-Miguel A. An unbiased, automated platform for scoring dopaminergic neurodegeneration in C. elegans. bioRxiv [Preprint] 2023:2023.02.02.526781. PMID: 36778421. PMCID: PMC9915681. DOI: 10.1101/2023.02.02.526781.
Abstract
Caenorhabditis elegans (C. elegans) has served as a simple model organism to study dopaminergic neurodegeneration, as it enables quantitative analysis of cellular and sub-cellular morphologies in live animals. These isogenic nematodes have a rapid life cycle and transparent body, making high-throughput imaging and evaluation of fluorescently tagged neurons possible. However, the current state-of-the-art method for quantifying dopaminergic degeneration requires researchers to manually examine images and score dendrites into groups of varying levels of neurodegeneration severity, which is time consuming, subject to bias, and limited in data sensitivity. We aim to overcome the pitfalls of manual neuron scoring by developing an automated, unbiased image processing algorithm to quantify dopaminergic neurodegeneration in C. elegans. The algorithm can be used on images acquired with different microscopy setups and only requires two inputs: a maximum projection image of the four cephalic neurons in the C. elegans head and the pixel size of the user's camera. We validate the platform by detecting and quantifying neurodegeneration in nematodes exposed to rotenone, cold shock, and 6-hydroxydopamine using 63x epifluorescence, 63x confocal, and 40x epifluorescence microscopy, respectively. Analysis of tubby mutant worms with altered fat storage showed that, contrary to our hypothesis, increased adiposity did not sensitize to stressor-induced neurodegeneration. We further verify the accuracy of the algorithm by comparing code-generated, categorical degeneration results with manually scored dendrites of the same experiments. The platform, which detects 19 different metrics of neurodegeneration, can provide comparative insight into how each exposure affects dopaminergic neurodegeneration patterns.
Affiliation(s)
- Andrew S Clark: Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, North Carolina, USA
- Zachary Kalmanson: Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, North Carolina, USA
- Katherine Morton: Nicholas School of the Environment, Duke University, Durham, North Carolina, USA
- Jessica Hartman: Nicholas School of the Environment, Duke University, Durham, North Carolina, USA; Biochemistry and Molecular Biology, Medical University of South Carolina, Charleston, South Carolina, USA
- Joel Meyer: Nicholas School of the Environment, Duke University, Durham, North Carolina, USA
- Adriana San-Miguel: Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, North Carolina, USA
11. Liu Y, Zhong Y, Zhao X, Liu L, Ding L, Peng H. Tracing weak neuron fibers. Bioinformatics 2022; 39:6960919. PMID: 36571479. PMCID: PMC9848051. DOI: 10.1093/bioinformatics/btac816.
Abstract
MOTIVATION: Precise reconstruction of neuronal arbors is important for circuitry mapping. Many auto-tracing algorithms have been developed toward full reconstruction. However, it is still challenging to trace the weak signals of neurite fibers that often correspond to axons. RESULTS: We propose a method, named NeuMiner, for tracing weak fibers by combining two strategies: an online sample mining strategy and a modified gamma transformation. NeuMiner improved the recall of weak signals (voxel values <20) by a large margin, from 5.1% to 27.8%. The improvement was most prominent for axons, whose recall increased 6.4-fold, compared with 2.0-fold for dendrites. Both strategies were shown to be beneficial for weak fiber recognition, and they reduced the average axonal spatial distances to gold standards by 46% and 13%, respectively. The improvement was observed with two prevalent automatic tracing algorithms and can be applied to any other tracers and image types. AVAILABILITY AND IMPLEMENTATION: Source code for NeuMiner is freely available on GitHub (https://github.com/crazylyf/neuronet/tree/semantic_fnm). Image visualization, preprocessing and tracing are conducted on the Vaa3D platform, which is accessible at the Vaa3D GitHub repository (https://github.com/Vaa3D). All training and testing images are cropped from high-resolution fMOST mouse brains downloaded from the Brain Image Library (https://www.brainimagelibrary.org/), and the corresponding gold standards are available at https://doi.brainimagelibrary.org/doi/10.35077/g.25. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
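A plain gamma transformation, of which the paper uses a modified variant, brightens weak voxels relative to bright ones; the sketch below illustrates only the basic principle, with an assumed gamma of 0.5 rather than the authors' formulation.

```python
# Gamma transform on an 8-bit volume: gamma < 1 lifts dim voxels (e.g. the
# weak axonal signals below intensity 20) more than bright ones.
import numpy as np

def gamma_transform(volume: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Apply I_out = I_max * (I / I_max)**gamma to the whole volume."""
    v = volume.astype(np.float32)
    vmax = max(float(v.max()), 1.0)
    out = vmax * (v / vmax) ** gamma
    return out.astype(volume.dtype)

vol = np.array([[5, 20, 200]], dtype=np.uint8)
print(gamma_transform(vol))  # weak values are boosted relative to bright ones
```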
Affiliation(s)
- Yufeng Liu: SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Ye Zhong: SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Xuan Zhao: SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Lijuan Liu: SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Liya Ding: SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
12. Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022; 38:5329-5339. PMID: 36303315. PMCID: PMC9750132. DOI: 10.1093/bioinformatics/btac712.
Abstract
MOTIVATION: Large-scale neuronal morphologies are essential to neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology. Although survey papers on neuron tracing from light microscopy data have appeared over the last decade, the rapid development of the field calls for an updated review focusing on new methods and notable applications. RESULTS: This review outlines neuron tracing in various scenarios with the goal of helping the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances in the increasingly popular deep-learning-enhanced methods. We highlight semi-automatic methods for single-neuron tracing of whole mammalian brains, as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we exemplify the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang: School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli: Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou: Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lijuan Liu: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
13. Ghahremani P, Boorboor S, Mirhosseini P, Gudisagar C, Ananth M, Talmage D, Role LW, Kaufman AE. NeuroConstruct: 3D Reconstruction and Visualization of Neurites in Optical Microscopy Brain Images. IEEE Trans Vis Comput Graph 2022; 28:4951-4965. PMID: 34478372. PMCID: PMC11423259. DOI: 10.1109/tvcg.2021.3109460.
Abstract
We introduce NeuroConstruct, a novel end-to-end application for the segmentation, registration, and visualization of brain volumes imaged using wide-field microscopy. NeuroConstruct offers a Segmentation Toolbox with various annotation helper functions that aid experts in effectively and precisely annotating micrometer-resolution neurites. It also offers automatic neurite segmentation using convolutional neural networks (CNNs) trained on the Toolbox annotations, and soma segmentation using thresholding. To visualize neurites in a given volume, NeuroConstruct offers hybrid rendering that combines iso-surface rendering of high-confidence classified neurites with real-time rendering of the raw volume using a 2D transfer function over voxel classification score versus voxel intensity value. For a complete reconstruction of the 3D neurites, we introduce a Registration Toolbox that provides automatic coarse-to-fine alignment of serially sectioned samples. The quantitative and qualitative analyses show that NeuroConstruct outperforms the state-of-the-art in all design aspects. NeuroConstruct was developed as a collaboration between computer scientists and neuroscientists, with an application to the study of cholinergic neurons, which are severely affected in Alzheimer's disease.
14. Räsänen N, Harju V, Joki T, Narkilahti S. Practical guide for preparation, computational reconstruction and analysis of 3D human neuronal networks in control and ischaemic conditions. Development 2022; 149:276215. PMID: 35929583. PMCID: PMC9440753. DOI: 10.1242/dev.200012.
Abstract
To obtain commensurate numerical data of neuronal network morphology in vitro, network analysis needs to follow consistent guidelines. Important factors in successful analysis are sample uniformity, suitability of the analysis method for extracting relevant data and the use of established metrics. However, for the analysis of 3D neuronal cultures, there is little coherence in the analysis methods and metrics used in different studies. Here, we present a framework for the analysis of neuronal networks in 3D. First, we selected a hydrogel that supported the growth of human pluripotent stem cell-derived cortical neurons. Second, we tested and compared two software programs for tracing multi-neuron images in three dimensions and optimized a workflow for neuronal analysis using software that was considered highly suitable for this purpose. Third, as a proof of concept, we exposed 3D neuronal networks to oxygen-glucose deprivation- and ionomycin-induced damage and showed morphological differences between the damaged networks and control samples utilizing the proposed analysis workflow. With the optimized workflow, we present a protocol for preparing, challenging, imaging and analysing 3D human neuronal cultures. Summary: An optimized protocol is presented that allows morphological, quantifiable differences between the damaged and control human neuronal networks to be detected in three-dimensional cultures.
Affiliation(s)
- Noora Räsänen: Faculty of Medicine and Health Technology, Tampere University, 33100 Tampere, Finland
- Venla Harju: Faculty of Medicine and Health Technology, Tampere University, 33100 Tampere, Finland
- Tiina Joki: Faculty of Medicine and Health Technology, Tampere University, 33100 Tampere, Finland
- Susanna Narkilahti: Faculty of Medicine and Health Technology, Tampere University, 33100 Tampere, Finland
15. Zhou H, Cao T, Liu T, Liu S, Chen L, Chen Y, Huang Q, Ye W, Zeng S, Quan T. Super-resolution Segmentation Network for Reconstruction of Packed Neurites. Neuroinformatics 2022; 20:1155-1167. PMID: 35851944. DOI: 10.1007/s12021-022-09594-3.
Abstract
Neuron reconstruction can provide the quantitative data required for measuring neuronal morphology and is crucial in brain research. However, the difficulty of reconstructing densely packed neurites, which in most cases requires massive labor for accurate reconstruction, has not been well resolved. In this work, we provide a new pathway for solving this challenge by proposing the super-resolution segmentation network (SRSNet), which builds a mapping between the neurites in the original neuronal images and their segmentation in a higher-resolution (HR) space. During the segmentation process, the distances between the boundaries of the packed neurites are enlarged, and only the central parts of the neurites are segmented. Owing to this strategy, super-resolution segmented images are produced for subsequent reconstruction. We carried out experiments on neuronal images with a voxel size of 0.2 μm × 0.2 μm × 1 μm produced by fMOST. SRSNet achieves an average F1 score of 0.88 for automatic packed-neurite reconstruction, which takes both precision and recall into account, while the average F1 scores of other state-of-the-art automatic tracing methods are less than 0.70.
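The F1 score quoted above combines precision and recall; a one-line reference computation (with made-up example values, not the paper's measurements) is:

```python
# Harmonic mean of precision and recall, i.e. the F1 score.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# e.g. precision 0.90 and recall 0.86 give F1 of about 0.88
print(round(f1_score(0.90, 0.86), 2))
```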
Affiliation(s)
- Hang Zhou: School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, China
- Tingting Cao: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Tian Liu: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Shijie Liu: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Lu Chen: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Yijun Chen: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Qing Huang: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Wei Ye: School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
- Shaoqun Zeng: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Tingwei Quan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
16. Guo S, Xue J, Liu J, Ye X, Guo Y, Liu D, Zhao X, Xiong F, Han X, Peng H. Smart imaging to empower brain-wide neuroscience at single-cell levels. Brain Inform 2022; 9:10. PMID: 35543774. PMCID: PMC9095808. DOI: 10.1186/s40708-022-00158-4.
Abstract
A deep understanding of neuronal connectivity and networks, with detailed cell typing across brain regions, is necessary to unravel the mechanisms behind emotional and memory functions as well as to find treatments for brain impairment. Brain-wide imaging with single-cell resolution provides unique advantages for accessing the morphological features of a neuron and for investigating the connectivity of neuron networks, which has led to exciting discoveries over the past years based on animal models, such as rodents. Nonetheless, high-throughput systems are in urgent demand to support studies of neural morphologies at larger scale and in more detail, as well as to enable research on non-human primate (NHP) and human brains. The advances in artificial intelligence (AI) and computational resources bring great opportunities for 'smart' imaging systems, i.e., to automate, speed up, optimize and upgrade imaging systems with AI and computational strategies. In this light, we review the important computational techniques that can support smart systems in brain-wide imaging at single-cell resolution.
Affiliation(s)
- Shuxia Guo: Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Jie Xue: Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Jian Liu: Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Xiangqiao Ye: Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Yichen Guo: Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Di Liu: Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Xuan Zhao: Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Feng Xiong: Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Xiaofeng Han: Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
- Hanchuan Peng: Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
17. Gupta C, Chandrashekar P, Jin T, He C, Khullar S, Chang Q, Wang D. Bringing machine learning to research on intellectual and developmental disabilities: taking inspiration from neurological diseases. J Neurodev Disord 2022; 14:28. PMID: 35501679. PMCID: PMC9059371. DOI: 10.1186/s11689-022-09438-w.
Abstract
Intellectual and Developmental Disabilities (IDDs), such as Down syndrome, Fragile X syndrome, Rett syndrome, and autism spectrum disorder, usually manifest at birth or early childhood. IDDs are characterized by significant impairment in intellectual and adaptive functioning, and both genetic and environmental factors underpin IDD biology. Molecular and genetic stratification of IDDs remain challenging mainly due to overlapping factors and comorbidity. Advances in high throughput sequencing, imaging, and tools to record behavioral data at scale have greatly enhanced our understanding of the molecular, cellular, structural, and environmental basis of some IDDs. Fueled by the "big data" revolution, artificial intelligence (AI) and machine learning (ML) technologies have brought a whole new paradigm shift in computational biology. Evidently, the ML-driven approach to clinical diagnoses has the potential to augment classical methods that use symptoms and external observations, hoping to push the personalized treatment plan forward. Therefore, integrative analyses and applications of ML technology have a direct bearing on discoveries in IDDs. The application of ML to IDDs can potentially improve screening and early diagnosis, advance our understanding of the complexity of comorbidity, and accelerate the identification of biomarkers for clinical research and drug development. For more than five decades, the IDDRC network has supported a nexus of investigators at centers across the USA, all striving to understand the interplay between various factors underlying IDDs. In this review, we introduced fast-increasing multi-modal data types, highlighted example studies that employed ML technologies to illuminate factors and biological mechanisms underlying IDDs, as well as recent advances in ML technologies and their applications to IDDs and other neurological diseases. We discussed various molecular, clinical, and environmental data collection modes, including genetic, imaging, phenotypical, and behavioral data types, along with multiple repositories that store and share such data. Furthermore, we outlined some fundamental concepts of machine learning algorithms and presented our opinion on specific gaps that will need to be filled to accomplish, for example, reliable implementation of ML-based diagnosis technology in IDD clinics. We anticipate that this review will guide researchers to formulate AI and ML-based approaches to investigate IDDs and related conditions.
Affiliation(s)
- Chirag Gupta: Waisman Center, University of Wisconsin-Madison, Madison, WI, 53705, USA; Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Pramod Chandrashekar: Waisman Center, University of Wisconsin-Madison, Madison, WI, 53705, USA; Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Ting Jin: Waisman Center, University of Wisconsin-Madison, Madison, WI, 53705, USA; Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Chenfeng He: Waisman Center, University of Wisconsin-Madison, Madison, WI, 53705, USA; Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Saniya Khullar: Waisman Center, University of Wisconsin-Madison, Madison, WI, 53705, USA; Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Qiang Chang: Waisman Center, University of Wisconsin-Madison, Madison, WI, 53705, USA; Department of Medical Genetics, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, 53705, USA; Department of Neurology, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, 53705, USA
- Daifeng Wang: Waisman Center, University of Wisconsin-Madison, Madison, WI, 53705, USA; Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, WI, 53706, USA; Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI, 53706, USA
18. Chen W, Liu M, Du H, Radojevic M, Wang Y, Meijering E. Deep-Learning-Based Automated Neuron Reconstruction From 3D Microscopy Images Using Synthetic Training Images. IEEE Trans Med Imaging 2022; 41:1031-1042. PMID: 34847022. DOI: 10.1109/tmi.2021.3130934.
Abstract
Digital reconstruction of neuronal structures from 3D microscopy images is critical for the quantitative investigation of brain circuits and functions. It is a challenging task that would greatly benefit from automatic neuron reconstruction methods. In this paper, we propose a novel method called SPE-DNR that combines spherical-patches extraction (SPE) and deep-learning for neuron reconstruction (DNR). Based on 2D Convolutional Neural Networks (CNNs) and the intensity distribution features extracted by SPE, it determines the tracing directions and classifies voxels into foreground or background. This way, starting from a set of seed points, it automatically traces the neurite centerlines and determines when to stop tracing. To avoid errors caused by imperfect manual reconstructions, we develop an image synthesizing scheme to generate synthetic training images with exact reconstructions. This scheme simulates 3D microscopy imaging conditions as well as structural defects, such as gaps and abrupt radii changes, to improve the visual realism of the synthetic images. To demonstrate the applicability and generalizability of SPE-DNR, we test it on 67 real 3D neuron microscopy images from three datasets. The experimental results show that the proposed SPE-DNR method is robust and competitive compared with other state-of-the-art neuron reconstruction methods.
19. Hidden Markov modeling for maximum probability neuron reconstruction. Commun Biol 2022; 5:388. PMID: 35468989. PMCID: PMC9038756. DOI: 10.1038/s42003-022-03320-0.
Abstract
Recent advances in brain clearing and imaging have made it possible to image entire mammalian brains at sub-micron resolution. These images offer the potential to assemble brain-wide atlases of neuron morphology, but manual neuron reconstruction remains a bottleneck. Several automatic reconstruction algorithms exist, but most focus on single neuron images. In this paper, we present a probabilistic reconstruction method, ViterBrain, which combines a hidden Markov state process that encodes neuron geometry with a random field appearance model of neuron fluorescence. ViterBrain utilizes dynamic programming to compute the global maximizer of what we call the most probable neuron path. We applied our algorithm to imperfect image segmentations, and showed that it can follow axons in the presence of noise or nearby neurons. We also provide an interactive framework where users can trace neurons by fixing start and endpoints. ViterBrain is available in our open-source Python package brainlit. ViterBrain is an automated probabilistic reconstruction method that can reconstruct neuronal geometry and processes from microscopy images with code available in the open-source Python package, brainlit.
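The "most probable neuron path" described above is computed with a Viterbi-style dynamic program; the toy hidden Markov example below illustrates that recursion in generic form and is not the ViterBrain or brainlit API (the states, transitions and emissions are invented placeholder values).

```python
# Generic Viterbi recursion: find the most probable state sequence of an HMM.
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """log_init: (S,), log_trans: (S, S), log_emit: (T, S) -> best state path."""
    T, S = log_emit.shape
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0] = log_init + log_emit[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_trans   # (prev state, next state)
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + log_emit[t]
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):                  # backtrace the argmax chain
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Tiny 2-state example with 3 observations.
print(viterbi(np.log([0.6, 0.4]),
              np.log([[0.7, 0.3], [0.4, 0.6]]),
              np.log([[0.9, 0.1], [0.2, 0.8], [0.9, 0.1]])))
```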
20. Liu Y, Foustoukos G, Crochet S, Petersen CC. Axonal and Dendritic Morphology of Excitatory Neurons in Layer 2/3 Mouse Barrel Cortex Imaged Through Whole-Brain Two-Photon Tomography and Registered to a Digital Brain Atlas. Front Neuroanat 2022; 15:791015. PMID: 35145380. PMCID: PMC8821665. DOI: 10.3389/fnana.2021.791015.
Abstract
Communication between cortical areas contributes importantly to sensory perception and cognition. On the millisecond time scale, information is signaled from one brain area to another by action potentials propagating across long-range axonal arborizations. Here, we develop and test methodology for imaging and annotating the brain-wide axonal arborizations of individual excitatory layer 2/3 neurons in mouse barrel cortex through single-cell electroporation and two-photon serial section tomography followed by registration to a digital brain atlas. Each neuron had an extensive local axon within the barrel cortex. In addition, individual neurons innervated subsets of secondary somatosensory cortex; primary somatosensory cortex for upper limb, trunk, and lower limb; primary and secondary motor cortex; visual and auditory cortical regions; dorsolateral striatum; and various fiber bundles. In the future, it will be important to assess if the diversity of axonal projections across individual layer 2/3 mouse barrel cortex neurons is accompanied by functional differences in their activity patterns.
Affiliation(s)
- Carl C.H. Petersen: Laboratory of Sensory Processing, Brain Mind Institute, Faculty of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
21. Newmaster KT, Kronman FA, Wu YT, Kim Y. Seeing the Forest and Its Trees Together: Implementing 3D Light Microscopy Pipelines for Cell Type Mapping in the Mouse Brain. Front Neuroanat 2022; 15:787601. PMID: 35095432. PMCID: PMC8794814. DOI: 10.3389/fnana.2021.787601.
Abstract
The brain is composed of diverse neuronal and non-neuronal cell types with complex regional connectivity patterns that create the anatomical infrastructure underlying cognition. Remarkable advances in neuroscience techniques enable labeling and imaging of these individual cell types and their interactions throughout intact mammalian brains at a cellular resolution allowing neuroscientists to examine microscopic details in macroscopic brain circuits. Nevertheless, implementing these tools is fraught with many technical and analytical challenges with a need for high-level data analysis. Here we review key technical considerations for implementing a brain mapping pipeline using the mouse brain as a primary model system. Specifically, we provide practical details for choosing methods including cell type specific labeling, sample preparation (e.g., tissue clearing), microscopy modalities, image processing, and data analysis (e.g., image registration to standard atlases). We also highlight the need to develop better 3D atlases with standardized anatomical labels and nomenclature across species and developmental time points to extend the mapping to other species including humans and to facilitate data sharing, confederation, and integrative analysis. In summary, this review provides key elements and currently available resources to consider while developing and implementing high-resolution mapping methods.
Collapse
Affiliation(s)
- Kyra T Newmaster
- Department of Neural and Behavioral Sciences, The Pennsylvania State University, Hershey, PA, United States
| | - Fae A Kronman
- Department of Neural and Behavioral Sciences, The Pennsylvania State University, Hershey, PA, United States
| | - Yuan-Ting Wu
- Department of Neural and Behavioral Sciences, The Pennsylvania State University, Hershey, PA, United States
| | - Yongsoo Kim
- Department of Neural and Behavioral Sciences, The Pennsylvania State University, Hershey, PA, United States
| |
Collapse
|
22
|
Yang B, Huang J, Wu G, Yang J. Classifying the tracing difficulty of 3D neuron image blocks based on deep learning. Brain Inform 2021; 8:25. [PMID: 34739611 PMCID: PMC8571474 DOI: 10.1186/s40708-021-00146-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2021] [Accepted: 10/22/2021] [Indexed: 11/13/2022] Open
Abstract
Quickly and accurately tracing neuronal morphologies in large-scale volumetric microscopy data is a very challenging task. Most automatic algorithms for tracing multiple neurons in a whole brain are designed under the Ultra-Tracer framework, which begins the tracing of a neuron from its soma and traces all signals via a block-by-block strategy. Some neuron image blocks are easy to trace and their automatic reconstructions are very accurate, while others are difficult and their automatic reconstructions are inaccurate or incomplete. The former are called low Tracing Difficulty Blocks (low-TDBs), while the latter are called high Tracing Difficulty Blocks (high-TDBs). We design a model named 3D-SSM to classify the tracing difficulty of 3D neuron image blocks, based on a 3D Residual neural Network (3D-ResNet), a Fully Connected Neural Network (FCNN) and a Long Short-Term Memory network (LSTM). 3D-SSM contains three modules: Structure Feature Extraction (SFE), Sequence Information Extraction (SIE) and Model Fusion (MF). SFE utilizes a 3D-ResNet and an FCNN to extract two kinds of features from 3D image blocks and their corresponding automatic reconstruction blocks. SIE uses two LSTMs to learn sequence information hidden in 3D image blocks. MF adopts a concatenation operation and an FCNN to combine outputs from SIE. 3D-SSM can be used as a stopping condition of an automatic tracing algorithm in the Ultra-Tracer framework. With its help, neuronal signals in low-TDBs can be traced by the automatic algorithm, while those in high-TDBs can be reconstructed by annotators. 12,732 training samples and 5,342 test samples were constructed from neuron images of a whole mouse brain. 3D-SSM achieves classification accuracy rates of 87.04% on the training set and 84.07% on the test set. Furthermore, the trained 3D-SSM was tested on samples from another whole mouse brain and its accuracy rate was 83.21%.
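As a rough illustration of the fusion idea (a small 3D convolutional branch for structural features combined with a recurrent branch over image slices, then concatenated for classification), the following PyTorch sketch builds a toy two-class block classifier. It is not the published 3D-SSM; the layer sizes, block shape, and single-branch simplification are assumptions made for brevity:

```python
import torch
import torch.nn as nn

class TracingDifficultyClassifier(nn.Module):
    """Minimal sketch (not the published 3D-SSM): fuse a small 3D conv branch
    with an LSTM over z-slices, then classify a block as low/high tracing
    difficulty. Layer sizes are illustrative assumptions."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(            # structure-feature branch
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # -> (B, 8, 1, 1, 1)
        )
        self.lstm = nn.LSTM(input_size=32 * 32, hidden_size=32,
                            batch_first=True)  # sequence branch over z-slices
        self.head = nn.Sequential(             # fusion by concatenation
            nn.Linear(8 + 32, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, vol):                    # vol: (B, 1, D, 32, 32)
        conv_feat = self.conv(vol).flatten(1)  # (B, 8)
        slices = vol.squeeze(1).flatten(2)     # (B, D, 32*32)
        _, (h, _) = self.lstm(slices)          # h: (1, B, 32)
        fused = torch.cat([conv_feat, h.squeeze(0)], dim=1)
        return self.head(fused)

model = TracingDifficultyClassifier()
logits = model(torch.randn(4, 1, 16, 32, 32))  # four 16x32x32 blocks
print(logits.shape)                            # torch.Size([4, 2])
```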
Collapse
Affiliation(s)
- Bin Yang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
| | - Jiajin Huang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
| | - Gaowei Wu
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Jian Yang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China.
- Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China.
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China.
| |
Collapse
|
23
|
Chen X, Zhang C, Zhao J, Xiong Z, Zha ZJ, Wu F. Weakly Supervised Neuron Reconstruction From Optical Microscopy Images With Morphological Priors. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3205-3216. [PMID: 33999814 DOI: 10.1109/tmi.2021.3080695] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Manually labeling neurons from high-resolution but noisy and low-contrast optical microscopy (OM) images is tedious. As a result, the lack of annotated data poses a key challenge when applying deep learning techniques for reconstructing neurons from noisy and low-contrast OM images. While traditional tracing methods provide a possible way to efficiently generate labels for supervised network training, the generated pseudo-labels contain many noisy and incorrect labels, which lead to severe performance degradation. On the other hand, the publicly available dataset, BigNeuron, provides a large number of single 3D neurons that are reconstructed using various imaging paradigms and tracing methods. Though the raw OM images are not fully available for these neurons, they convey essential morphological priors for complex 3D neuron structures. In this paper, we propose a new approach to exploit morphological priors from neurons that have been reconstructed for training a deep neural network to extract neuron signals from OM images. We integrate a deep segmentation network in a generative adversarial network (GAN), expecting the segmentation network to be weakly supervised by pseudo-labels at the pixel level while utilizing the supervision of previously reconstructed neurons at the morphology level. In our morphological-prior-guided neuron reconstruction GAN, named MP-NRGAN, the segmentation network extracts neuron signals from raw images, and the discriminator network encourages the extracted neurons to follow the morphology distribution of reconstructed neurons. Comprehensive experiments on the public VISoR-40 dataset and BigNeuron dataset demonstrate that our proposed MP-NRGAN outperforms state-of-the-art approaches with less training effort.
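The core training idea, pixel-level supervision from noisy pseudo-labels combined with an adversarial morphology term, can be sketched as below. This is a simplified stand-in, not the published MP-NRGAN: the tiny networks, the loss weighting, and the way prior masks are fed to the discriminator are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Segmentation network ("generator") and a morphology discriminator;
# shapes are illustrative assumptions, not the published architecture.
seg_net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv3d(8, 1, 3, padding=1))
disc = nn.Sequential(nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 1))
opt_g = torch.optim.Adam(seg_net.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

def adv_loss(logit, is_real):
    target = torch.ones_like(logit) if is_real else torch.zeros_like(logit)
    return F.binary_cross_entropy_with_logits(logit, target)

def train_step(raw, pseudo_label, prior_mask, lam=0.1):
    # Discriminator: masks rendered from previously reconstructed neurons
    # count as "real" morphology, the current prediction as "fake".
    with torch.no_grad():
        fake = torch.sigmoid(seg_net(raw))
    d_loss = adv_loss(disc(prior_mask), True) + adv_loss(disc(fake), False)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Segmentation net: noisy pixel-level pseudo-labels plus an adversarial
    # term rewarding predictions the discriminator takes for real morphology.
    logits = seg_net(raw)
    g_loss = (F.binary_cross_entropy_with_logits(logits, pseudo_label)
              + lam * adv_loss(disc(torch.sigmoid(logits)), True))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

raw = torch.randn(2, 1, 16, 32, 32)
pseudo = (torch.rand_like(raw) > 0.9).float()
prior = (torch.rand_like(raw) > 0.9).float()
print(train_step(raw, pseudo, prior))
```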
Collapse
|
24
|
Huang Q, Cao T, Chen Y, Li A, Zeng S, Quan T. Automated Neuron Tracing Using Content-Aware Adaptive Voxel Scooping on CNN Predicted Probability Map. Front Neuroanat 2021; 15:712842. [PMID: 34497493 PMCID: PMC8419427 DOI: 10.3389/fnana.2021.712842] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Accepted: 07/29/2021] [Indexed: 11/23/2022] Open
Abstract
Neuron tracing, as the essential step for neural circuit building and brain information flow analysis, plays an important role in the understanding of brain organization and function. Though many methods have been proposed, automatic and accurate neuron tracing from optical images remains challenging. Current methods often have trouble tracing the complex tree-like distorted structures and broken parts of neurites against a noisy background. To address these issues, we propose a method for accurate neuron tracing using content-aware adaptive voxel scooping on a convolutional neural network (CNN) predicted probability map. First, a 3D residual CNN was applied as preprocessing to predict the object probability and suppress high noise. Then, instead of tracing on the binary image produced by maximum classification, an adaptive voxel scooping method was presented for successive neurite tracing on the probability map, based on the internal content properties (distance, connectivity, and probability continuity along direction) of the neurite. Last, the neuron tree graph was built using the length-first criterion. The proposed method was evaluated on the public BigNeuron datasets and fluorescence micro-optical sectioning tomography (fMOST) datasets and outperformed current state-of-the-art methods on images with neurites that had broken parts and complex structures. The high-accuracy tracing demonstrates the potential of the proposed method for neuron tracing at large scale.
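A minimal sketch of growing a neurite region on a CNN probability map under content-aware acceptance rules is shown below. It is a stand-in for the published voxel-scooping procedure with made-up thresholds: a voxel is added only if its probability is high enough and does not drop too sharply relative to the voxel it was reached from, loosely encoding the probability-continuity rule:

```python
import numpy as np
from collections import deque

def scoop_from_seed(prob, seed, p_min=0.5, max_drop=0.2):
    """Grow a region from a seed voxel on a probability map, accepting
    26-connected neighbours whose probability stays above p_min and does
    not drop by more than max_drop relative to their parent voxel."""
    visited = np.zeros(prob.shape, dtype=bool)
    queue = deque([seed])
    visited[seed] = True
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if not (0 <= nz < prob.shape[0] and 0 <= ny < prob.shape[1]
                    and 0 <= nx < prob.shape[2]):
                continue
            if visited[nz, ny, nx]:
                continue
            if (prob[nz, ny, nx] >= p_min
                    and prob[z, y, x] - prob[nz, ny, nx] <= max_drop):
                visited[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return visited  # boolean mask of scooped voxels

# Toy example: a bright "neurite" running along the x axis of a noisy map.
prob = np.random.rand(8, 8, 32) * 0.3
prob[4, 4, :] = 0.9
mask = scoop_from_seed(prob, seed=(4, 4, 0))
print(mask.sum(), "voxels scooped")
```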
Collapse
Affiliation(s)
- Qing Huang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Tingting Cao
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Yijun Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
25
|
Yuval O, Iosilevskii Y, Meledin A, Podbilewicz B, Shemesh T. Neuron tracing and quantitative analyses of dendritic architecture reveal symmetrical three-way-junctions and phenotypes of git-1 in C. elegans. PLoS Comput Biol 2021; 17:e1009185. [PMID: 34280180 PMCID: PMC8321406 DOI: 10.1371/journal.pcbi.1009185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2020] [Revised: 07/29/2021] [Accepted: 06/15/2021] [Indexed: 11/18/2022] Open
Abstract
Complex dendritic trees are a distinctive feature of neurons. Alterations to dendritic morphology are associated with developmental, behavioral and neurodegenerative changes. The highly-arborized PVD neuron of C. elegans serves as a model to study dendritic patterning; however, quantitative, objective and automated analyses of PVD morphology are missing. Here, we present a method for neuronal feature extraction, based on deep-learning and fitting algorithms. The extracted neuronal architecture is represented by a database of structural elements for abstracted analysis. We obtain excellent automatic tracing of PVD trees and uncover that dendritic junctions are unevenly distributed. Surprisingly, these junctions are three-way-symmetrical on average, while dendritic processes are arranged orthogonally. We quantify the effect of mutation in git-1, a regulator of dendritic spine formation, on PVD morphology and discover a localized reduction in junctions. Our findings shed new light on PVD architecture, demonstrating the effectiveness of our objective analyses of dendritic morphology and suggest molecular control mechanisms.
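One of the abstracted analyses mentioned, measuring the angles at dendritic three-way junctions, can be sketched on a simple graph representation of the extracted skeleton. This is an illustrative computation, not the paper's actual feature-extraction code; the 2D planar treatment is an assumption:

```python
import numpy as np

def junction_angles(points, edges):
    """For every node with exactly three branches, return the three planar
    angles between them; a symmetric three-way junction gives roughly 120
    degrees each. points: {node_id: (x, y)}, edges: list of node-id pairs."""
    neighbours = {}
    for a, b in edges:
        neighbours.setdefault(a, []).append(b)
        neighbours.setdefault(b, []).append(a)
    result = {}
    for node, nbrs in neighbours.items():
        if len(nbrs) != 3:
            continue
        p = np.asarray(points[node], dtype=float)
        headings = []
        for n in nbrs:
            dx, dy = np.asarray(points[n], dtype=float) - p
            headings.append(np.degrees(np.arctan2(dy, dx)))
        headings.sort()
        result[node] = [headings[1] - headings[0],
                        headings[2] - headings[1],
                        360.0 - (headings[2] - headings[0])]
    return result

# Toy junction with branches at 0, 120 and 240 degrees.
pts = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (-0.5, 0.866), 3: (-0.5, -0.866)}
print(junction_angles(pts, [(0, 1), (0, 2), (0, 3)]))
```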
Collapse
Affiliation(s)
- Omer Yuval
- Faculty of Biology, Technion–Israel Institute of Technology, Haifa, Israel
- School of Computing, Faculty of Engineering and Physical Sciences, University of Leeds, Leeds, United Kingdom
| | - Yael Iosilevskii
- Faculty of Biology, Technion–Israel Institute of Technology, Haifa, Israel
| | - Anna Meledin
- Faculty of Biology, Technion–Israel Institute of Technology, Haifa, Israel
| | | | - Tom Shemesh
- Faculty of Biology, Technion–Israel Institute of Technology, Haifa, Israel
| |
Collapse
|
26
|
Chen ZS, Pesaran B. Improving scalability in systems neuroscience. Neuron 2021; 109:1776-1790. [PMID: 33831347 PMCID: PMC8178195 DOI: 10.1016/j.neuron.2021.03.025] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2020] [Revised: 03/11/2021] [Accepted: 03/16/2021] [Indexed: 12/30/2022]
Abstract
Emerging technologies to acquire data at increasingly greater scales promise to transform discovery in systems neuroscience. However, the current exponential growth in the scale of data acquisition is a double-edged sword. Scaling up data acquisition can speed up the cycle of discovery, but it can also lead to misinterpretation of results or even slow the cycle down because of the challenges posed by the curse of high-dimensional data. Active, adaptive, closed-loop experimental paradigms use hardware and algorithms optimized for time-critical computation to provide feedback that interprets the observations and tests hypotheses in order to actively update the stimulus or stimulation parameters. In this perspective, we review important concepts of active and adaptive experiments and discuss how selectively constraining the dimensionality and optimizing strategies at different stages of the discovery loop can help mitigate the curse of high-dimensional data. Active and adaptive closed-loop experimental paradigms can speed up discovery despite an exponentially increasing data scale, offering a road map to timely and iterative hypothesis revision and discovery in an era of exponential growth in neuroscience.
Collapse
Affiliation(s)
- Zhe Sage Chen
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY 10016, USA; Neuroscience Institute, NYU School of Medicine, New York, NY 10016, USA.
| | - Bijan Pesaran
- Neuroscience Institute, NYU School of Medicine, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA; Department of Neurology, New York University School of Medicine, New York, NY 10016, USA.
| |
Collapse
|
27
|
Abstract
Brain scientists are now capable of collecting more data in a single experiment than researchers a generation ago might have collected over an entire career. Indeed, the brain itself seems to thirst for more and more data. Such digital information not only comprises individual studies but is also increasingly shared and made openly available for secondary, confirmatory, and/or combined analyses. Numerous web resources now exist containing data across spatiotemporal scales. Data processing workflow technologies running via cloud-enabled computing infrastructures allow for large-scale processing. Such a move toward greater openness is fundamentally changing how brain science results are communicated and linked to available raw data and processed results. Ethical, professional, and motivational issues challenge the whole-scale commitment to data-driven neuroscience. Nevertheless, fueled by government investments into primary brain data collection coupled with increased sharing and community pressure challenging the dominant publishing model, large-scale brain and data science is here to stay.
Collapse
Affiliation(s)
- John Darrell Van Horn
- Department of Psychology, University of Virginia, Charlottesville, Virginia, USA
- School of Data Science, University of Virginia, Charlottesville, Virginia, USA
| |
Collapse
|
28
|
Simionato G, Hinkelmann K, Chachanidze R, Bianchi P, Fermo E, van Wijk R, Leonetti M, Wagner C, Kaestner L, Quint S. Red blood cell phenotyping from 3D confocal images using artificial neural networks. PLoS Comput Biol 2021; 17:e1008934. [PMID: 33983926 PMCID: PMC8118337 DOI: 10.1371/journal.pcbi.1008934] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2020] [Accepted: 04/01/2021] [Indexed: 12/15/2022] Open
Abstract
The investigation of cell shapes mostly relies on the manual classification of 2D images, resulting in a subjective and time-consuming evaluation based on only a portion of the cell surface. We present a dual-stage neural network architecture for analyzing fine shape details from confocal microscopy recordings in 3D. The system, tested on red blood cells, uses training data from both healthy donors and patients with a congenital blood disease, namely hereditary spherocytosis. Characteristic shape features are revealed from the spherical harmonics spectrum of each cell and are automatically processed to create a reproducible and unbiased shape recognition and classification. The results show the relation between the particular genetic mutation causing the disease and the shape profile. With the obtained 3D phenotypes, we suggest our method for the diagnostics and theragnostics of blood diseases. Besides the application employed in this study, our algorithms can be easily adapted for the 3D shape phenotyping of other cell types and extend their use to other applications, such as industrial automated 3D quality control.
Collapse
Affiliation(s)
- Greta Simionato
- Department of Experimental Physics, Saarland University, Campus E2.6, Saarbrücken, Germany
- Institute for Clinical and Experimental Surgery, Saarland University, Campus University Hospital, Homburg, Germany
| | - Konrad Hinkelmann
- Department of Experimental Physics, Saarland University, Campus E2.6, Saarbrücken, Germany
| | - Revaz Chachanidze
- Department of Experimental Physics, Saarland University, Campus E2.6, Saarbrücken, Germany
- CNRS, University Grenoble Alpes, Grenoble INP, LRP, Grenoble, France
| | - Paola Bianchi
- Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, Milano, Italy
| | - Elisa Fermo
- Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, Milano, Italy
| | - Richard van Wijk
- Department of Clinical Chemistry & Haematology, University Medical Center Utrecht, Utrecht, The Netherlands
| | - Marc Leonetti
- CNRS, University Grenoble Alpes, Grenoble INP, LRP, Grenoble, France
| | - Christian Wagner
- Department of Experimental Physics, Saarland University, Campus E2.6, Saarbrücken, Germany
- Physics and Materials Science Research Unit, University of Luxembourg, Luxembourg City, Luxembourg
| | - Lars Kaestner
- Department of Experimental Physics, Saarland University, Campus E2.6, Saarbrücken, Germany
- Theoretical Medicine and Biosciences, Saarland University, Campus University Hospital, Homburg, Germany
| | - Stephan Quint
- Department of Experimental Physics, Saarland University, Campus E2.6, Saarbrücken, Germany
- Cysmic GmbH, Saarland University, Saarbrücken, Germany
| |
Collapse
|
29
|
Yang B, Chen W, Luo H, Tan Y, Liu M, Wang Y. Neuron Image Segmentation via Learning Deep Features and Enhancing Weak Neuronal Structures. IEEE J Biomed Health Inform 2021; 25:1634-1645. [PMID: 32809948 DOI: 10.1109/jbhi.2020.3017540] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Neuron morphology reconstruction (tracing) in 3D volumetric images is critical for neuronal research. However, most existing neuron tracing methods are not applicable in challenging datasets where the neuron images are contaminated by noise or contain weak filament signals. In this paper, we present a two-stage 3D neuron segmentation approach via learning deep features and enhancing weak neuronal structures, to reduce the impact of image noise in the data and enhance the weak-signal neuronal structures. In the first stage, we train a voxel-wise multi-level fully convolutional network (FCN), which specializes in learning deep features, to obtain the 3D neuron image segmentation maps in an end-to-end manner. In the second stage, a ray-shooting model is employed to detect the discontinued segments in the segmentation results of the first stage; the local neuron diameter at the broken point is estimated and the direction of the filamentary fragment is detected by the rayburst sampling algorithm. Then, a Hessian-repair model is built to repair the broken structures, by enhancing weak neuronal structures in a fibrous structure determined by the estimated local neuron diameter and the filamentary fragment direction. Experimental results demonstrate that our proposed segmentation approach achieves better segmentation performance than other state-of-the-art methods for 3D neuron segmentation. Compared with the neuron reconstruction results on the segmented images produced by other segmentation methods, the proposed approach gains 47.83% and 34.83% improvements in the average distance scores. The average Precision and Recall rates of the branch point detection with our proposed method are 38.74% and 22.53% higher than the detection results without segmentation.
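The second-stage repair relies on Hessian-based enhancement of weak fibrous structures. A minimal sketch of such a tubularity response (not the paper's exact Hessian-repair formulation, and without the ray-shooting and rayburst steps) is shown below using only NumPy and SciPy:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tubularity(volume, sigma=1.0):
    """Simple Hessian-based measure for bright tube-like structures: the two
    largest-magnitude eigenvalues are strongly negative across the fibre while
    the remaining one (along the fibre) stays near zero."""
    v = gaussian_filter(volume.astype(float), sigma)
    grads = np.gradient(v)
    hessian = np.empty(volume.shape + (3, 3))
    for i in range(3):
        gi = np.gradient(grads[i])          # second derivatives
        for j in range(3):
            hessian[..., i, j] = gi[j]
    eig = np.linalg.eigvalsh(hessian)       # eigenvalues in ascending order
    lam1, lam2, lam3 = eig[..., 0], eig[..., 1], eig[..., 2]
    # Strong response where lam1, lam2 << 0 and lam3 is close to zero.
    response = np.clip(-lam1, 0, None) * np.clip(-lam2, 0, None)
    response *= np.exp(-np.abs(lam3))
    return response / (response.max() + 1e-12)

# Toy volume with a faint tube along the x axis.
vol = np.random.rand(16, 16, 64) * 0.1
vol[8, 8, :] += 0.5
print(tubularity(vol).max())
```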
Collapse
|
30
|
Zhang T, Zeng Y, Zhang Y, Zhang X, Shi M, Tang L, Zhang D, Xu B. Neuron type classification in rat brain based on integrative convolutional and tree-based recurrent neural networks. Sci Rep 2021; 11:7291. [PMID: 33790380 PMCID: PMC8012629 DOI: 10.1038/s41598-021-86780-4] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2019] [Accepted: 03/17/2021] [Indexed: 11/24/2022] Open
Abstract
Anatomy-based studies of cellular complexity in the nervous system have shown that morphology offers more practical and objective advantages than molecular, physiological, or evolutionary perspectives. However, morphology-based neuron type classification in the whole rat brain is challenging, given the significant number of neuron types, limited reconstructed neuron samples, and diverse data formats. Here, we report that different types of deep neural network modules may well process different kinds of features and that integrating these submodules is powerful for the representation and classification of neuron types. For SWC-format data, which are compressed but unstructured, we construct a tree-based recurrent neural network (Tree-RNN) module. For 2D or 3D slice-format data, which are structured but with large volumes of pixels, we construct a convolutional neural network (CNN) module. We also generate a virtually simulated dataset with two classes, reconstruct a CASIA rat-neuron dataset with 2.6 million neurons without labels, and select the NeuroMorpho-rat dataset with 35,000 neurons containing hierarchical labels. In the twelve-class classification task, the proposed model achieves state-of-the-art performance compared with other models, e.g., the CNN, RNN, and support vector machine based on hand-designed features.
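For readers unfamiliar with the SWC format referred to above, the sketch below parses a tiny SWC fragment into a tree and performs a bottom-up traversal, the same order in which a tree-based recurrent module would propagate its hidden state; here the propagated "state" is simply the subtree cable length. This illustrates the data structure only, not the published Tree-RNN:

```python
import numpy as np

SWC = """# id type x y z radius parent
1 1 0 0 0 1.0 -1
2 3 5 0 0 0.8 1
3 3 5 4 0 0.6 2
4 3 5 -4 0 0.6 2
"""

def parse_swc(text):
    """Parse SWC lines into {id: (xyz, parent)}; SWC stores one node per line."""
    nodes = {}
    for line in text.splitlines():
        if not line.strip() or line.startswith("#"):
            continue
        nid, _t, x, y, z, _r, parent = line.split()
        nodes[int(nid)] = (np.array([float(x), float(y), float(z)]), int(parent))
    return nodes

def subtree_lengths(nodes):
    """Bottom-up pass over the SWC tree; each node aggregates its children."""
    children = {nid: [] for nid in nodes}
    for nid, (_, parent) in nodes.items():
        if parent in children:
            children[parent].append(nid)
    lengths = {}
    def visit(nid):
        xyz, _parent = nodes[nid]
        total = 0.0
        for c in children[nid]:
            total += np.linalg.norm(nodes[c][0] - xyz) + visit(c)
        lengths[nid] = total
        return total
    for root in (nid for nid, (_, p) in nodes.items() if p not in nodes):
        visit(root)
    return lengths

print(subtree_lengths(parse_swc(SWC)))
```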
Collapse
Affiliation(s)
- Tielin Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China.
| | - Yi Zeng
- Institute of Automation, Chinese Academy of Sciences, Beijing, China. .,University of Chinese Academy of Sciences, Beijing, China. .,Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China.
| | - Yue Zhang
- Electronics and Communication Engineering, Peking University, Beijing, China
| | - Xinhe Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Mengting Shi
- Institute of Automation, Chinese Academy of Sciences, Beijing, China.,University of Chinese Academy of Sciences, Beijing, China
| | - Likai Tang
- Department of Automation, Tsinghua University, Beijing, China
| | - Duzhen Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China.,University of Chinese Academy of Sciences, Beijing, China
| | - Bo Xu
- Institute of Automation, Chinese Academy of Sciences, Beijing, China.,University of Chinese Academy of Sciences, Beijing, China.,Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
| |
Collapse
|
31
|
McIntosh JR, Yao J, Hong L, Faller J, Sajda P. Ballistocardiogram Artifact Reduction in Simultaneous EEG-fMRI Using Deep Learning. IEEE Trans Biomed Eng 2020; 68:78-89. [PMID: 32746037 DOI: 10.1109/tbme.2020.3004548] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
OBJECTIVE The concurrent recording of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) is a technique that has received much attention due to its potential for combined high temporal and spatial resolution. However, the ballistocardiogram (BCG), a large-amplitude artifact caused by cardiac induced movement contaminates the EEG during EEG-fMRI recordings. Removal of BCG in software has generally made use of linear decompositions of the corrupted EEG. This is not ideal as the BCG signal propagates in a manner which is non-linearly dependent on the electrocardiogram (ECG). In this paper, we present a novel method for BCG artifact suppression using recurrent neural networks (RNNs). METHODS EEG signals were recovered by training RNNs on the nonlinear mappings between ECG and the BCG corrupted EEG. We evaluated our model's performance against the commonly used Optimal Basis Set (OBS) method at the level of individual subjects, and investigated generalization across subjects. RESULTS We show that our algorithm can generate larger average power reduction of the BCG at critical frequencies, while simultaneously improving task relevant EEG based classification. CONCLUSION The presented deep learning architecture can be used to reduce BCG related artifacts in EEG-fMRI recordings. SIGNIFICANCE We present a deep learning approach that can be used to suppress the BCG artifact in EEG-fMRI without the use of additional hardware. This method may have scope to be combined with current hardware methods, operate in real-time and be used for direct modeling of the BCG.
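The mapping described in METHODS, a recurrent network regressing the BCG-contaminated EEG from the ECG so that the ECG-predictable component can be subtracted, can be sketched as follows. Channel counts, hidden size, and the synthetic data are assumptions for illustration, not the published architecture:

```python
import torch
import torch.nn as nn

class ECG2BCG(nn.Module):
    """Minimal sketch: an LSTM maps the ECG channel to the BCG component of
    each EEG channel, which is then subtracted from the recording."""

    def __init__(self, n_eeg_channels=32, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_eeg_channels)

    def forward(self, ecg):                  # ecg: (batch, time, 1)
        h, _ = self.rnn(ecg)
        return self.out(h)                   # predicted BCG: (batch, time, n_eeg)

model = ECG2BCG()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step with synthetic data.
ecg = torch.randn(8, 500, 1)                 # 8 segments, 500 samples each
eeg_corrupted = torch.randn(8, 500, 32)      # BCG-contaminated EEG
pred_bcg = model(ecg)
loss = loss_fn(pred_bcg, eeg_corrupted)      # fit the ECG -> corrupted-EEG mapping
optimiser.zero_grad()
loss.backward()
optimiser.step()
eeg_cleaned = eeg_corrupted - pred_bcg.detach()   # subtract the predicted BCG
print(eeg_cleaned.shape)
```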
Collapse
|
32
|
Zhao J, Chen X, Xiong Z, Liu D, Zeng J, Xie C, Zhang Y, Zha ZJ, Bi G, Wu F. Neuronal Population Reconstruction From Ultra-Scale Optical Microscopy Images via Progressive Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:4034-4046. [PMID: 32746145 DOI: 10.1109/tmi.2020.3009148] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Reconstruction of neuronal populations from ultra-scale optical microscopy (OM) images is essential to investigate neuronal circuits and brain mechanisms. The noises, low contrast, huge memory requirement, and high computational cost pose significant challenges in the neuronal population reconstruction. Recently, many studies have been conducted to extract neuron signals using deep neural networks (DNNs). However, training such DNNs usually relies on a huge amount of voxel-wise annotations in OM images, which are expensive in terms of both finance and labor. In this paper, we propose a novel framework for dense neuronal population reconstruction from ultra-scale images. To solve the problem of high cost in obtaining manual annotations for training DNNs, we propose a progressive learning scheme for neuronal population reconstruction (PLNPR) which does not require any manual annotations. Our PLNPR scheme consists of a traditional neuron tracing module and a deep segmentation network that mutually complement and progressively promote each other. To reconstruct dense neuronal populations from a terabyte-sized ultra-scale image, we introduce an automatic framework which adaptively traces neurons block by block and fuses fragmented neurites in overlapped regions continuously and smoothly. We build a dataset "VISoR-40" which consists of 40 large-scale OM image blocks from cortical regions of a mouse. Extensive experimental results on our VISoR-40 dataset and the public BigNeuron dataset demonstrate the effectiveness and superiority of our method on neuronal population reconstruction and single neuron reconstruction. Furthermore, we successfully apply our method to reconstruct dense neuronal populations from an ultra-scale mouse brain slice. The proposed adaptive block propagation and fusion strategies greatly improve the completeness of neurites in dense neuronal population reconstruction.
Collapse
|
33
|
Jiang S, Pan Z, Feng Z, Guan Y, Ren M, Ding Z, Chen S, Gong H, Luo Q, Li A. Skeleton optimization of neuronal morphology based on three-dimensional shape restrictions. BMC Bioinformatics 2020; 21:395. [PMID: 32887543 PMCID: PMC7472589 DOI: 10.1186/s12859-020-03714-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2020] [Accepted: 08/18/2020] [Indexed: 11/23/2022] Open
Abstract
Background Neurons are the basic structural unit of the brain, and their morphology is a key determinant of their classification. The morphology of a neuronal circuit is a fundamental component in neuron modeling. Recently, single-neuron morphologies of the whole brain have been used in many studies. The correctness and completeness of semimanually traced neuronal morphology are credible. However, there are some inaccuracies in semimanual tracing results. The distance between consecutive nodes marked by humans is very long, spanning multiple voxels. On the other hand, the nodes are marked around the centerline of the neuronal fiber, not on the centerline. Although these inaccuracies do not seriously affect the projection patterns that these studies focus on, they reduce the accuracy of the traced neuronal skeletons. These small inaccuracies will introduce deviations into subsequent studies that are based on neuronal morphology files. Results We propose a neuronal digital skeleton optimization method to evaluate and make fine adjustments to a digital skeleton after neuron tracing. Provided that the neuronal fiber shape is smooth and continuous, we describe its physical properties according to two shape restrictions. One restriction is designed based on the grayscale image, and the other is designed based on geometry. These two restrictions are designed to finely adjust the digital skeleton points to the neuronal fiber centerline. With this method, we design the three-dimensional shape restriction workflow of neuronal skeleton adjustment computation. The performance of the proposed method has been quantitatively evaluated using synthetic and real neuronal image data. The results show that our method can reduce the difference between the traced neuronal skeleton and the centerline of the neuronal fiber. Furthermore, morphology metrics such as the neuronal fiber length and radius become more precise. Conclusions This method can improve the accuracy of a neuronal digital skeleton based on traced results. The greater the accuracy of the digital skeletons that are acquired, the more precise the neuronal morphologies that are analyzed will be.
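A minimal sketch of the grayscale-based part of such an adjustment (not the paper's actual shape restrictions): a traced node is pulled toward the local intensity-weighted centroid, which for a bright fibre approximates its centerline:

```python
import numpy as np

def recenter_node(volume, node, radius=2):
    """Pull a skeleton node toward the intensity-weighted centroid of a small
    surrounding window. node is a (z, y, x) coordinate in voxels."""
    z, y, x = np.round(node).astype(int)
    z0, y0, x0 = max(z - radius, 0), max(y - radius, 0), max(x - radius, 0)
    patch = volume[z0:z + radius + 1, y0:y + radius + 1, x0:x + radius + 1]
    weights = patch.astype(float)
    if weights.sum() == 0:
        return np.asarray(node, dtype=float)
    grid = np.indices(patch.shape).reshape(3, -1).T + np.array([z0, y0, x0])
    return (grid * weights.reshape(-1, 1)).sum(0) / weights.sum()

# Toy fibre along x centred at z=5, y=5; the traced node sits slightly off-centre.
vol = np.zeros((11, 11, 30))
vol[5, 5, :] = 1.0
print(recenter_node(vol, np.array([6.0, 4.0, 10.0])))  # moves toward (5, 5, 10)
```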
Collapse
Affiliation(s)
- Siqi Jiang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Zhengyu Pan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Zhao Feng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Yue Guan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Miao Ren
- School of Biomedical Engineering, Hainan University, Haikou, China
| | - Zhangheng Ding
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Shangbin Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China.,HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China.,CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Science, Shanghai, China
| | - Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China.,School of Biomedical Engineering, Hainan University, Haikou, China.,HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China
| | - Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China. .,HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China. .,CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Science, Shanghai, China.
| |
Collapse
|
34
|
Meijering E. A bird's-eye view of deep learning in bioimage analysis. Comput Struct Biotechnol J 2020; 18:2312-2325. [PMID: 32994890 PMCID: PMC7494605 DOI: 10.1016/j.csbj.2020.08.003] [Citation(s) in RCA: 64] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 07/26/2020] [Accepted: 08/01/2020] [Indexed: 02/07/2023] Open
Abstract
Deep learning of artificial neural networks has become the de facto standard approach to solving data analysis problems in virtually all fields of science and engineering. Also in biology and medicine, deep learning technologies are fundamentally transforming how we acquire, process, analyze, and interpret data, with potentially far-reaching consequences for healthcare. In this mini-review, we take a bird's-eye view at the past, present, and future developments of deep learning, starting from science at large, to biomedical imaging, and bioimage analysis in particular.
Collapse
Affiliation(s)
- Erik Meijering
- School of Computer Science and Engineering & Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
| |
Collapse
|
35
|
Huang Q, Chen Y, Liu S, Xu C, Cao T, Xu Y, Wang X, Rao G, Li A, Zeng S, Quan T. Weakly Supervised Learning of 3D Deep Network for Neuron Reconstruction. Front Neuroanat 2020; 14:38. [PMID: 32848636 PMCID: PMC7399060 DOI: 10.3389/fnana.2020.00038] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Accepted: 06/05/2020] [Indexed: 11/13/2022] Open
Abstract
Digital reconstruction or tracing of 3D tree-like neuronal structures from optical microscopy images is essential for understanding the functionality of neurons and revealing the connectivity of neuronal networks. Despite the existence of numerous tracing methods, reconstructing a neuron from highly noisy images remains challenging, particularly for neurites with low and inhomogeneous intensities. Conducting deep convolutional neural network (CNN)-based segmentation prior to neuron tracing facilitates an approach to solving this problem via separation of weak neurites from a noisy background. However, deep learning-based methods require large numbers of manual annotations, which is labor-intensive and limits the algorithms' generalization to different datasets. In this study, we present a weakly supervised learning method of a deep CNN for neuron reconstruction without manual annotations. Specifically, we apply a 3D residual CNN as the architecture for discriminative neuronal feature extraction. We construct the initial pseudo-labels (without manual segmentation) of the neuronal images on the basis of an existing automatic tracing method. A weakly supervised learning framework is proposed via iterative training of the CNN model for improved prediction and refining of the pseudo-labels to update training samples. The pseudo-label is iteratively modified via mining and addition of weak neurites from the CNN predicted probability map on the basis of their tubularity and continuity. The proposed method was evaluated on several challenging images from the public BigNeuron and DIADEM datasets, as well as fMOST datasets. Owing to the adoption of 3D deep CNNs and weakly supervised learning, the presented method demonstrates effective detection of weak neurites from noisy images and achieves results similar to those of the CNN model with manual annotations. The tracing performance was significantly improved by the proposed method on both small and large datasets (>100 GB). Moreover, the proposed method proved to be superior to several novel tracing methods on original images. The results obtained on various large-scale datasets demonstrated the generalization and high precision achieved by the proposed method for neuron reconstruction.
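The label-refinement step, mining high-probability voxels that remain continuous with the existing pseudo-label between training rounds, can be sketched as below. The connectivity test used here is a simplification of the paper's tubularity and continuity criteria, and in the full method this refinement alternates with retraining the CNN:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def refine_pseudo_label(prob, pseudo, add_thresh=0.8, n_iter=3):
    """Starting from the current pseudo-label, repeatedly add voxels that the
    CNN predicts with high probability and that are adjacent to the existing
    foreground, so newly mined neurites stay continuous with the label."""
    refined = pseudo.astype(bool).copy()
    for _ in range(n_iter):
        candidates = (prob >= add_thresh) & ~refined
        adjacent = binary_dilation(refined) & candidates
        if not adjacent.any():
            break
        refined |= adjacent
    return refined

# Toy example: the tracer labelled half of a neurite, the CNN detects all of it.
prob = np.zeros((1, 5, 20))
prob[0, 2, :] = 0.9
pseudo = np.zeros_like(prob, dtype=bool)
pseudo[0, 2, :10] = True
new_label = refine_pseudo_label(prob, pseudo)
print(pseudo.sum(), "->", new_label.sum())   # label grows along the neurite
```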
Collapse
Affiliation(s)
- Qing Huang
- Wuhan National Laboratory for Optoelectronics-Huazhong, Britton Chance Center for Biomedical Photonics, University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Yijun Chen
- Wuhan National Laboratory for Optoelectronics-Huazhong, Britton Chance Center for Biomedical Photonics, University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Shijie Liu
- School of Mathematics and Physics, China University of Geosciences, Wuhan, China
| | - Cheng Xu
- Wuhan National Laboratory for Optoelectronics-Huazhong, Britton Chance Center for Biomedical Photonics, University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Tingting Cao
- Wuhan National Laboratory for Optoelectronics-Huazhong, Britton Chance Center for Biomedical Photonics, University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Yongchao Xu
- School of Electronics Information and Communications, Huazhong University of Science and Technology, Wuhan, China
| | - Xiaojun Wang
- Wuhan National Laboratory for Optoelectronics-Huazhong, Britton Chance Center for Biomedical Photonics, University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Gong Rao
- Wuhan National Laboratory for Optoelectronics-Huazhong, Britton Chance Center for Biomedical Photonics, University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Anan Li
- Wuhan National Laboratory for Optoelectronics-Huazhong, Britton Chance Center for Biomedical Photonics, University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Shaoqun Zeng
- Wuhan National Laboratory for Optoelectronics-Huazhong, Britton Chance Center for Biomedical Photonics, University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Tingwei Quan
- Wuhan National Laboratory for Optoelectronics-Huazhong, Britton Chance Center for Biomedical Photonics, University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
36
|
Mosinska A, Kozinski M, Fua P. Joint Segmentation and Path Classification of Curvilinear Structures. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2020; 42:1515-1521. [PMID: 31180837 DOI: 10.1109/tpami.2019.2921327] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Detection of curvilinear structures in images has long been of interest. One of the most challenging aspects of this problem is inferring the graph representation of the curvilinear network. Most existing delineation approaches first perform binary segmentation of the image and then refine it using either a set of hand-designed heuristics or a separate classifier that assigns likelihood to paths extracted from the pixel-wise prediction. In our work, we bridge the gap between segmentation and path classification by training a deep network that performs those two tasks simultaneously. We show that this approach is beneficial because it enforces consistency across the whole processing pipeline. We apply our approach to road and neuron datasets.
Collapse
|
37
|
Friedmann D, Pun A, Adams EL, Lui JH, Kebschull JM, Grutzner SM, Castagnola C, Tessier-Lavigne M, Luo L. Mapping mesoscale axonal projections in the mouse brain using a 3D convolutional network. Proc Natl Acad Sci U S A 2020; 117:11068-11075. [PMID: 32358193 PMCID: PMC7245124 DOI: 10.1073/pnas.1918465117] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
The projection targets of a neuronal population are a key feature of its anatomical characteristics. Historically, tissue sectioning, confocal microscopy, and manual scoring of specific regions of interest have been used to generate coarse summaries of mesoscale projectomes. We present here TrailMap, a three-dimensional (3D) convolutional network for extracting axonal projections from intact cleared mouse brains imaged by light-sheet microscopy. TrailMap allows region-based quantification of total axon content in large and complex 3D structures after registration to a standard reference atlas. The identification of axonal structures as thin as one voxel benefits from data augmentation but also requires a loss function that tolerates errors in annotation. A network trained with volumes of serotonergic axons in all major brain regions can be generalized to map and quantify axons from thalamocortical, deep cerebellar, and cortical projection neurons, validating transfer learning as a tool to adapt the model to novel categories of axonal morphology. Speed of training, ease of use, and accuracy improve over existing tools without a need for specialized computing hardware. Given the recent emphasis on genetically and functionally defining cell types in neural circuit analysis, TrailMap will facilitate automated extraction and quantification of axons from these specific cell types at the scale of the entire mouse brain, an essential component of deciphering their connectivity.
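The annotation-error-tolerant loss can be illustrated with a weighted voxel-wise cross-entropy in which voxels flagged as axon edges contribute little while the rare labelled axon voxels are up-weighted. This is inspired by, but not identical to, the TrailMap training loss; the weights and masks below are assumptions:

```python
import torch
import torch.nn.functional as F

def edge_tolerant_bce(logits, labels, edge_mask, edge_weight=0.0, axon_weight=2.0):
    """Voxel-wise BCE where edge voxels (most prone to one-voxel labelling
    errors) are down-weighted and sparse axon voxels are up-weighted."""
    per_voxel = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    weights = torch.ones_like(labels)
    weights = torch.where(labels > 0.5, torch.full_like(weights, axon_weight), weights)
    weights = torch.where(edge_mask > 0.5, torch.full_like(weights, edge_weight), weights)
    return (weights * per_voxel).sum() / weights.sum().clamp_min(1.0)

# Synthetic example: sparse axon labels plus a mask of uncertain edge voxels.
logits = torch.randn(1, 1, 8, 32, 32)
labels = (torch.rand_like(logits) > 0.98).float()
edges = (torch.rand_like(logits) > 0.95).float()
print(edge_tolerant_bce(logits, labels, edges).item())
```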
Collapse
Affiliation(s)
- Drew Friedmann
- Department of Biology, Stanford University, Stanford, CA 94305
- Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305
| | - Albert Pun
- Department of Biology, Stanford University, Stanford, CA 94305
- Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305
| | - Eliza L Adams
- Department of Biology, Stanford University, Stanford, CA 94305
- Neurosciences Graduate Program, Stanford University, Stanford, CA 94305
| | - Jan H Lui
- Department of Biology, Stanford University, Stanford, CA 94305
- Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305
| | - Justus M Kebschull
- Department of Biology, Stanford University, Stanford, CA 94305
- Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305
| | - Sophie M Grutzner
- Department of Biology, Stanford University, Stanford, CA 94305
- Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305
| | | | | | - Liqun Luo
- Department of Biology, Stanford University, Stanford, CA 94305;
- Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305
| |
Collapse
|
38
|
Li Q, Shen L. 3D Neuron Reconstruction in Tangled Neuronal Image With Deep Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:425-435. [PMID: 31295108 DOI: 10.1109/tmi.2019.2926568] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Digital reconstruction or tracing of 3D neurons is essential for understanding brain functions. While existing automatic tracing algorithms work well for clean neuronal images containing a single neuron, they are not robust when tracing a neuron surrounded by nerve fibers. We propose a 3D U-Net-based network, namely 3D U-Net Plus, to segment the neuron from the surrounding fibers before the application of tracing algorithms. All the images in BigNeuron, the biggest available neuronal image dataset, contain clean neurons with no interference from nerve fibers, which makes them impractical for training the segmentation network. Based upon the BigNeuron images, we synthesize a SYNthetic TAngled NEuronal Image dataset (SYNTANEI) to train the proposed network, by fusing the neurons with extracted nerve fibers. Owing to the adoption of dropout, atrous convolution and Atrous Spatial Pyramid Pooling (ASPP), experimental results on the synthetic and real tangled neuronal images show that the proposed 3D U-Net Plus network achieves very promising segmentation results. The neurons reconstructed by the tracing algorithm using the segmentation result match significantly better with the ground truth than those using the original images.
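The dataset-synthesis recipe, fusing clean neurons with extracted nerve fibres while keeping only the neuron as ground truth, might look roughly like the sketch below. The blending rule, shift range, and volumes are assumptions for illustration, not the exact SYNTANEI procedure:

```python
import numpy as np

def synthesize_tangled(neuron_vol, fiber_vols, rng=None):
    """Blend randomly shifted nerve-fibre volumes into a clean neuron image by
    voxel-wise maximum; the segmentation label keeps only the neuron."""
    rng = np.random.default_rng(rng)
    image = neuron_vol.astype(float).copy()
    for fiber in fiber_vols:
        shift = tuple(int(s) for s in rng.integers(-4, 5, size=3))
        shifted = np.roll(fiber.astype(float), shift, axis=(0, 1, 2))
        image = np.maximum(image, shifted)      # fibres entangle the neuron
    label = (neuron_vol > 0).astype(np.uint8)   # ground truth: neuron only
    return image, label

# Toy neuron along x plus one interfering fibre along z.
neuron = np.zeros((16, 32, 32))
neuron[8, 16, :] = 1.0
fiber = np.zeros_like(neuron)
fiber[:, 10, 10] = 0.8
img, lbl = synthesize_tangled(neuron, [fiber], rng=0)
print(img.max(), lbl.sum())
```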
Collapse
|
39
|
Li S, Quan T, Zhou H, Huang Q, Guan T, Chen Y, Xu C, Kang H, Li A, Fu L, Luo Q, Gong H, Zeng S. Brain-Wide Shape Reconstruction of a Traced Neuron Using the Convex Image Segmentation Method. Neuroinformatics 2019; 18:199-218. [PMID: 31396858 DOI: 10.1007/s12021-019-09434-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Neuronal shape reconstruction is a helpful technique for establishing neuron identity, inferring neuronal connections, mapping neuronal circuits, and so on. Advances in optical imaging techniques have enabled data collection that includes the shape of a neuron across the whole brain, considerably extending the scope of neuronal anatomy. However, such datasets often include many fuzzy neurites and many crossover regions where neurites are closely attached, which makes neuronal shape reconstruction more challenging. In this study, we propose a convex image segmentation model for neuronal shape reconstruction that segments a neurite into cross sections along its traced skeleton. Both the sparse nature of gradient images and the rule that fuzzy neurites usually have a small radius are utilized to improve neuronal shape reconstruction in regions with fuzzy neurites. Because the model is closely related to the traced skeleton points, we can use this relationship to identify neurites in crossover regions. We demonstrated the performance of our model on various datasets, including those with fuzzy neurites and neurites with crossover regions, and we verified that our model could robustly reconstruct neuron shapes on a brain-wide scale.
Collapse
Affiliation(s)
- Shiwei Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China. .,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China. .,School of Mathematics and Economics, Hubei University of Education, Wuhan, 430205, Hubei, China.
| | - Hang Zhou
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Qing Huang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Tao Guan
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
| | - Yijun Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Cheng Xu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Hongtao Kang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Ling Fu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| |
Collapse
|
40
|
Wang Y, Li Q, Liu L, Zhou Z, Ruan Z, Kong L, Li Y, Wang Y, Zhong N, Chai R, Luo X, Guo Y, Hawrylycz M, Luo Q, Gu Z, Xie W, Zeng H, Peng H. TeraVR empowers precise reconstruction of complete 3-D neuronal morphology in the whole brain. Nat Commun 2019; 10:3474. [PMID: 31375678 PMCID: PMC6677772 DOI: 10.1038/s41467-019-11443-y] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2019] [Accepted: 07/16/2019] [Indexed: 12/29/2022] Open
Abstract
Neuron morphology is recognized as a key determinant of cell type, yet the quantitative profiling of a mammalian neuron's complete three-dimensional (3-D) morphology remains arduous when the neuron has complex arborization and long projection. Whole-brain reconstruction of neuron morphology is even more challenging as it involves processing tens of teravoxels of imaging data. Validating such reconstructions is extremely laborious. We develop TeraVR, an open-source virtual reality annotation system, to address these challenges. TeraVR integrates immersive and collaborative 3-D visualization, interaction, and hierarchical streaming of teravoxel-scale images. Using TeraVR, we have produced precise 3-D full morphology of long-projecting neurons in whole mouse brains and developed a collaborative workflow for highly accurate neuronal reconstruction.
Collapse
Affiliation(s)
- Yimin Wang
- Southeast University - Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, China; School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, 200444, China.
| | - Qi Li
- School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
| | - Lijuan Liu
- Southeast University - Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, China
| | - Zhi Zhou
- Southeast University - Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, China; Allen Institute for Brain Science, Seattle, 98109, USA
| | - Zongcai Ruan
- Southeast University - Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, China
| | - Lingsheng Kong
- School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
| | - Yaoyao Li
- School of Optometry and Ophthalmology, Wenzhou Medical University, Wenzhou, 325027, China
| | - Yun Wang
- Allen Institute for Brain Science, Seattle, 98109, USA
| | - Ning Zhong
- Beijing University of Technology, 100124, Beijing, China; Department of Life Science and Informatics, Maebashi Institute of Technology, Maebashi, 371-0816, Japan
| | - Renjie Chai
- Institute of Life Sciences, Southeast University, Nanjing, 210096, China; Key Laboratory for Developmental Genes and Human Disease, Ministry of Education, Institute of Life Sciences, Jiangsu Province High-Tech Key Laboratory for Bio-Medical Research, Southeast University, Nanjing, 210096, China; Co-Innovation Center of Neuroregeneration, Nantong University, Nantong, 226019, China
| | - Xiangfeng Luo
- School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
| | - Yike Guo
- Data Science Institute, Imperial College London, London, SW7 2AZ, UK
| | - Michael Hawrylycz
| | - Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Zhongze Gu
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, 210096, China
| | - Wei Xie
- Institute of Life Sciences, Southeast University, Nanjing, 210096, China
| | - Hongkui Zeng
- Allen Institute for Brain Science, Seattle, 98109, USA
| | - Hanchuan Peng
- Southeast University - Allen Institute Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, China; Allen Institute for Brain Science, Seattle, 98109, USA.
| |
Collapse
|
41
|
Skibbe H, Reisert M, Nakae K, Watakabe A, Hata J, Mizukami H, Okano H, Yamamori T, Ishii S. PAT-Probabilistic Axon Tracking for Densely Labeled Neurons in Large 3-D Micrographs. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:69-78. [PMID: 30010551 DOI: 10.1109/tmi.2018.2855736] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
A major goal of contemporary neuroscience research is to map the structural connectivity of mammalian brain using microscopy imaging data. In this context, the reconstruction of densely labeled axons from two-photon microscopy images is a challenging and important task. The visually overlapping, crossing, and often strongly distorted images of the axons allow many ambiguous interpretations to be made. We address the problem of tracking axons in densely labeled samples of neurons in large image data sets acquired from marmoset brains. Our high-resolution images were acquired using two-photon microscopy and they provided whole brain coverage, occupying terabytes of memory. Both the image distortions and the large data set size frequently make it impractical to apply present-day neuron tracing algorithms to such data due to the optimization of such algorithms to the precise tracing of either single or sparse sets of neurons. Thus, new tracking techniques are needed. We propose a probabilistic axon tracking algorithm (PAT). PAT tackles the tracking of axons in two steps: locally (L-PAT) and globally (G-PAT). L-PAT is a probabilistic tracking algorithm that can tackle distorted, cluttered images of densely labeled axons. L-PAT divides a large micrograph into smaller image stacks. It then processes each image stack independently before mapping the axons in each image to a sparse model of axon trajectories. G-PAT merges the sparse L-PAT models into a single global model of axon trajectories by minimizing a global objective function using a probabilistic optimization method. We demonstrate the superior performance of PAT over standard approaches on synthetic data. Furthermore, we successfully apply PAT to densely labeled axons in large images acquired from marmoset brains.
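The two-step structure described in the abstract (independent local tracking within each image stack, followed by a global merge that minimizes a linking objective) can be illustrated with a short sketch. The Python below is a minimal divide-track-merge skeleton using assumed names (split_into_stacks, local_tracking, linking_cost, global_merge) and a toy greedy cost model; it is not the published PAT implementation, which relies on probabilistic tracking and probabilistic optimization rather than the stand-ins shown here.

```python
# Illustrative divide-track-merge skeleton; names and the cost model are assumptions.
import itertools
import numpy as np

def split_into_stacks(volume, stack_shape):
    """Yield (origin, sub-stack) pairs that tile the large micrograph."""
    for ox in range(0, volume.shape[0], stack_shape[0]):
        for oy in range(0, volume.shape[1], stack_shape[1]):
            for oz in range(0, volume.shape[2], stack_shape[2]):
                yield (ox, oy, oz), volume[ox:ox + stack_shape[0],
                                           oy:oy + stack_shape[1],
                                           oz:oz + stack_shape[2]]

def local_tracking(stack, origin, threshold=0.999):
    """Stand-in for local tracking: bright voxels become one point-cloud 'fragment'."""
    points = np.argwhere(stack > threshold) + np.asarray(origin)
    return [points] if len(points) else []

def linking_cost(frag_a, frag_b):
    """Cost of joining two fragments: distance between their closest points."""
    return np.min(np.linalg.norm(frag_a[:, None, :] - frag_b[None, :, :], axis=-1))

def global_merge(fragments, max_cost=5.0):
    """Stand-in for the global step: greedily merge the cheapest pair while affordable."""
    merged = [np.asarray(f, dtype=float) for f in fragments]
    while len(merged) > 1:
        i, j = min(itertools.combinations(range(len(merged)), 2),
                   key=lambda ij: linking_cost(merged[ij[0]], merged[ij[1]]))
        if linking_cost(merged[i], merged[j]) > max_cost:
            break                                     # no affordable link remains
        merged[i] = np.vstack([merged[i], merged[j]])
        del merged[j]
    return merged

volume = np.random.rand(64, 64, 64)
fragments = [frag
             for origin, stack in split_into_stacks(volume, (32, 32, 32))
             for frag in local_tracking(stack, origin)]
axons = global_merge(fragments)
print(f"{len(fragments)} local fragments merged into {len(axons)} trajectories")
```

The point of the pattern is scalability: each stack can be tracked independently (and in parallel) on terabyte-scale data, and only the resulting sparse fragment models, not the raw voxels, enter the global merging step.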
Collapse
|