1
Sanchez C, Nadal M, Cansell C, Laroui S, Descombes X, Rovère C, Debreuve É. Computational detection, characterization, and clustering of microglial cells in a mouse model of fat-induced postprandial hypothalamic inflammation. Methods 2025; 236:28-38. [PMID: 40021035] [DOI: 10.1016/j.ymeth.2025.02.008]
Abstract
Obesity is associated with brain inflammation, glial reactivity, and immune cell infiltration. Studies in rodents have shown that glial reactivity occurs within 24 h of high-fat diet (HFD) consumption, long before obesity develops, and takes place mainly in the hypothalamus (HT), a brain structure crucial for controlling body weight. A more precise understanding of the kinetics of glial activation in two major brain cell types (astrocytes and microglia), and of their impact on eating behavior, could help prevent obesity and open new prospects for therapeutic treatments. To investigate the mechanisms underlying obesity-related neuroinflammation, we developed a fully automated algorithm, NutriMorph. Although several algorithms have been developed over the past decade to detect and segment cells, they are highly specific, not fully automatic, and do not provide the desired morphological analysis. Our algorithm addresses these issues: it analyzes cell images (here, microglia of the hypothalamic arcuate nucleus) and clusters the cells morphologically through statistical analysis and machine learning. Using the k-Means algorithm, it partitions the microglia of the control condition (healthy mice) and of the different states of neuroinflammation induced by high-fat diets (obese mice) into subpopulations. This paper is an extension and re-analysis of a previously published study showing that microglial reactivity is already visible after a few hours of high-fat diet (Cansell et al., 2021 [5]). Using the NutriMorph algorithm, we reveal the presence of distinct hypothalamic microglial subpopulations (based on morphology) whose proportions change after only a few hours of high-fat diet in mice.
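The clustering step described in the abstract is standard k-Means applied to per-cell morphological feature vectors. A minimal NumPy sketch of the idea (not the NutriMorph implementation; the three feature names are illustrative assumptions):

```python
import numpy as np

def kmeans(features, k, n_iter=100, seed=0):
    """Plain k-Means (Lloyd's algorithm): cluster rows of `features` into k groups."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct random samples.
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each cell to its nearest centroid (Euclidean distance).
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its cluster's members.
        new = np.array([features[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Toy morphological feature vectors (e.g. soma area, branch count, total
# process length), standardized before clustering as is customary.
cells = np.array([[1.0, 2, 3.0], [1.1, 2, 3.2], [4.0, 9, 12.0], [4.2, 8, 11.5]])
z = (cells - cells.mean(axis=0)) / cells.std(axis=0)
labels, _ = kmeans(z, k=2)
```

With well-separated morphologies, the two tight groups of cells end up in different clusters regardless of the random initialization.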
Affiliation(s)
- Clara Sanchez
- Université Côte d'Azur, CNRS, IPMC, Valbonne, France
- Morgane Nadal
- Université Côte d'Azur, CNRS, Inria, I3S, Team Morpheme, Sophia Antipolis, France
- Sarah Laroui
- Université Côte d'Azur, CNRS, Inria, I3S, Team Morpheme, Sophia Antipolis, France
- Xavier Descombes
- Université Côte d'Azur, CNRS, Inria, I3S, Team Morpheme, Sophia Antipolis, France
- Carole Rovère
- Université Côte d'Azur, CNRS, IPMC, Valbonne, France
- Éric Debreuve
- Université Côte d'Azur, CNRS, Inria, I3S, Team Morpheme, Sophia Antipolis, France
2
Chen J, Yuan Z, Xi J, Gao Z, Li Y, Zhu X, Shi YS, Guan F, Wang Y. Efficient and Accurate Semi-Automatic Neuron Tracing with Extended Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:7299-7309. [PMID: 39255163] [DOI: 10.1109/tvcg.2024.3456197]
Abstract
Neuron tracing, also referred to as neuron reconstruction, is the procedure of extracting a digital representation of three-dimensional neuronal morphology from stacks of microscopic images. Accurate neuron tracing is critical for profiling neuroanatomical structure at the single-cell level and for analyzing neuronal circuits and projections at whole-brain scale. However, the process often demands substantial human involvement and is a nontrivial task. Conventional solutions to neuron tracing often contend with challenges such as non-intuitive user interaction, suboptimal data generation throughput, and ambiguous visualization. In this paper, we introduce a novel method that leverages extended reality (XR) for intuitive, progressive, semi-automatic neuron tracing in real time. We define a set of interactors for controllable and efficient tracing interactions in an immersive environment, and we develop a GPU-accelerated automatic tracing algorithm that updates the neuron reconstruction in real time. In addition, we built a visualizer for a fast and improved visual experience, particularly when working with both volumetric images and 3D objects. Our method has been successfully implemented on one virtual reality (VR) headset and one augmented reality (AR) headset, with satisfactory results. Two user studies confirm the effectiveness of the interactors and the efficiency of our method in comparison with other approaches to neuron tracing.
3
Liu M, Wu S, Chen R, Lin Z, Wang Y, Meijering E. Brain Image Segmentation for Ultrascale Neuron Reconstruction via an Adaptive Dual-Task Learning Network. IEEE Transactions on Medical Imaging 2024; 43:2574-2586. [PMID: 38373129] [DOI: 10.1109/tmi.2024.3367384]
Abstract
Accurate morphological reconstruction of neurons in whole-brain images is critical for brain science research. However, due to the wide field of whole-brain imaging, uneven staining, and optical system fluctuations, image properties differ significantly between regions of an ultrascale brain image: voxel intensities vary dramatically and background noise is inhomogeneously distributed, posing an enormous challenge to neuron reconstruction from whole-brain images. In this paper, we propose an adaptive dual-task learning network (ADTL-Net) to quickly and accurately extract neuronal structures from ultrascale brain images. The framework comprises an External Features Classifier (EFC) and a Parameter Adaptive Segmentation Decoder (PASD), which share the same Multi-Scale Feature Encoder (MSFE). The MSFE introduces an attention module, the Channel Space Fusion Module (CSFM), to extract structure and intensity-distribution features of neurons at different scales, addressing the problem of anisotropy in 3D space. The EFC classifies the resulting feature maps by external features, such as foreground intensity distribution and image smoothness, and selects the specific PASD parameters used to decode each class into an accurate segmentation. The PASD contains multiple sets of parameters, each trained on image blocks with a different representative signal-to-noise distribution, to handle varied images more robustly. Experimental results show that, compared with other advanced segmentation methods for neuron reconstruction, the proposed method achieves state-of-the-art results on ultrascale brain images, with an improvement of about 49% in speed and 12% in F1 score.
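The routing idea behind EFC and PASD, classifying a block by its external features and then decoding it with class-specific parameters, can be caricatured without any deep network. A schematic sketch, with made-up thresholds standing in for the trained parameter sets:

```python
import numpy as np

# Hypothetical per-class "decoder" parameters: here a single intensity
# threshold stands in for each of the trained PASD parameter sets.
PARAMS = {"bright": {"thr": 0.6}, "dim": {"thr": 0.2}}

def external_features(block):
    """External features used for routing: mean foreground intensity and
    smoothness (mean absolute gradient along the last axis)."""
    return block.mean(), np.abs(np.diff(block, axis=-1)).mean()

def route(block):
    """Pick a parameter class from the block's external features (the EFC role)."""
    mean_intensity, _ = external_features(block)
    return "bright" if mean_intensity > 0.4 else "dim"

def segment(block):
    """Decode the block with the parameters chosen for its class (the PASD role)."""
    p = PARAMS[route(block)]
    return block > p["thr"]

# Two synthetic blocks with very different intensity statistics.
bright = np.full((4, 4), 0.7); bright[0, 0] = 0.1
dim = np.full((4, 4), 0.25); dim[0, 0] = 0.05
mask_b, mask_d = segment(bright), segment(dim)
```

A single fixed threshold would miss the dim block's signal entirely; routing lets each regime get parameters matched to its statistics, which is the point of the adaptive decoder.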
4
Zeng Y, Wang Y. Complete Neuron Reconstruction Based on Branch Confidence. Brain Sci 2024; 14:396. [PMID: 38672045] [PMCID: PMC11047972] [DOI: 10.3390/brainsci14040396]
Abstract
In the past few years, significant advances in microscopic imaging technology have produced numerous high-resolution images capturing brain neurons at the micrometer scale. Neuronal structures reconstructed from such images serve as a valuable reference for research on brain diseases and neuroscience. However, an accurate and efficient method for neuron reconstruction is still lacking. Manual reconstruction remains the primary approach: it offers high accuracy but requires a significant time investment. Automatic reconstruction methods are faster, but they often sacrifice accuracy and cannot be relied upon directly. The primary goal of this paper is therefore to develop a neuron reconstruction tool that is both efficient and accurate. The tool helps users reconstruct complete neurons by computing the confidence of branches during the reconstruction process: it models neuron reconstruction as multiple Markov chains and calculates the confidence of the connections between branches by simulating reconstruction artifacts in the results. Users iteratively revise low-confidence branches to ensure precise and efficient reconstruction. Experiments on both the publicly accessible BigNeuron dataset and a self-created whole-brain dataset demonstrate that the tool achieves accuracy similar to manual reconstruction while significantly reducing reconstruction time.
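The confidence-guided workflow can be illustrated simply: if each inter-branch connection carries an estimated probability, a branch traced as a Markov chain has confidence equal to the product of its step probabilities, and low-confidence branches are surfaced for manual review first. A toy sketch (the probabilities and the product rule are illustrative assumptions, not the paper's exact model):

```python
import numpy as np

def branch_confidence(step_probs):
    """Confidence of a traced branch modeled as a Markov chain: the product
    of the per-step connection probabilities along it."""
    return float(np.prod(step_probs))

def flag_low_confidence(branches, threshold=0.5):
    """Return the branch ids a user should inspect and correct first."""
    return [bid for bid, probs in branches.items()
            if branch_confidence(probs) < threshold]

# Toy tracing: branch "b2" contains one dubious connection (p = 0.4),
# which drags its chain confidence below the review threshold.
branches = {"b1": [0.95, 0.9, 0.92], "b2": [0.9, 0.4, 0.95], "b3": [0.99]}
suspect = flag_low_confidence(branches)
```

Because the confidence is multiplicative, one weak link is enough to flag an otherwise well-traced branch, which matches the intent of directing user effort to probable reconstruction artifacts.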
Affiliation(s)
- Ying Zeng
- School of Computer Science and Technology, Shanghai University, Shanghai 200444, China
- Guangdong Institute of Intelligence Science and Technology, Zhuhai 519031, China
- Yimin Wang
- Guangdong Institute of Intelligence Science and Technology, Zhuhai 519031, China
5
Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023; 167:107617. [PMID: 37918261] [DOI: 10.1016/j.compbiomed.2023.107617]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information that can help us understand its working mechanisms. However, processing and analyzing these data is challenging because of the large image sizes, high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in cells and tissues, and imaging artifacts. Owing to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges, and they perform strongly across a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Research Institute of Hunan University in Chongqing, Chongqing, 401135, China
- Weixun Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
6
Pan L, Yan X, Zheng Y, Huang L, Zhang Z, Fu R, Zheng B, Zheng S. Automatic pulmonary artery-vein separation in CT images using a twin-pipe network and topology reconstruction. PeerJ Comput Sci 2023; 9:e1537. [PMID: 37810355] [PMCID: PMC10557495] [DOI: 10.7717/peerj-cs.1537]
Abstract
Background: With the wide application of CT scanning, separating pulmonary arteries and veins (A/V) in CT images plays an important role in assisting surgeons with preoperative planning of lung cancer surgery. However, distinguishing arteries from veins in chest CT images remains challenging because of their complex structure and similar appearance.
Methods: We propose a novel method for automatically separating pulmonary arteries and veins based on vessel topology information and a twin-pipe deep learning network. First, the vessel tree topology is constructed by combining scale-space particles with multi-stencils fast marching (MSFM) to ensure the continuity and authenticity of the topology. Second, a twin-pipe network is designed to learn the multiscale differences between arteries and veins and the characteristics of the small arteries that closely accompany bronchi. Finally, a topology optimizer considers interbranch and intrabranch topological relationships to refine the artery-vein classification.
Results: The proposed approach was validated on the public CARVE14 dataset and our private dataset. Compared with the ground truth, it achieves an average accuracy of 90.1% on CARVE14 and 96.2% on our local dataset.
Conclusions: The method effectively separates pulmonary arteries and veins and generalizes well to chest CT images from different devices, as well as to enhanced and noncontrast CT sequences from the same device.
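The topology-construction step rests on propagating a front through low-cost (vessel) voxels to recover centerlines. As a simplified stand-in for multi-stencils fast marching, Dijkstra's algorithm on a 2D cost grid captures the same idea of a minimal-cost geodesic through the vessel interior (an illustrative sketch, not the paper's MSFM implementation):

```python
import heapq

def geodesic(cost, start, goal):
    """Minimal-cost path on a 2D grid (Dijkstra): low-cost cells (vessel
    interior) attract the path, as in front-propagation centerline tracing."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev, heap = {}, [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk predecessors back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# A cheap horizontal "vessel" (cost 1) through an expensive background (cost 9).
grid = [[9, 9, 9, 9],
        [1, 1, 1, 1],
        [9, 9, 9, 9]]
path = geodesic(grid, (1, 0), (1, 3))
```

The recovered path hugs the low-cost row, which is exactly the behavior a fast-marching front exploits when extracting a vessel centerline.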
Affiliation(s)
- Lin Pan
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
- Xiaochao Yan
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
- Yaoyong Zheng
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
- Liqin Huang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
- Zhen Zhang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
- Rongda Fu
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
- Bin Zheng
- Key Laboratory of Cardio-Thoracic Surgery, Fujian Medical University, Fuzhou, Fujian, China
- Shaohua Zheng
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian, China
7
Clark AS, Kalmanson Z, Morton K, Hartman J, Meyer J, San-Miguel A. An unbiased, automated platform for scoring dopaminergic neurodegeneration in C. elegans. PLoS One 2023; 18:e0281797. [PMID: 37418455] [PMCID: PMC10328331] [DOI: 10.1371/journal.pone.0281797]
Abstract
Caenorhabditis elegans (C. elegans) has served as a simple model organism for studying dopaminergic neurodegeneration, as it enables quantitative analysis of cellular and sub-cellular morphologies in live animals. These isogenic nematodes have a rapid life cycle and a transparent body, making high-throughput imaging and evaluation of fluorescently tagged neurons possible. However, the current state-of-the-art method for quantifying dopaminergic degeneration requires researchers to manually examine images and score dendrites into groups of varying neurodegeneration severity, which is time-consuming, subject to bias, and limited in sensitivity. We aim to overcome the pitfalls of manual neuron scoring by developing an automated, unbiased image processing algorithm to quantify dopaminergic neurodegeneration in C. elegans. The algorithm can be used on images acquired with different microscopy setups and requires only two inputs: a maximum-projection image of the four cephalic neurons in the C. elegans head and the pixel size of the user's camera. We validate the platform by detecting and quantifying neurodegeneration in nematodes exposed to rotenone, cold shock, and 6-hydroxydopamine using 63x epifluorescence, 63x confocal, and 40x epifluorescence microscopy, respectively. Analysis of tubby mutant worms with altered fat storage showed that, contrary to our hypothesis, increased adiposity did not sensitize animals to stressor-induced neurodegeneration. We further verify the accuracy of the algorithm by comparing code-generated categorical degeneration results with manually scored dendrites from the same experiments. The platform, which computes 20 different metrics of neurodegeneration, can provide comparative insight into how each exposure affects dopaminergic neurodegeneration patterns.
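One representative metric of this kind is the number of disconnected fragments in a thresholded maximum-projection image: an intact dendrite yields a single connected component, while a degenerated one breaks into several. A hypothetical sketch of such a metric (illustrative only, not one of the platform's actual 20 metrics):

```python
import numpy as np

def count_fragments(mask):
    """Count 4-connected components in a binary dendrite mask via flood fill:
    an intact dendrite yields one component, breaks yield more."""
    mask = mask.astype(bool).copy()
    n = 0
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c]:
                n += 1
                stack = [(r, c)]       # flood-fill this component
                mask[r, c] = False
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx]):
                            mask[ny, nx] = False
                            stack.append((ny, nx))
    return n

# A "dendrite" with one gap: two fragments instead of one.
proj = np.zeros((3, 7), int)
proj[1, :3] = 1
proj[1, 4:] = 1
fragments = count_fragments(proj)
```

Scoring such counts programmatically, rather than by eye, is what removes the bias and categorical coarseness of manual dendrite scoring.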
Affiliation(s)
- Andrew S. Clark
- Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, North Carolina, United States of America
- Zachary Kalmanson
- Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, North Carolina, United States of America
- Katherine Morton
- Nicholas School of the Environment, Duke University, Durham, North Carolina, United States of America
- Jessica Hartman
- Nicholas School of the Environment, Duke University, Durham, North Carolina, United States of America
- Biochemistry and Molecular Biology, Medical University of South Carolina, Charleston, South Carolina, United States of America
- Joel Meyer
- Nicholas School of the Environment, Duke University, Durham, North Carolina, United States of America
- Adriana San-Miguel
- Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, North Carolina, United States of America
8
Manubens-Gil L, Zhou Z, Chen H, Ramanathan A, Liu X, Liu Y, Bria A, Gillette T, Ruan Z, Yang J, Radojević M, Zhao T, Cheng L, Qu L, Liu S, Bouchard KE, Gu L, Cai W, Ji S, Roysam B, Wang CW, Yu H, Sironi A, Iascone DM, Zhou J, Bas E, Conde-Sousa E, Aguiar P, Li X, Li Y, Nanda S, Wang Y, Muresan L, Fua P, Ye B, He HY, Staiger JF, Peter M, Cox DN, Simonneau M, Oberlaender M, Jefferis G, Ito K, Gonzalez-Bellido P, Kim J, Rubel E, Cline HT, Zeng H, Nern A, Chiang AS, Yao J, Roskams J, Livesey R, Stevens J, Liu T, Dang C, Guo Y, Zhong N, Tourassi G, Hill S, Hawrylycz M, Koch C, Meijering E, Ascoli GA, Peng H. BigNeuron: a resource to benchmark and predict performance of algorithms for automated tracing of neurons in light microscopy datasets. Nat Methods 2023; 20:824-835. [PMID: 37069271] [DOI: 10.1038/s41592-023-01848-5]
Abstract
BigNeuron is an open community bench-testing platform whose goal is to set open standards for accurate and fast automatic neuron tracing. We gathered a diverse set of image volumes across several species, representative of the data obtained in many neuroscience laboratories interested in neuron tracing. Here, we report gold standard manual annotations generated for a subset of the available imaging datasets and quantify tracing quality for 35 automatic tracing algorithms. The goal of generating such a hand-curated, diverse dataset is to advance the development of tracing algorithms and enable generalizable benchmarking. Together with image quality features, we pooled the data in an interactive web application that enables users and developers to perform principal component analysis, t-distributed stochastic neighbor embedding, correlation and clustering, visualization of imaging and tracing data, and benchmarking of automatic tracing algorithms on user-defined data subsets. The image quality metrics explain most of the variance in the data, followed by neuromorphological features related to neuron size. We observed that diverse algorithms can provide complementary information toward accurate results, and we developed a method to iteratively combine methods and generate consensus reconstructions. The resulting consensus trees provide estimates of the neuron structure ground truth that typically outperform single algorithms on noisy datasets, although specific algorithms may outperform the consensus strategy under specific imaging conditions. Finally, to help users predict the most accurate automatic tracing results without manual annotations for comparison, we used support vector machine regression to predict reconstruction quality given an image volume and a set of automatic tracings.
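The consensus idea can be sketched as voxel-level voting: keep a voxel in the consensus tree if enough of the individual automatic tracings agree on it. A minimal illustration (the voxelization and the vote threshold are simplifying assumptions, not the paper's exact consensus procedure):

```python
import numpy as np

def consensus(masks, min_votes):
    """Keep a voxel if at least `min_votes` of the automatic tracings
    marked it as part of the neuron."""
    return np.sum(masks, axis=0) >= min_votes

# Three hypothetical voxelized tracings of the same neuron: each algorithm
# agrees on the core structure but makes different spurious calls.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 1, 1], [0, 1, 0]])
c = np.array([[1, 0, 0], [0, 1, 1]])
tree = consensus([a, b, c], min_votes=2)
```

Voxels claimed by only one algorithm are dropped, which is why the consensus typically suppresses algorithm-specific errors on noisy data while retaining the commonly agreed structure.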
Affiliation(s)
- Linus Manubens-Gil
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Zhi Zhou
- Microsoft Corporation, Redmond, WA, USA
- Arvind Ramanathan
- Computing, Environment and Life Sciences Directorate, Argonne National Laboratory, Lemont, IL, USA
- Yufeng Liu
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Todd Gillette
- Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Zongcai Ruan
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Jian Yang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Ting Zhao
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Li Cheng
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, Canada
- Lei Qu
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Anhui University, Hefei, China
- Kristofer E Bouchard
- Scientific Data Division and Biological Systems and Engineering Division, Lawrence Berkeley National Lab, Berkeley, CA, USA
- Helen Wills Neuroscience Institute and Redwood Center for Theoretical Neuroscience, UC Berkeley, Berkeley, CA, USA
- Lin Gu
- RIKEN AIP, Tokyo, Japan
- Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, Tokyo, Japan
- Weidong Cai
- School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
- Shuiwang Ji
- Texas A&M University, College Station, TX, USA
- Badrinath Roysam
- Cullen College of Engineering, University of Houston, Houston, TX, USA
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Hongchuan Yu
- National Centre for Computer Animation, Bournemouth University, Poole, UK
- Daniel Maxim Iascone
- Department of Neuroscience, Columbia University, New York, NY, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Jie Zhou
- Department of Computer Science, Northern Illinois University, DeKalb, IL, USA
- Eduardo Conde-Sousa
- i3S, Instituto de Investigação E Inovação Em Saúde, Universidade Do Porto, Porto, Portugal
- INEB, Instituto de Engenharia Biomédica, Universidade Do Porto, Porto, Portugal
- Paulo Aguiar
- i3S, Instituto de Investigação E Inovação Em Saúde, Universidade Do Porto, Porto, Portugal
- Xiang Li
- Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Yujie Li
- Allen Institute for Brain Science, Seattle, WA, USA
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Sumit Nanda
- Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Yuan Wang
- Program in Neuroscience, Department of Biomedical Sciences, Florida State University College of Medicine, Tallahassee, FL, USA
- Leila Muresan
- Cambridge Advanced Imaging Centre, University of Cambridge, Cambridge, UK
- Pascal Fua
- Computer Vision Laboratory, EPFL, Lausanne, Switzerland
- Bing Ye
- Life Sciences Institute and Department of Cell and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- Hai-Yan He
- Department of Biology, Georgetown University, Washington, DC, USA
- Jochen F Staiger
- Institute for Neuroanatomy, University Medical Center Göttingen, Georg-August-University Göttingen, Goettingen, Germany
- Manuel Peter
- Department of Stem Cell and Regenerative Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Daniel N Cox
- Neuroscience Institute, Georgia State University, Atlanta, GA, USA
- Michel Simonneau
- 42 ENS Paris-Saclay, CNRS, CentraleSupélec, LuMIn, Université Paris-Saclay, Gif-sur-Yvette, France
- Marcel Oberlaender
- Max Planck Group: In Silico Brain Sciences, Max Planck Institute for Neurobiology of Behavior - caesar, Bonn, Germany
- Gregory Jefferis
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK
- Department of Zoology, University of Cambridge, Cambridge, UK
- Kei Ito
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Institute for Quantitative Biosciences, University of Tokyo, Tokyo, Japan
- Institute of Zoology, Biocenter Cologne, University of Cologne, Cologne, Germany
- Jinhyun Kim
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea
- Edwin Rubel
- Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA, USA
- Hongkui Zeng
- Allen Institute for Brain Science, Seattle, WA, USA
- Aljoscha Nern
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Ann-Shyn Chiang
- Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan
- Jane Roskams
- Allen Institute for Brain Science, Seattle, WA, USA
- Department of Zoology, Life Sciences Institute, University of British Columbia, Vancouver, British Columbia, Canada
- Rick Livesey
- Zayed Centre for Rare Disease Research, UCL Great Ormond Street Institute of Child Health, London, UK
- Janine Stevens
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Tianming Liu
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Chinh Dang
- Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA, USA
- Yike Guo
- Data Science Institute, Imperial College London, London, UK
- Ning Zhong
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Department of Life Science and Informatics, Maebashi Institute of Technology, Maebashi, Japan
- Sean Hill
- Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Giorgio A Ascoli
- Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Hanchuan Peng
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
9
Pan L, Li Z, Shen Z, Liu Z, Huang L, Yang M, Zheng B, Zeng T, Zheng S. Learning multi-view and centerline topology connectivity information for pulmonary artery-vein separation. Comput Biol Med 2023; 155:106669. [PMID: 36803793] [DOI: 10.1016/j.compbiomed.2023.106669]
Abstract
Background: Automatic pulmonary artery-vein separation is of considerable importance in the diagnosis and treatment of lung diseases. However, insufficient connectivity and spatial inconsistency have long been problems for artery-vein separation.
Methods: We present a novel automatic method for artery-vein separation in CT images. A multi-scale information aggregated network (MSIA-Net), including multi-scale fusion blocks and deep supervision, learns artery-vein features and aggregates additional semantic information. The method integrates nine MSIA-Net models for the artery-vein separation, vessel segmentation, and centerline separation tasks across axial, coronal, and sagittal multi-view slices. First, preliminary artery-vein separation results are obtained with the proposed multi-view fusion strategy (MVFS). Then, a centerline correction algorithm (CCA) corrects the preliminary results using the centerline separation output. Finally, the vessel segmentation results are used to reconstruct the artery-vein morphology. In addition, weighted cross-entropy and Dice losses are employed to address the class imbalance problem.
Results: We constructed 50 manually labeled contrast-enhanced CT scans for five-fold cross-validation, and experimental results demonstrate that our method achieves superior segmentation performance of 97.7%, 85.1%, and 84.9% in ACC, Pre, and DSC, respectively. A series of ablation studies demonstrates the effectiveness of the proposed components.
Conclusion: The proposed method can effectively solve the problem of insufficient vascular connectivity and correct the spatial inconsistency of artery-vein separation.
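The class-imbalance remedy combines a positively weighted cross-entropy term with a soft Dice term, since vessel voxels are vastly outnumbered by background. A minimal NumPy sketch of such a combined loss (the weight and mixing coefficient are illustrative assumptions, not the paper's trained values):

```python
import numpy as np

def weighted_ce(p, y, w_pos=5.0, eps=1e-7):
    """Per-pixel cross-entropy with the rare (vessel) class up-weighted."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(w_pos * y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def dice_loss(p, y, eps=1e-7):
    """1 - soft Dice overlap, which is insensitive to the background majority."""
    inter = (p * y).sum()
    return float(1 - (2 * inter + eps) / (p.sum() + y.sum() + eps))

def combined_loss(p, y, alpha=0.5):
    """Mix of the two terms, as used to counter class imbalance."""
    return alpha * weighted_ce(p, y) + (1 - alpha) * dice_loss(p, y)

# One vessel pixel among background: a prediction that misses it ("bad")
# must be penalized more than one that finds it ("good").
y = np.array([0.0, 0.0, 0.0, 1.0])
good = np.array([0.1, 0.1, 0.1, 0.9])
bad = np.array([0.1, 0.1, 0.1, 0.1])
```

With plain unweighted cross-entropy, predicting "all background" is nearly optimal on imbalanced data; the positive weight and the Dice term both shift the optimum toward detecting the minority vessel class.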
Affiliation(s)
- Lin Pan
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Zhaopei Li
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Zhiqiang Shen
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Zheng Liu
- Faculty of Applied Science, School of Engineering, University of British Columbia, Kelowna, BC, Canada
- Liqin Huang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Mingjing Yang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Bin Zheng
- Key Laboratory of Cardio-Thoracic Surgery, Fujian Medical University, Fuzhou, China
- Taidui Zeng
- Key Laboratory of Cardio-Thoracic Surgery, Fujian Medical University, Fuzhou, China
- Shaohua Zheng
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
10
|
Clark AS, Kalmanson Z, Morton K, Hartman J, Meyer J, San-Miguel A. An unbiased, automated platform for scoring dopaminergic neurodegeneration in C. elegans. bioRxiv 2023:2023.02.02.526781. [PMID: 36778421 PMCID: PMC9915681 DOI: 10.1101/2023.02.02.526781] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 02/09/2023]
Abstract
Caenorhabditis elegans (C. elegans) has served as a simple model organism to study dopaminergic neurodegeneration, as it enables quantitative analysis of cellular and sub-cellular morphologies in live animals. These isogenic nematodes have a rapid life cycle and a transparent body, making high-throughput imaging and evaluation of fluorescently tagged neurons possible. However, the current state-of-the-art method for quantifying dopaminergic degeneration requires researchers to manually examine images and score dendrites into groups of varying neurodegeneration severity, which is time-consuming, subject to bias, and limited in sensitivity. We aim to overcome the pitfalls of manual neuron scoring by developing an automated, unbiased image processing algorithm to quantify dopaminergic neurodegeneration in C. elegans. The algorithm can be used on images acquired with different microscopy setups and requires only two inputs: a maximum projection image of the four cephalic neurons in the C. elegans head and the pixel size of the user's camera. We validate the platform by detecting and quantifying neurodegeneration in nematodes exposed to rotenone, cold shock, and 6-hydroxydopamine using 63x epifluorescence, 63x confocal, and 40x epifluorescence microscopy, respectively. Analysis of tubby mutant worms with altered fat storage showed that, contrary to our hypothesis, increased adiposity did not sensitize animals to stressor-induced neurodegeneration. We further verify the accuracy of the algorithm by comparing code-generated, categorical degeneration results with manually scored dendrites from the same experiments. The platform, which computes 19 different metrics of neurodegeneration, can provide comparative insight into how each exposure affects dopaminergic neurodegeneration patterns.
Affiliation(s)
- Andrew S Clark
- Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, North Carolina, USA
- Zachary Kalmanson
- Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, North Carolina, USA
- Katherine Morton
- Nicholas School of the Environment, Duke University, Durham, North Carolina, USA
- Jessica Hartman
- Nicholas School of the Environment, Duke University, Durham, North Carolina, USA
- Biochemistry and Molecular Biology, Medical University of South Carolina, Charleston, South Carolina, USA
- Joel Meyer
- Nicholas School of the Environment, Duke University, Durham, North Carolina, USA
- Adriana San-Miguel
- Chemical and Biomolecular Engineering, North Carolina State University, Raleigh, North Carolina, USA
11
Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022; 38:5329-5339. [PMID: 36303315 PMCID: PMC9750132 DOI: 10.1093/bioinformatics/btac712] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 10/19/2022] [Accepted: 10/26/2022] [Indexed: 12/24/2022] Open
Abstract
MOTIVATION Large-scale neuronal morphologies are essential for neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology. Although survey papers on neuron tracing from light microscopy data appeared over the last decade, the field's rapid development calls for a review of recent progress focusing on new methods and remarkable applications. RESULTS This review outlines neuron tracing in various scenarios with the goal of helping the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances in the increasingly popular deep-learning-enhanced methods. We highlight semi-automatic methods for single-neuron tracing of mammalian whole brains as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we exemplify the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli
- Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou
- Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lijuan Liu
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
12
Sung D, Risk BB, Kottke PA, Allen JW, Nahab F, Fedorov AG, Fleischer CC. Comparisons of healthy human brain temperature predicted from biophysical modeling and measured with whole brain MR thermometry. Sci Rep 2022; 12:19285. [PMID: 36369468 PMCID: PMC9652378 DOI: 10.1038/s41598-022-22599-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2022] [Accepted: 10/17/2022] [Indexed: 11/13/2022] Open
Abstract
Brain temperature is an understudied parameter relevant to brain injury and ischemia. To advance our understanding of thermal dynamics in the human brain, and given the challenges of routine experimental measurements, a biophysical modeling framework was developed to facilitate individualized brain temperature predictions. Brain temperatures predicted using our fully conserved model were compared with whole-brain chemical shift thermometry acquired in 30 healthy human subjects (15 male and 15 female, age range 18-36 years). Magnetic resonance (MR) thermometry, as well as structural imaging, angiography, and venography, were acquired prospectively on a Siemens Prisma whole-body 3 T MR scanner. Bland-Altman plots demonstrate agreement between model-predicted and MR-measured brain temperatures at the voxel level. Regional variations were similar between predicted and measured temperatures (< 0.55 °C for all 10 cortical and 12 subcortical regions of interest), and subcortical white matter temperatures were higher than those of cortical regions. We anticipate that the advancement of brain temperature as a marker of health and injury will be facilitated by a well-validated computational model that can enable predictions when experiments are not feasible.
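For readers unfamiliar with Bland-Altman analysis, the agreement statistics behind such plots reduce to the mean difference (bias) between the two methods and the 95% limits of agreement (bias ± 1.96 standard deviations of the differences). A minimal sketch, with function and variable names that are ours rather than the study's:

```python
import math

def bland_altman(a, b):
    """Return (bias, lower_loa, upper_loa) for two paired measurement
    series, e.g. model-predicted vs. MR-measured temperatures.
    Illustrative sketch only."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n                       # mean difference
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

In a Bland-Altman plot, each difference is plotted against the pair's mean, with horizontal lines at the bias and the two limits of agreement.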
Affiliation(s)
- Dongsuk Sung
- Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Benjamin B. Risk
- Department of Biostatistics and Bioinformatics, Emory University, Atlanta, GA, USA
- Peter A. Kottke
- Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Jason W. Allen
- Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA; Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA; Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- Fadi Nahab
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- Andrei G. Fedorov
- Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA; Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA
- Candace C. Fleischer
- Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA; Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA; Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA; Wesley Woods Health Center, Emory University School of Medicine, 1841 Clifton Road, Atlanta, GA 30329, USA
13
Weikert T, Friebe L, Wilder-Smith A, Yang S, Sperl JI, Neumann D, Balachandran A, Bremerich J, Sauter AW. Automated quantification of airway wall thickness on chest CT using retina U-Nets - Performance evaluation and application to a large cohort of chest CTs of COPD patients. Eur J Radiol 2022; 155:110460. [PMID: 35963191 DOI: 10.1016/j.ejrad.2022.110460] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Revised: 07/17/2022] [Accepted: 07/31/2022] [Indexed: 12/24/2022]
Abstract
PURPOSE Airway wall thickening is a consequence of chronic inflammatory processes and is usually only qualitatively described in CT radiology reports. The purpose of this study is to automatically quantify airway wall thickness across multiple airway generations and assess the diagnostic potential of this parameter in a large cohort of patients with Chronic Obstructive Pulmonary Disease (COPD). MATERIALS AND METHODS This retrospective, single-center study included a series of unenhanced chest CTs. Inclusion criteria were the mention of an explicit COPD GOLD stage in the written radiology report and the time period (01/2019-12/2021). A control group included chest CTs with completely unremarkable lungs according to the report. The DICOM images of all cases (axial orientation; slice thickness: 1 mm; soft-tissue kernel) were processed by an AI algorithm pipeline consisting of (A) a 3D U-Net for detection and tracing of the bronchial tree centerlines, (B) extraction of image patches perpendicular to the centerlines of the bronchi, and (C) a 2D U-Net for segmentation of airway walls on those patches. The performance of centerline detection and wall segmentation was assessed. The imaging parameter average wall thickness was calculated for bronchus generations 3-8 (AWT3-8) across the lungs. Mean AWT3-8 was compared between five groups (control, COPD GOLD I-IV) using non-parametric statistics. Furthermore, the established emphysema score %LAV-950 was calculated and used to classify scans (normal vs. COPD) alone and in combination with AWT3-8. RESULTS A total of 575 chest CTs were processed. Algorithm performance was very good (airway centerline detection sensitivity: 86.9%; airway wall segmentation Dice score: 0.86). AWT3-8 was statistically significantly greater in COPD patients compared to controls (2.03 vs. 1.87 mm, p < 0.001) and increased with COPD stage. The classifier that combined %LAV-950 and AWT3-8 was superior to the classifier using only %LAV-950 (AUC = 0.92 vs. 0.79). CONCLUSION Airway wall thickness increases in patients suffering from COPD and is automatically quantifiable. AWT3-8 could become a CT imaging parameter in COPD, complementing the established emphysema biomarker %LAV-950. CLINICAL RELEVANCE STATEMENT Quantitative measurements considering the complete visible bronchial tree, instead of qualitative descriptions, could enhance radiology reports and allow for precise monitoring of disease progression and diagnosis of early-stage disease.
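The AUC values reported for the classifiers have a direct Mann-Whitney interpretation: the probability that a randomly chosen COPD case scores higher than a randomly chosen control. That quantity can be computed exhaustively from the two score lists; a small illustrative sketch (names are ours, not from the study):

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (case, control) pairs in which the case scores
    higher, counting ties as 0.5. Illustrative sketch only; real
    evaluations use a sort-based O(n log n) implementation."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 1.0 means every diseased case outscores every control; 0.5 is chance-level discrimination.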
Affiliation(s)
- Thomas Weikert
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Liene Friebe
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Adrian Wilder-Smith
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Shan Yang
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Dominik Neumann
- Siemens Healthineers, Henkestrasse 127, 91052 Erlangen, Germany
- Jens Bremerich
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Alexander W Sauter
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
14
Chen W, Liu M, Du H, Radojevic M, Wang Y, Meijering E. Deep-Learning-Based Automated Neuron Reconstruction From 3D Microscopy Images Using Synthetic Training Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1031-1042. [PMID: 34847022 DOI: 10.1109/tmi.2021.3130934] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Digital reconstruction of neuronal structures from 3D microscopy images is critical for the quantitative investigation of brain circuits and functions. It is a challenging task that would greatly benefit from automatic neuron reconstruction methods. In this paper, we propose a novel method called SPE-DNR that combines spherical-patches extraction (SPE) and deep-learning for neuron reconstruction (DNR). Based on 2D Convolutional Neural Networks (CNNs) and the intensity distribution features extracted by SPE, it determines the tracing directions and classifies voxels into foreground or background. This way, starting from a set of seed points, it automatically traces the neurite centerlines and determines when to stop tracing. To avoid errors caused by imperfect manual reconstructions, we develop an image synthesizing scheme to generate synthetic training images with exact reconstructions. This scheme simulates 3D microscopy imaging conditions as well as structural defects, such as gaps and abrupt radii changes, to improve the visual realism of the synthetic images. To demonstrate the applicability and generalizability of SPE-DNR, we test it on 67 real 3D neuron microscopy images from three datasets. The experimental results show that the proposed SPE-DNR method is robust and competitive compared with other state-of-the-art neuron reconstruction methods.
15
Wang X, Liu M, Wang Y, Fan J, Meijering E. A 3D Tubular Flux Model for Centerline Extraction in Neuron Volumetric Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1069-1079. [PMID: 34826295 DOI: 10.1109/tmi.2021.3130987] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Digital morphology reconstruction from neuron volumetric images is essential for computational neuroscience. The centerline of the axonal and dendritic tree provides an effective shape representation and serves as a basis for further neuron reconstruction. However, it remains a challenge to directly extract an accurate centerline from complex neuron structures in images of poor quality. In this paper, we propose a neuron centerline extraction method based on a 3D tubular flux model via a two-stage CNN framework. In the first stage, a 3D CNN is used to learn latent neuron structure features, namely flux features, from neuron images. In the second stage, a lightweight U-Net takes the learned flux features as input to extract the centerline, with a spatially weighted average strategy to constrain the multi-voxel-wide response. Specifically, the labels of flux features in the first stage are generated by the 3D tubular model, which calculates geometric representations of the flux between each voxel in the tubular region and the nearest point on the centerline ground truth. Compared with features self-learned by networks, flux features, as a kind of prior knowledge, explicitly take advantage of the contextual distance and direction distribution information around the centerline, which is beneficial for precise centerline extraction. Experiments on two challenging datasets demonstrate that the proposed method outperforms other state-of-the-art methods by up to 18% and 35.1% in F1-measure and average distance scores, respectively, and that the extracted centerline helps improve neuron reconstruction performance.
16
Guo J, Fu R, Pan L, Zheng S, Huang L, Zheng B, He B. Coarse-to-fine airway segmentation using multi information fusion network and CNN-based region growing. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 215:106610. [PMID: 35077902 DOI: 10.1016/j.cmpb.2021.106610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 12/03/2021] [Accepted: 12/26/2021] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVES Automatic airway segmentation from chest computed tomography (CT) scans plays an important role in pulmonary disease diagnosis and computer-assisted therapy. However, low contrast at peripheral branches and complex tree-like structures remain two main challenges for airway segmentation. Recent research has illustrated that deep learning methods perform well in segmentation tasks. Motivated by these works, a coarse-to-fine segmentation framework is proposed to obtain a complete airway tree. METHODS Our framework segments the overall airway and the small branches via a multi-information fusion convolution neural network (Mif-CNN) and CNN-based region growing, respectively. In Mif-CNN, atrous spatial pyramid pooling (ASPP) is integrated into a u-shaped network to expand the receptive field and capture multi-scale information. Meanwhile, boundary and location information are incorporated into the semantic information. This information is fused to help Mif-CNN utilize additional context knowledge and useful features. To improve the segmentation result, the CNN-based region growing method is designed to focus on obtaining small branches. A voxel classification network (VCN), which can fully capture the rich information around each voxel, is applied to classify voxels into airway and non-airway. In addition, a shape reconstruction method is used to refine the airway tree. RESULTS We evaluated our method on a private dataset and a public dataset from EXACT09. Compared with the segmentation results from other methods, our method demonstrated promising accuracy in complete airway tree segmentation. On the private dataset, the Dice similarity coefficient (DSC), Intersection over Union (IoU), false positive rate (FPR), and sensitivity are 93.5%, 87.8%, 0.015%, and 90.8%, respectively. On the public dataset, the DSC, IoU, FPR, and sensitivity are 95.8%, 91.9%, 0.053%, and 96.6%, respectively. CONCLUSION The proposed Mif-CNN and CNN-based region growing method segment the airway tree accurately and efficiently in CT scans. Experimental results also demonstrate that the framework is ready for application in computer-aided diagnosis systems for lung disease and other related work.
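The four metrics reported above (DSC, IoU, FPR, sensitivity) all derive from the voxel-level confusion counts between the predicted and ground-truth binary masks. A minimal sketch for flattened binary masks (the function name is ours):

```python
def segmentation_metrics(pred, truth):
    """Compute (DSC, IoU, FPR, sensitivity) from two equal-length
    binary voxel masks. Illustrative sketch of the standard formulas,
    not the paper's evaluation code."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    dsc = 2 * tp / (2 * tp + fp + fn)   # Dice similarity coefficient
    iou = tp / (tp + fp + fn)           # Intersection over Union
    fpr = fp / (fp + tn)                # false positive rate
    sens = tp / (tp + fn)               # sensitivity (recall)
    return dsc, iou, fpr, sens
```

Note that in airway segmentation the background dominates, which is why the FPR values quoted above are tiny fractions of a percent even for imperfect masks.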
Affiliation(s)
- Jinquan Guo
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Rongda Fu
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Lin Pan
- School of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Shaohua Zheng
- School of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Liqin Huang
- School of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Bin Zheng
- Thoracic Department, Fujian Medical University Union Hospital, China
- Bingwei He
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
17
Xu Z, Feng Z, Zhao M, Sun Q, Deng L, Jia X, Jiang T, Luo P, Chen W, Tudi A, Yuan J, Li X, Gong H, Luo Q, Li A. Whole-brain connectivity atlas of glutamatergic and GABAergic neurons in the mouse dorsal and median raphe nuclei. eLife 2021; 10:65502. [PMID: 34792021 PMCID: PMC8626088 DOI: 10.7554/elife.65502] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Accepted: 11/17/2021] [Indexed: 11/25/2022] Open
Abstract
The dorsal raphe nucleus (DR) and median raphe nucleus (MR) contain populations of glutamatergic and GABAergic neurons that regulate diverse behavioral functions. However, their whole-brain input-output circuits remain incompletely elucidated. We used viral tracing combined with fluorescence micro-optical sectioning tomography to generate a comprehensive whole-brain atlas of inputs and outputs of glutamatergic and GABAergic neurons in the DR and MR. We found that these neurons received inputs from similar upstream brain regions. The glutamatergic and GABAergic neurons in the same raphe nucleus had divergent projection patterns with differences in critical brain regions. Specifically, MR glutamatergic neurons projected to the lateral habenula through multiple pathways. Correlation and cluster analysis revealed that glutamatergic and GABAergic neurons in the same raphe nucleus received heterogeneous inputs and sent different collateral projections. This connectivity atlas further elucidates the anatomical architecture of the raphe nuclei, which could facilitate better understanding of their behavioral functions.
Affiliation(s)
- Zhengchao Xu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Zhao Feng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, China
- Mengting Zhao
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Qingtao Sun
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, China
- Lei Deng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Xueyan Jia
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, China
- Tao Jiang
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, China
- Pan Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Wu Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Ayizuohere Tudi
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Jing Yuan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, China
- Xiangning Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, China
- Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Science, Shanghai, China
- Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, China; School of Biomedical Engineering, Hainan University, Haikou, China
- Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Science, Shanghai, China
18
Jiang Y, Chen W, Liu M, Wang Y, Meijering E. DeepRayburst for Automatic Shape Analysis of Tree-Like Structures in Biomedical Images. IEEE J Biomed Health Inform 2021; 26:2204-2215. [PMID: 34727041 DOI: 10.1109/jbhi.2021.3124514] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Precise quantification of tree-like structures from biomedical images, such as neuronal shape reconstruction and retinal blood vessel caliber estimation, is increasingly important for understanding normal function and pathologic processes in biology. Some handcrafted methods have been proposed for this purpose in recent years. However, each is designed only for a specific application. In this paper, we propose a shape analysis algorithm, DeepRayburst, that can serve many different applications, based on Multi-Feature Rayburst Sampling (MFRS) and a Dual Channel Temporal Convolutional Network (DC-TCN). Specifically, we first generate a Rayburst Sampling (RS) core containing a set of multidirectional rays. The MFRS is then built by extending each ray of the RS to multiple parallel rays, which extract a set of feature sequences. A Gaussian kernel fuses these feature sequences into a single feature sequence. Furthermore, we design a DC-TCN to make the rays terminate on the surface of tree-like structures according to the fused feature sequence. Finally, by analyzing the distribution patterns of the terminated rays, the algorithm can serve multiple shape analysis applications for tree-like structures. Experiments on three different applications, including soma shape reconstruction, neuronal shape reconstruction, and vessel caliber estimation, confirm that the proposed method outperforms other state-of-the-art shape analysis methods, demonstrating its flexibility and robustness.
19
Chen X, Zhang C, Zhao J, Xiong Z, Zha ZJ, Wu F. Weakly Supervised Neuron Reconstruction From Optical Microscopy Images With Morphological Priors. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3205-3216. [PMID: 33999814 DOI: 10.1109/tmi.2021.3080695] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Manually labeling neurons from high-resolution but noisy and low-contrast optical microscopy (OM) images is tedious. As a result, the lack of annotated data poses a key challenge when applying deep learning techniques for reconstructing neurons from noisy and low-contrast OM images. While traditional tracing methods provide a possible way to efficiently generate labels for supervised network training, the generated pseudo-labels contain many noisy and incorrect labels, which lead to severe performance degradation. On the other hand, the publicly available dataset, BigNeuron, provides a large number of single 3D neurons that are reconstructed using various imaging paradigms and tracing methods. Though the raw OM images are not fully available for these neurons, they convey essential morphological priors for complex 3D neuron structures. In this paper, we propose a new approach to exploit morphological priors from neurons that have been reconstructed for training a deep neural network to extract neuron signals from OM images. We integrate a deep segmentation network in a generative adversarial network (GAN), expecting the segmentation network to be weakly supervised by pseudo-labels at the pixel level while utilizing the supervision of previously reconstructed neurons at the morphology level. In our morphological-prior-guided neuron reconstruction GAN, named MP-NRGAN, the segmentation network extracts neuron signals from raw images, and the discriminator network encourages the extracted neurons to follow the morphology distribution of reconstructed neurons. Comprehensive experiments on the public VISoR-40 dataset and BigNeuron dataset demonstrate that our proposed MP-NRGAN outperforms state-of-the-art approaches with less training effort.
20
Li Q, Shen L. Neuron segmentation using 3D wavelet integrated encoder-decoder network. Bioinformatics 2021; 38:809-817. [PMID: 34647994 PMCID: PMC8756182 DOI: 10.1093/bioinformatics/btab716] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 09/13/2021] [Accepted: 10/12/2021] [Indexed: 02/03/2023] Open
Abstract
MOTIVATION 3D neuron segmentation is a key step in neuron digital reconstruction, which is essential for exploring brain circuits and understanding brain functions. However, the fine line-shaped nerve fibers of a neuron can spread over a large region, which brings great computational cost to neuron segmentation, while strong noise and disconnected nerve fibers pose further challenges to the task. RESULTS In this article, we propose a 3D wavelet and deep-learning-based 3D neuron segmentation method. The neuronal image is first partitioned into neuronal cubes to simplify the segmentation task. Then, we design 3D WaveUNet, the first 3D wavelet-integrated encoder-decoder network, to segment the nerve fibers in the cubes; the wavelets assist the deep networks in suppressing data noise and connecting broken fibers. We also produce a Neuronal Cube Dataset (NeuCuDa) using the biggest available annotated neuronal image dataset, BigNeuron, to train 3D WaveUNet. Finally, the nerve fibers segmented in cubes are assembled to generate the complete neuron, which is digitally reconstructed using an available automatic tracing algorithm. The experimental results show that our neuron segmentation method can completely extract the target neuron from noisy neuronal images. The integrated 3D wavelets efficiently improve the performance of 3D neuron segmentation and reconstruction. AVAILABILITY AND IMPLEMENTATION The data and code for this work are available at https://github.com/LiQiufu/3D-WaveUNet. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Qiufu Li
- Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China; AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen 518060, China; Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
21
Liu S, Barrow CS, Hanlon M, Lynch JP, Bucksch A. DIRT/3D: 3D root phenotyping for field-grown maize (Zea mays). Plant Physiology 2021; 187:739-757. [PMID: 34608967 PMCID: PMC8491025 DOI: 10.1093/plphys/kiab311]
Abstract
The development of crops with deeper roots holds substantial promise to mitigate the consequences of climate change. Deeper roots are an essential factor in improving water uptake, enhancing crop resilience to drought, increasing nitrogen capture, reducing fertilizer inputs, and increasing carbon sequestration from the atmosphere to improve soil organic fertility. A major bottleneck to achieving these improvements is the high-throughput phenotyping of field-grown root systems. We address this bottleneck with Digital Imaging of Root Traits (DIRT)/3D, an image-based 3D root phenotyping platform that measures 18 architecture traits from mature field-grown maize (Zea mays) root crowns (RCs) excavated with the Shovelomics technique. DIRT/3D reliably computed all 18 traits, including the distance between whorls and the number, angles, and diameters of nodal roots, on a test panel of 12 contrasting maize genotypes. The computed results were validated through comparison with manual measurements. Overall, we observed a coefficient of determination of r² > 0.84 and a high mean broad-sense heritability of H² > 0.6 for all but one trait. The average values of the 18 traits, together with a descriptor developed to characterize complete root architecture, distinguished all genotypes. DIRT/3D is a step toward automated quantification of highly occluded maize RCs, and thereby supports breeders and root biologists in improving carbon sequestration and food security in the face of the adverse effects of climate change.
Affiliation(s)
- Suxing Liu
- Department of Plant Biology, University of Georgia, Athens, Georgia 30602, USA
- Warnell School of Forestry and Natural Resources, University of Georgia, Athens, Georgia 30602, USA
- Institute of Bioinformatics, University of Georgia, Athens, Georgia 30602, USA
- Meredith Hanlon
- Department of Plant Science, Pennsylvania State University, State College, Pennsylvania 16802, USA
- Jonathan P. Lynch
- Department of Plant Science, Pennsylvania State University, State College, Pennsylvania 16802, USA
- Alexander Bucksch
- Department of Plant Biology, University of Georgia, Athens, Georgia 30602, USA
- Warnell School of Forestry and Natural Resources, University of Georgia, Athens, Georgia 30602, USA
- Institute of Bioinformatics, University of Georgia, Athens, Georgia 30602, USA
22
He Y, Huang J, Wu G, Yang J. Exploring highly reliable substructures in auto-reconstructions of a neuron. Brain Inform 2021; 8:17. [PMID: 34431008 PMCID: PMC8384950 DOI: 10.1186/s40708-021-00137-1]
Abstract
The digital reconstruction of a neuron is the most direct and effective way to investigate its morphology. Many automatic neuron tracing methods have been proposed, but without manual checking it is difficult to know whether a reconstruction, or which substructure in a reconstruction, is accurate. For reconstructions of a neuron generated by multiple automatic tracing methods with different principles or models, the common substructures are highly reliable; we call them individual motifs. In this work, we propose a Vaa3D-based method called Lamotif to explore individual motifs in automatic reconstructions of a neuron. Lamotif uses the local alignment algorithm in BlastNeuron to extract local alignment pairs between a specified objective reconstruction and multiple reference reconstructions, and combines these pairs to generate individual motifs on the objective reconstruction. Lamotif is evaluated on reconstructions of 163 neurons from multiple species, generated by four state-of-the-art tracing methods. Experimental results show that the individual motifs lie almost entirely on the corresponding gold-standard reconstructions and have a much higher precision rate than the objective reconstructions themselves. Furthermore, an objective reconstruction is generally accurate if its individual motifs have a high recall rate. Individual motifs capture the geometric substructures common to multiple reconstructions, and can be used to select accurate substructures from a reconstruction, or accurate reconstructions from an automatic reconstruction dataset of different neurons.
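The consensus idea in this entry — keep only substructures shared by several independent reconstructions — can be sketched with a simple voting scheme over quantized segments. This is an illustrative simplification: Lamotif itself relies on BlastNeuron's local alignment rather than grid quantization, and all names below are hypothetical.

```python
import numpy as np

def edge_set(nodes, edges, grid=1.0):
    """Reduce a traced reconstruction to order-free segments on a coarse grid.

    nodes: {node_id: (x, y, z)}; edges: iterable of (id, id) pairs.
    """
    q = {i: tuple(np.round(np.asarray(p) / grid).astype(int)) for i, p in nodes.items()}
    return {frozenset((q[a], q[b])) for a, b in edges}

def individual_motifs(objective, references, min_votes=2):
    """Keep segments of the objective reconstruction that at least
    `min_votes` reference reconstructions also contain."""
    votes = {}
    for ref in references:
        for seg in ref:
            votes[seg] = votes.get(seg, 0) + 1
    return {seg for seg in objective if votes.get(seg, 0) >= min_votes}
```

Segments confirmed by several tracers with different models are, as the abstract argues, far more likely to lie on the gold-standard reconstruction than segments produced by a single tracer.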
Affiliation(s)
- Yishan He
- Faculty of Information Technology, Beijing University of Technology, 100 Pingleyuan, Chaoyang District, Beijing, 100124, China; Beijing International Collaboration Base On Brain Informatics and Wisdom Services, 100 Pingleyuan, Chaoyang District, Beijing, 100124, China
- Jiajin Huang
- Faculty of Information Technology, Beijing University of Technology, 100 Pingleyuan, Chaoyang District, Beijing, 100124, China; Beijing International Collaboration Base On Brain Informatics and Wisdom Services, 100 Pingleyuan, Chaoyang District, Beijing, 100124, China
- Gaowei Wu
- School of Artificial Intelligence, University of Chinese Academy of Sciences, 19(A) Yuquan Road, Shijingshan District, Beijing, 100049, China; Institute of Automation, Chinese Academy of Sciences, Haidian District, 95 Zhongguancun East Road, Beijing, 100190, China
- Jian Yang
- Faculty of Information Technology, Beijing University of Technology, 100 Pingleyuan, Chaoyang District, Beijing, 100124, China; Beijing International Collaboration Base On Brain Informatics and Wisdom Services, 100 Pingleyuan, Chaoyang District, Beijing, 100124, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, 19(A) Yuquan Road, Shijingshan District, Beijing, 100049, China
23
Yang B, Chen W, Luo H, Tan Y, Liu M, Wang Y. Neuron Image Segmentation via Learning Deep Features and Enhancing Weak Neuronal Structures. IEEE J Biomed Health Inform 2021; 25:1634-1645. [PMID: 32809948 DOI: 10.1109/jbhi.2020.3017540]
Abstract
Neuron morphology reconstruction (tracing) in 3D volumetric images is critical for neuronal research. However, most existing neuron tracing methods are not applicable to challenging datasets in which the neuron images are contaminated by noise or contain weak filament signals. In this paper, we present a two-stage 3D neuron segmentation approach that learns deep features and enhances weak neuronal structures, to reduce the impact of image noise and strengthen weak-signal structures. In the first stage, we train a voxel-wise multi-level fully convolutional network (FCN), which specializes in learning deep features, to obtain 3D neuron segmentation maps in an end-to-end manner. In the second stage, a ray-shooting model is employed to detect discontinued segments in the first-stage segmentation results; the local neurite diameter at each break point is estimated, and the direction of the filamentary fragment is detected by a rayburst sampling algorithm. A Hessian-repair model is then built to repair the broken structures by enhancing weak neuronal structures within a fibrous region determined by the estimated local diameter and fragment direction. Experimental results demonstrate that our approach achieves better segmentation performance than other state-of-the-art methods for 3D neuron segmentation. Compared with neuron reconstructions on images segmented by other methods, the proposed approach gains 47.83% and 34.83% improvement in the average distance scores, and the average Precision and Recall rates of branch point detection are 38.74% and 22.53% higher than detection without segmentation.
24
Li Q, Zhang Y, Liang H, Gong H, Jiang L, Liu Q, Shen L. Deep learning based neuronal soma detection and counting for Alzheimer's disease analysis. Computer Methods and Programs in Biomedicine 2021; 203:106023. [PMID: 33744751 DOI: 10.1016/j.cmpb.2021.106023]
Abstract
BACKGROUND AND OBJECTIVE: Alzheimer's disease (AD) is associated with neuronal damage and loss. Micro-Optical Sectioning Tomography (MOST) provides an approach to acquiring high-resolution images for whole-brain neuron analysis. Applying this technique to the AD mouse brain enables us to investigate neuron changes during the progression of AD pathology; however, handling the huge amount of data becomes the bottleneck. METHODS: Using MOST technology, we acquired 3D whole-brain images of six AD mice and sampled the imaging data of four regions in each mouse brain for AD progression analysis. To count neurons, we propose a deep-learning-based method that detects neuronal somata in the images. The neuronal images are first cut into small cubes; a Convolutional Neural Network (CNN) classifier is then designed to detect somata by classifying the cubes into three categories: "soma", "fiber", and "background". RESULTS: Compared with the manual method and the currently available NeuroGPS software, our method demonstrates faster speed and higher accuracy in identifying neurons in MOST images. By applying it to various brain regions of 6-month-old and 12-month-old AD mice, we found that the number of neurons in three brain regions (lateral entorhinal cortex, medial entorhinal cortex, and presubiculum) decreased slightly with age, which is consistent with previously reported experimental results. CONCLUSION: This paper provides a new method to automatically handle huge amounts of data and accurately identify neuronal somata in MOST images. It also opens the possibility of constructing a whole-brain neuron projection to reveal the impact of AD pathology on the mouse brain.
Affiliation(s)
- Qiufu Li
- Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong, 518060, China; AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen 518060, China; Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
- Yu Zhang
- College of Life Sciences and Oceanography, Shenzhen University, Shenzhen, Guangdong, 518055, China
- Hanbang Liang
- Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong, 518060, China; AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen 518060, China; Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
- Hui Gong
- National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan 430074, China
- Liang Jiang
- College of Life Sciences and Oceanography, Shenzhen University, Shenzhen, Guangdong, 518055, China
- Qiong Liu
- College of Life Sciences and Oceanography, Shenzhen University, Shenzhen, Guangdong, 518055, China; Shenzhen Bay Laboratory, Shenzhen, 518055, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China
- Linlin Shen
- Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong, 518060, China; AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen 518060, China; Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China.
25
Chen W, Liu M, Zhan Q, Tan Y, Meijering E, Radojevic M, Wang Y. Spherical-Patches Extraction for Deep-Learning-Based Critical Points Detection in 3D Neuron Microscopy Images. IEEE Transactions on Medical Imaging 2021; 40:527-538. [PMID: 33055023 DOI: 10.1109/tmi.2020.3031289]
Abstract
Digital reconstruction of neuronal structures is very important to neuroscience research. Many existing reconstruction algorithms require a set of good seed points, and 3D neuron critical points, including terminations, branch points, and cross-over points, are good candidates for such seed points. However, a method that can simultaneously detect all types of critical points has barely been explored. In this work, we present a method to simultaneously detect all three types of 3D critical points in neuron microscopy images, based on a spherical-patches extraction (SPE) method and a 2D multi-stream convolutional neural network (CNN). SPE uses a set of concentric spherical surfaces centered at a given critical point candidate to extract intensity distribution features around the point. A group of 2D spherical patches is then generated by projecting the surfaces into 2D rectangular image patches according to the order of the azimuth and polar angles. Finally, a 2D multi-stream CNN, in which each stream receives one spherical patch as input, is designed to learn the intensity distribution features from these patches and classify the candidate into one of four classes: termination, branch point, cross-over point, or non-critical point. Experimental results confirm that the proposed method outperforms other state-of-the-art critical point detection methods, and the critical-point-based neuron reconstruction results demonstrate that the detected points make good seed points for neuron reconstruction. Additionally, we have established a public dataset dedicated to neuron critical point detection, released along with this article.
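The spherical-patches extraction step described above can be mimicked with plain nearest-neighbor sampling: each concentric sphere around a candidate point is unrolled into a 2D (polar × azimuth) patch, and the stack of patches feeds one CNN stream per sphere. This is a rough sketch under that assumption; the paper's SPE details (sampling density, interpolation) may differ, and the function names are illustrative.

```python
import numpy as np

def spherical_patch(vol, center, radius, n_theta=16, n_phi=32):
    """Unroll one spherical surface around `center` into a 2D patch
    by nearest-neighbor sampling over (polar, azimuth) angles."""
    theta = np.linspace(0.0, np.pi, n_theta)                  # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)  # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    cz, cy, cx = center
    z = cz + radius * np.cos(T)
    y = cy + radius * np.sin(T) * np.cos(P)
    x = cx + radius * np.sin(T) * np.sin(P)
    zi = np.clip(np.round(z).astype(int), 0, vol.shape[0] - 1)
    yi = np.clip(np.round(y).astype(int), 0, vol.shape[1] - 1)
    xi = np.clip(np.round(x).astype(int), 0, vol.shape[2] - 1)
    return vol[zi, yi, xi]

def spherical_patches(vol, center, radii):
    """Concentric surfaces -> a stack of 2D patches (one CNN stream each)."""
    return np.stack([spherical_patch(vol, center, r) for r in radii])
```

The projection turns a rotationally structured 3D neighborhood into small 2D images, which is what lets an ordinary 2D multi-stream CNN classify the candidate point.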
26
Jiang Y, Chen W, Liu M, Wang Y, Meijering E. 3D Neuron Microscopy Image Segmentation via the Ray-Shooting Model and a DC-BLSTM Network. IEEE Transactions on Medical Imaging 2021; 40:26-37. [PMID: 32881683 DOI: 10.1109/tmi.2020.3021493]
Abstract
The morphology reconstruction (tracing) of neurons in 3D microscopy images is important to neuroscience research. However, this task remains very challenging because of the low signal-to-noise ratio (SNR) and the discontinuous segments of neurite patterns in the images. In this paper, we present a neuronal structure segmentation method based on a ray-shooting model and a Long Short-Term Memory (LSTM)-based network to enhance weak-signal neuronal structures and remove background noise in 3D neuron microscopy images. Specifically, the ray-shooting model is used to extract intensity distribution features within a local region of the image, and we design a neural network based on a dual-channel bidirectional LSTM (DC-BLSTM) to detect foreground voxels according to the voxel-intensity and boundary-response features extracted by the multiple ray-shooting models generated across the whole image. In this way, we transform the 3D image segmentation task into multiple 1D ray/sequence segmentation tasks, which makes labeling training samples much easier than in many existing Convolutional Neural Network (CNN)-based 3D neuron image segmentation methods. In the experiments, we evaluate our method on challenging 3D neuron images from two datasets, the BigNeuron dataset and the Whole Mouse Brain Sub-image (WMBS) dataset. Compared with neuron tracing results on images segmented by other state-of-the-art neuron segmentation methods, our method improves the distance scores by about 32% and 27% on the BigNeuron dataset, and by about 38% and 27% on the WMBS dataset.
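The reduction from 3D segmentation to 1D sequence labeling can be pictured with a toy ray sampler: intensities are read off along rays cast from the voxel of interest, and each ray becomes one short input sequence. This in-plane version is a deliberate simplification of the paper's 3D ray-shooting model, and the names are illustrative.

```python
import numpy as np

def shoot_rays(vol, voxel, n_rays=8, length=5):
    """Sample intensity sequences along evenly spaced in-plane rays
    starting at `voxel`; each row is one 1D sequence for a classifier."""
    z, y, x = voxel
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    rays = np.empty((n_rays, length))
    for i, a in enumerate(angles):
        dy, dx = np.sin(a), np.cos(a)
        for t in range(length):
            yy = int(np.clip(round(y + t * dy), 0, vol.shape[1] - 1))
            xx = int(np.clip(round(x + t * dx), 0, vol.shape[2] - 1))
            rays[i, t] = vol[z, yy, xx]
    return rays
```

Labeling such 1D sequences (foreground reached / not reached along each ray) is far cheaper than producing voxel-wise 3D annotations, which is the practical advantage the abstract points out.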
27
Zhao J, Chen X, Xiong Z, Liu D, Zeng J, Xie C, Zhang Y, Zha ZJ, Bi G, Wu F. Neuronal Population Reconstruction From Ultra-Scale Optical Microscopy Images via Progressive Learning. IEEE Transactions on Medical Imaging 2020; 39:4034-4046. [PMID: 32746145 DOI: 10.1109/tmi.2020.3009148]
Abstract
Reconstruction of neuronal populations from ultra-scale optical microscopy (OM) images is essential for investigating neuronal circuits and brain mechanisms. Noise, low contrast, huge memory requirements, and high computational cost pose significant challenges to neuronal population reconstruction. Recently, many studies have extracted neuron signals using deep neural networks (DNNs); however, training such DNNs usually relies on a huge number of voxel-wise annotations in OM images, which are expensive in both money and labor. In this paper, we propose a novel framework for dense neuronal population reconstruction from ultra-scale images. To avoid the high cost of manual annotations for training DNNs, we propose a progressive learning scheme for neuronal population reconstruction (PLNPR) that requires no manual annotation: it consists of a traditional neuron tracing module and a deep segmentation network that mutually complement and progressively promote each other. To reconstruct dense neuronal populations from a terabyte-sized ultra-scale image, we introduce an automatic framework that adaptively traces neurons block by block and fuses fragmented neurites in overlapping regions continuously and smoothly. We build a dataset, "VISoR-40", consisting of 40 large-scale OM image blocks from cortical regions of a mouse. Extensive experiments on our VISoR-40 dataset and the public BigNeuron dataset demonstrate the effectiveness and superiority of our method on both neuronal population reconstruction and single-neuron reconstruction. Furthermore, we successfully apply our method to reconstruct dense neuronal populations from an ultra-scale mouse brain slice. The proposed adaptive block propagation and fusion strategies greatly improve the completeness of neurites in dense neuronal population reconstruction.
28
Li Q, Shen L. 3D Neuron Reconstruction in Tangled Neuronal Image With Deep Networks. IEEE Transactions on Medical Imaging 2020; 39:425-435. [PMID: 31295108 DOI: 10.1109/tmi.2019.2926568]
Abstract
Digital reconstruction, or tracing, of 3D neurons is essential for understanding brain function. While existing automatic tracing algorithms work well for clean neuronal images containing a single neuron, they are not robust when tracing a neuron surrounded by nerve fibers. We propose a 3D U-Net-based network, namely 3D U-Net Plus, to segment the neuron from the surrounding fibers before applying tracing algorithms. All the images in BigNeuron, the largest available neuronal image dataset, contain clean neurons with no interfering nerve fibers, and are therefore not suitable for training the segmentation network. Based upon the BigNeuron images, we synthesize a SYNthetic TAngled NEuronal Image dataset (SYNTANEI) to train the proposed network by fusing the neurons with extracted nerve fibers. Owing to the adoption of dropout, atrous convolution, and Atrous Spatial Pyramid Pooling (ASPP), experimental results on synthetic and real tangled neuronal images show that the proposed 3D U-Net Plus network achieves very promising segmentation results. The neurons reconstructed by the tracing algorithm from the segmentation results match the ground truth significantly better than those traced from the original images.
29
Li S, Quan T, Zhou H, Huang Q, Guan T, Chen Y, Xu C, Kang H, Li A, Fu L, Luo Q, Gong H, Zeng S. Brain-Wide Shape Reconstruction of a Traced Neuron Using the Convex Image Segmentation Method. Neuroinformatics 2019; 18:199-218. [PMID: 31396858 DOI: 10.1007/s12021-019-09434-x]
Abstract
Neuronal shape reconstruction is a helpful technique for establishing neuron identity, inferring neuronal connections, mapping neuronal circuits, and so on. Advances in optical imaging have enabled data collection that captures the shape of a neuron across the whole brain, considerably extending the scope of neuronal anatomy. However, such datasets often include many fuzzy neurites and many crossover regions where neurites are closely attached, which makes neuronal shape reconstruction more challenging. In this study, we propose a convex image segmentation model for neuronal shape reconstruction that segments a neurite into cross sections along its traced skeleton. Both the sparse nature of gradient images and the rule that fuzzy neurites usually have a small radius are utilized to improve reconstruction in regions with fuzzy neurites. Because the model is closely tied to the traced skeleton points, this relationship can be used to identify neurites in crossover regions. We demonstrate the performance of our model on various datasets, including those with fuzzy neurites and with crossover regions, and verify that it can robustly reconstruct neuron shapes on a brain-wide scale.
Affiliation(s)
- Shiwei Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; School of Mathematics and Economics, Hubei University of Education, Wuhan, 430205, Hubei, China
- Hang Zhou
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Qing Huang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Tao Guan
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Yijun Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Cheng Xu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Hongtao Kang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Ling Fu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
30
Li S, Quan T, Xu C, Huang Q, Kang H, Chen Y, Li A, Fu L, Luo Q, Gong H, Zeng S. Optimization of Traced Neuron Skeleton Using Lasso-Based Model. Front Neuroanat 2019; 13:18. [PMID: 30846931 PMCID: PMC6393391 DOI: 10.3389/fnana.2019.00018]
Abstract
Reconstruction of neuronal morphology from images mainly involves the extraction of neuronal skeleton points and is an indispensable step in the quantitative analysis of neurons. Due to the complex morphology of neurons, many widely used tracing methods have difficulty accurately acquiring skeleton points near branch points or in tortuous structures. Here, we propose two models to solve these problems. One is based on an L1-norm minimization model, which better identifies tortuous structures, that is, local structures whose skeleton points have large curvature; the other detects an optimized branch point by considering the combination patterns of all neurites that link to it. We combine these two models to achieve optimized skeleton detection for a neuron. We validate our models on various datasets, including MOST and BigNeuron, and demonstrate that our method can optimize traced skeletons from large-scale images. These characteristics indicate that our approach can reduce the manual editing of traced skeletons and help accelerate the accurate reconstruction of neuronal morphology.
Affiliation(s)
- Shiwei Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China; School of Mathematics and Economics, Hubei University of Education, Hubei, China
- Cheng Xu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Qing Huang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Hongtao Kang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Yijun Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Ling Fu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China