1. Ju H, Skibbe H, Fukui M, Yoshimura SH, Naoki H. Machine learning-guided reconstruction of cytoskeleton network from live-cell AFM images. iScience 2024; 27:110907. PMID: 39391724; PMCID: PMC11465173; DOI: 10.1016/j.isci.2024.110907.
Abstract
How actin filaments (F-actins) are dynamically reorganized in motile cells at the level of individual filaments remains an open question. To address it, a high-speed atomic force microscopy (HS-AFM) system has been developed to live-image the intracellular dynamics of individual F-actins. However, noise and low resolution made it difficult to fully recognize individual F-actins in the HS-AFM images. To tackle this problem, we developed a new machine learning method that quantitatively recognizes individual F-actins. The method estimates F-actin orientation from the image while improving the resolution. We found that F-actins were oriented at ±35° toward the membrane in the lamellipodia, which is consistent with Arp2/3 complex-induced branching. Furthermore, in the cell cortex our results showed non-random orientation at four specific angles, suggesting a new mechanism of F-actin organization and demonstrating the potential of our newly developed method to fundamentally improve our understanding of the structural dynamics of F-actin networks.
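To make the orientation analysis described in this abstract concrete, the short sketch below computes signed angles between filament segments and a membrane normal and histograms them. It is an illustrative example only, not the authors' method; the segment endpoints, the membrane normal, and the function name `filament_angles` are hypothetical.

```python
import numpy as np

def filament_angles(segments, membrane_normal):
    """Signed angle (degrees, in [-90, 90]) between each filament segment and
    a membrane normal. `segments` is an (N, 2, 2) array of 2D endpoints."""
    n = np.asarray(membrane_normal, dtype=float)
    n /= np.linalg.norm(n)
    d = segments[:, 1, :] - segments[:, 0, :]            # segment direction vectors
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    unsigned = np.degrees(np.arccos(np.abs(np.clip(d @ n, -1.0, 1.0))))
    sign = np.sign(n[0] * d[:, 1] - n[1] * d[:, 0])      # 2D cross product for the sign
    return sign * unsigned

# Toy example: two segments tilted 35 degrees on either side of the membrane normal.
a = np.radians(35)
segs = np.array([[[0, 0], [np.sin(a), np.cos(a)]],
                 [[0, 0], [-np.sin(a), np.cos(a)]]])
angles = filament_angles(segs, membrane_normal=[0, 1])
hist, edges = np.histogram(angles, bins=36, range=(-90, 90))
print(np.round(angles, 1))                               # approximately [-35.  35.]
```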
Affiliation(s)
- Hanqiu Ju: Laboratory of Data-driven Biology, Graduate School of Integrated Sciences for Life, Hiroshima University, Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526, Japan; Mathematical Sciences Research Laboratory, Advanced Technology Research & Development Division, Nikon Corporation, 1-5-20, Nishioi, Shinagawa-ku, Tokyo 140-8601, Japan
- Henrik Skibbe: Brain Image Analysis Unit, RIKEN Center for Brain Science, Wako-shi, Japan
- Masaya Fukui: Laboratory of Data-driven Biology, Graduate School of Integrated Sciences for Life, Hiroshima University, Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526, Japan
- Shige H. Yoshimura: Laboratory of Plasma Membrane and Nuclear Signaling, Graduate School of Biostudies, Kyoto University, Kyoto, Japan; Center for Living Systems Information Science (CeLiSIS), Kyoto University, Yoshida-Konoe-Cho, Sakyo-ku, Kyoto 606-8501, Japan
- Honda Naoki: Laboratory of Data-driven Biology, Graduate School of Integrated Sciences for Life, Hiroshima University, Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526, Japan; Graduate School of Medicine, Nagoya University, Tsurumai-cho, Showa-ku, Nagoya, Aichi 466-8550, Japan; Theoretical Biology Research Group, Exploratory Research Center on Life and Living Systems (ExCELLS), National Institutes of Natural Sciences, Okazaki, Aichi, Japan
2. Athey TL, Tward DJ, Mueller U, Younes L, Vogelstein JT, Miller MI. Preserving Derivative Information while Transforming Neuronal Curves. Neuroinformatics 2024; 22:63-74. PMID: 38036915; PMCID: PMC10917852; DOI: 10.1007/s12021-023-09648-0.
Abstract
The international neuroscience community is building the first comprehensive atlases of brain cell types to understand how the brain functions from a higher-resolution and more integrated perspective than ever before. In order to build these atlases, subsets of neurons (e.g., serotonergic neurons, prefrontal cortical neurons) are traced in individual brain samples by placing points along dendrites and axons. Then, the traces are mapped to common coordinate systems by transforming the positions of their points, which neglects how the transformation bends the line segments in between. In this work, we apply the theory of jets to describe how to preserve derivatives of neuron traces up to any order. We provide a framework to compute the possible error introduced by standard mapping methods, which involves the Jacobian of the mapping transformation. We show how our first-order method improves mapping accuracy in both simulated and real neuron traces under random diffeomorphisms. Our method is freely available in our open-source Python package brainlit.
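The core idea, mapping trace points with the transformation and their tangents with its Jacobian, can be illustrated numerically. The sketch below is a generic first-order example using a finite-difference Jacobian and a toy diffeomorphism; it is not the brainlit implementation, and `transform_with_tangents` and `phi` are hypothetical names.

```python
import numpy as np

def transform_with_tangents(points, tangents, phi, eps=1e-5):
    """First-order transport: map trace points with a diffeomorphism `phi`
    and push their tangent vectors through a finite-difference Jacobian of `phi`."""
    new_pts = np.array([phi(p) for p in points])
    new_tan = np.empty_like(new_pts)
    dim = points.shape[1]
    for i, (p, v) in enumerate(zip(points, tangents)):
        J = np.empty((dim, dim))
        for k in range(dim):                       # central-difference Jacobian column
            e = np.zeros(dim); e[k] = eps
            J[:, k] = (phi(p + e) - phi(p - e)) / (2 * eps)
        new_tan[i] = J @ v                         # tangent transported by D(phi)
    return new_pts, new_tan

# Toy diffeomorphism: a smooth nonlinear warp of the plane.
phi = lambda p: np.array([p[0] + 0.1 * np.sin(p[1]), p[1] + 0.1 * np.sin(p[0])])
t = np.linspace(0, np.pi, 20)
pts = np.stack([t, np.sin(t)], axis=1)            # sampled neuron trace (toy curve)
tans = np.gradient(pts, t, axis=0)                # discrete tangents along the trace
warped_pts, warped_tans = transform_with_tangents(pts, tans, phi)
print(warped_pts.shape, warped_tans.shape)        # (20, 2) (20, 2)
```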
Affiliation(s)
- Thomas L Athey: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Institute of Computational Medicine, Johns Hopkins University, Baltimore, MD, USA
- Daniel J Tward: Department of Computational Medicine, University of California at Los Angeles, Los Angeles, CA, USA; Department of Neurology, University of California at Los Angeles, Los Angeles, CA, USA
- Ulrich Mueller: Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA
- Laurent Younes: Institute of Computational Medicine, Johns Hopkins University, Baltimore, MD, USA; Department of Applied Mathematics & Statistics, Johns Hopkins University, Baltimore, MD, USA; Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA; Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, USA
- Joshua T Vogelstein: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Institute of Computational Medicine, Johns Hopkins University, Baltimore, MD, USA; Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA; Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, USA
- Michael I Miller: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Institute of Computational Medicine, Johns Hopkins University, Baltimore, MD, USA; Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA; Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, USA
3. Boorboor S, Mathew S, Ananth M, Talmage D, Role LW, Kaufman AE. NeuRegenerate: A Framework for Visualizing Neurodegeneration. IEEE Trans Vis Comput Graph 2023; 29:1625-1637. PMID: 34757909; PMCID: PMC10070008; DOI: 10.1109/tvcg.2021.3127132.
Abstract
Recent advances in high-resolution microscopy have allowed scientists to better understand the underlying brain connectivity. However, because biological specimens can only be imaged at a single timepoint, studying changes to neural projections over time is limited to observations gathered using population analysis. In this article, we introduce NeuRegenerate, a novel end-to-end framework for the prediction and visualization of changes in neural fiber morphology within a subject across specified age-timepoints. To predict projections, we present neuReGANerator, a deep-learning network based on a cycle-consistent generative adversarial network (GAN) that translates features of neuronal structures across age-timepoints for large brain microscopy volumes. We improve the reconstruction quality of the predicted neuronal structures by implementing a density multiplier and a new loss function, called the hallucination loss. Moreover, to alleviate artifacts that occur due to tiling of large input volumes, we introduce a spatial-consistency module in the training pipeline of neuReGANerator. Finally, to visualize the change in projections predicted using neuReGANerator, NeuRegenerate offers two modes: (i) neuroCompare, which simultaneously visualizes the difference in the structures of the neuronal projections from two age domains (using a structural view and a bounded view), and (ii) neuroMorph, a vesselness-based morphing technique to interactively visualize the transformation of the structures from one age-timepoint to the other. Our framework is designed specifically for volumes acquired using wide-field microscopy. We demonstrate our framework by visualizing the structural changes within the cholinergic system of the mouse brain between a young and an old specimen.
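For readers unfamiliar with cycle-consistent GANs, the sketch below shows the generic cycle-consistency term such networks optimize, using placeholder 3D convolutions as generators. It does not reproduce neuReGANerator, the hallucination loss, or the spatial-consistency module; all names, shapes, and weights here are illustrative assumptions.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(G_young2old, G_old2young, x_young, x_old, lam=10.0):
    """L1 cycle loss: translating a volume to the other age domain and back
    should reproduce the input."""
    rec_young = G_old2young(G_young2old(x_young))
    rec_old = G_young2old(G_old2young(x_old))
    l1 = nn.L1Loss()
    return lam * (l1(rec_young, x_young) + l1(rec_old, x_old))

# Placeholder 3D "generators" (single conv layers) just to exercise the loss.
G_a = nn.Conv3d(1, 1, kernel_size=3, padding=1)
G_b = nn.Conv3d(1, 1, kernel_size=3, padding=1)
x_y = torch.rand(1, 1, 16, 16, 16)   # toy "young" volume patch
x_o = torch.rand(1, 1, 16, 16, 16)   # toy "old" volume patch
loss = cycle_consistency_loss(G_a, G_b, x_y, x_o)
loss.backward()
print(float(loss))
```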
4. Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022; 38:5329-5339. PMID: 36303315; PMCID: PMC9750132; DOI: 10.1093/bioinformatics/btac712.
Abstract
Motivation: Large-scale neuronal morphologies are essential to neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology. Although several survey papers on neuron tracing from light microscopy data have appeared in the last decade, the field has developed rapidly, and a review focusing on new methods and remarkable applications is needed.
Results: This review outlines neuron tracing in various scenarios with the goal of helping the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances in the increasingly popular deep-learning-enhanced methods. We highlight the semi-automatic methods for single neuron tracing of mammalian whole brains as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we exemplify the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang: School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli: Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou: Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lijuan Liu: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
5. Zhou H, Cao T, Liu T, Liu S, Chen L, Chen Y, Huang Q, Ye W, Zeng S, Quan T. Super-resolution Segmentation Network for Reconstruction of Packed Neurites. Neuroinformatics 2022; 20:1155-1167. PMID: 35851944; DOI: 10.1007/s12021-022-09594-3.
Abstract
Neuron reconstruction can provide the quantitative data required for measuring neuronal morphology and is crucial in brain research. However, the difficulty of reconstructing densely packed neurites, which in most cases demands massive manual labor for accurate reconstruction, has not been well resolved. In this work, we provide a new pathway for solving this challenge by proposing the super-resolution segmentation network (SRSNet), which builds the mapping between neurites in the original neuronal images and their segmentation in a higher-resolution (HR) space. During the segmentation process, the distances between the boundaries of the packed neurites are enlarged, and only the central parts of the neurites are segmented. Owing to this strategy, super-resolution segmented images are produced for subsequent reconstruction. We carried out experiments on neuronal images with a voxel size of 0.2 μm × 0.2 μm × 1 μm produced by fMOST. SRSNet achieves an average F1 score of 0.88 for automatic packed-neurite reconstruction, a metric that takes both precision and recall into account, whereas the average F1 scores of other state-of-the-art automatic tracing methods are below 0.70.
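The F1 score quoted above combines precision and recall. As a minimal illustration (not the paper's exact evaluation protocol), the sketch below scores a reconstruction against ground-truth skeleton nodes using nearest-neighbour matching with a distance tolerance; `tracing_f1` and the tolerance value are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def tracing_f1(pred_nodes, gt_nodes, tol=2.0):
    """Precision/recall/F1 for traced skeleton nodes: a node counts as a hit
    if a node of the other set lies within `tol` voxels."""
    tree_gt = cKDTree(gt_nodes)
    tree_pred = cKDTree(pred_nodes)
    d_pred, _ = tree_gt.query(pred_nodes)          # each predicted node -> nearest ground truth
    d_gt, _ = tree_pred.query(gt_nodes)            # each ground-truth node -> nearest prediction
    precision = np.mean(d_pred <= tol)
    recall = np.mean(d_gt <= tol)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    return precision, recall, f1

gt = np.random.rand(200, 3) * 100                  # toy ground-truth skeleton nodes
pred = gt + np.random.randn(200, 3) * 0.5          # noisy "reconstruction" of the same nodes
print(tracing_f1(pred, gt))
```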
Affiliation(s)
- Hang Zhou: School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, China
- Tingting Cao: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Tian Liu: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Shijie Liu: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Lu Chen: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Yijun Chen: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Qing Huang: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Wei Ye: School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
- Shaoqun Zeng: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Tingwei Quan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
6. Huang Q, Cao T, Zeng S, Li A, Quan T. Minimizing probability graph connectivity cost for discontinuous filamentary structures tracing in neuron image. IEEE J Biomed Health Inform 2022; 26:3092-3103. PMID: 35104232; DOI: 10.1109/jbhi.2022.3147512.
Abstract
Neuron tracing from optical images is critical to understanding brain function in disease. A key problem is tracing discontinuous filamentary structures against a noisy background, which is commonly encountered in neuronal and some medical images. Broken traces lead to cumulative topological errors, and current methods struggle to assemble the various fragmentary traces into correct connections. In this paper, we propose a graph-connectivity theoretical method for precise filamentary structure tracing in neuron images. First, we build the initial subgraphs of signals via a region-to-region tracing method on a CNN-predicted probability map. The CNN removes noise interference, but its prediction for some elongated fragments is still incomplete. Second, we reformulate the global connection problem of individual or fragmented subgraphs under heuristic graph restrictions as a dynamic linear programming function that minimizes graph connectivity cost, where the connection cost between breakpoints is calculated from their probability strength via a minimum cost path. Experimental results on challenging neuronal images show that the proposed method outperforms existing methods and achieves results similar to manual tracing, even for complex discontinuities. Performance on vessel images indicates the potential of the method for tracing other tubular objects.
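The "minimum cost path on a probability map" step can be illustrated with a generic shortest-path computation in which each pixel costs -log(probability), so the cheapest path follows high-probability signal between two breakpoints. The sketch below uses a plain 4-connected Dijkstra on a toy 2D map; it is not the authors' dynamic-programming formulation, and `min_cost_path` is a hypothetical name.

```python
import heapq
import numpy as np

def min_cost_path(prob, start, end, eps=1e-6):
    """Connect two breakpoints on a 2D probability map along the path that
    minimizes the accumulated -log(probability) (4-connected Dijkstra)."""
    cost = -np.log(np.clip(prob, eps, 1.0))
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
    path, node = [end], end
    while node != start:                    # walk predecessors back to the start
        node = prev[node]
        path.append(node)
    return path[::-1], dist[end]

prob = np.full((50, 50), 0.05)
prob[25, :] = 0.9                           # one bright neurite-like row in a noisy map
path, total_cost = min_cost_path(prob, (25, 2), (25, 47))
print(len(path), round(float(total_cost), 2))
```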
7. Jiang Y, Chen W, Liu M, Wang Y, Meijering E. DeepRayburst for Automatic Shape Analysis of Tree-Like Structures in Biomedical Images. IEEE J Biomed Health Inform 2021; 26:2204-2215. PMID: 34727041; DOI: 10.1109/jbhi.2021.3124514.
Abstract
Precise quantification of tree-like structures from biomedical images, such as neuronal shape reconstruction and retinal blood vessel caliber estimation, is increasingly important in understanding normal function and pathologic processes in biology. Some handcrafted methods have been proposed for this purpose in recent years; however, each is designed for only a specific application. In this paper, we propose a shape analysis algorithm, DeepRayburst, that can be applied to many different applications based on Multi-Feature Rayburst Sampling (MFRS) and a Dual Channel Temporal Convolutional Network (DC-TCN). Specifically, we first generate a Rayburst Sampling (RS) core containing a set of multidirectional rays. The MFRS is then designed by extending each ray of the RS to multiple parallel rays, which extract a set of feature sequences. A Gaussian kernel fuses these feature sequences into a single feature sequence. Furthermore, we design a DC-TCN to make the rays terminate on the surface of tree-like structures according to the fused feature sequence. Finally, by analyzing the distribution patterns of the terminated rays, the algorithm can serve multiple shape analysis applications for tree-like structures. Experiments on three applications, including soma shape reconstruction, neuronal shape reconstruction, and vessel caliber estimation, confirm that the proposed method outperforms other state-of-the-art shape analysis methods, demonstrating its flexibility and robustness.
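A stripped-down 2D version of the rayburst idea, casting evenly spaced rays from a seed point and stopping each ray where the intensity drops, is sketched below. It omits the multi-feature extension and the DC-TCN termination model described in the abstract; `rayburst_radii` and all parameter values are illustrative assumptions.

```python
import numpy as np

def rayburst_radii(image, seed, n_rays=36, step=0.5, max_len=50, thresh=0.5):
    """Cast `n_rays` evenly spaced rays from `seed` (y, x) and record the
    distance at which the nearest-pixel intensity first drops below `thresh`,
    giving a crude local radius profile of a bright structure."""
    angles = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
    radii = np.empty(n_rays)
    h, w = image.shape
    for i, a in enumerate(angles):
        d = np.array([np.cos(a), np.sin(a)])         # ray direction
        r = 0.0
        while r < max_len:
            y, x = np.round(seed + r * d).astype(int)
            if not (0 <= y < h and 0 <= x < w) or image[y, x] < thresh:
                break
            r += step
        radii[i] = r
    return radii

# Toy image: a bright disk of radius ~8 pixels around the seed.
yy, xx = np.mgrid[0:64, 0:64]
img = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 8 ** 2).astype(float)
radii = rayburst_radii(img, seed=np.array([32.0, 32.0]))
print(round(radii.mean(), 1))                         # roughly the disk radius (~8)
```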
8. Chen X, Zhang C, Zhao J, Xiong Z, Zha ZJ, Wu F. Weakly Supervised Neuron Reconstruction From Optical Microscopy Images With Morphological Priors. IEEE Trans Med Imaging 2021; 40:3205-3216. PMID: 33999814; DOI: 10.1109/tmi.2021.3080695.
Abstract
Manually labeling neurons from high-resolution but noisy and low-contrast optical microscopy (OM) images is tedious. As a result, the lack of annotated data poses a key challenge when applying deep learning techniques for reconstructing neurons from noisy and low-contrast OM images. While traditional tracing methods provide a possible way to efficiently generate labels for supervised network training, the generated pseudo-labels contain many noisy and incorrect labels, which lead to severe performance degradation. On the other hand, the publicly available dataset, BigNeuron, provides a large number of single 3D neurons that are reconstructed using various imaging paradigms and tracing methods. Though the raw OM images are not fully available for these neurons, they convey essential morphological priors for complex 3D neuron structures. In this paper, we propose a new approach to exploit morphological priors from neurons that have been reconstructed for training a deep neural network to extract neuron signals from OM images. We integrate a deep segmentation network in a generative adversarial network (GAN), expecting the segmentation network to be weakly supervised by pseudo-labels at the pixel level while utilizing the supervision of previously reconstructed neurons at the morphology level. In our morphological-prior-guided neuron reconstruction GAN, named MP-NRGAN, the segmentation network extracts neuron signals from raw images, and the discriminator network encourages the extracted neurons to follow the morphology distribution of reconstructed neurons. Comprehensive experiments on the public VISoR-40 dataset and BigNeuron dataset demonstrate that our proposed MP-NRGAN outperforms state-of-the-art approaches with less training effort.
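The abstract's combination of pixel-level weak supervision with morphology-level adversarial supervision can be sketched as a two-term generator-side loss. The code below uses tiny placeholder networks and random tensors; it is not the MP-NRGAN architecture, the discriminator's own training step is omitted, and the loss weight is an arbitrary assumption.

```python
import torch
import torch.nn as nn

# Placeholder networks: a tiny 3D segmentation net and a morphology
# discriminator; both are stand-ins, not the MP-NRGAN architecture.
seg = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(8, 1, 3, padding=1))
disc = nn.Sequential(nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                     nn.Flatten(), nn.LazyLinear(1))

bce = nn.BCEWithLogitsLoss()
image = torch.rand(2, 1, 16, 16, 16)          # raw OM patches (toy)
pseudo = (image > 0.8).float()                # noisy tracer-derived pseudo-labels (toy)
real_morph = (torch.rand(2, 1, 16, 16, 16) > 0.9).float()  # masks rendered from
                                              # previously reconstructed neurons (toy)

pred = seg(image)
loss_pix = bce(pred, pseudo)                  # weak pixel-level supervision
# Morphology-level supervision: the discriminator should judge the predicted
# mask as "real" morphology (generator-side adversarial term only).
loss_adv = bce(disc(torch.sigmoid(pred)), torch.ones(2, 1))
loss_seg = loss_pix + 0.1 * loss_adv
loss_seg.backward()
print(float(loss_pix), float(loss_adv))
```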
9. Zhao J, Chen X, Xiong Z, Liu D, Zeng J, Xie C, Zhang Y, Zha ZJ, Bi G, Wu F. Neuronal Population Reconstruction From Ultra-Scale Optical Microscopy Images via Progressive Learning. IEEE Trans Med Imaging 2020; 39:4034-4046. PMID: 32746145; DOI: 10.1109/tmi.2020.3009148.
Abstract
Reconstruction of neuronal populations from ultra-scale optical microscopy (OM) images is essential to investigate neuronal circuits and brain mechanisms. Noise, low contrast, huge memory requirements, and high computational cost pose significant challenges for neuronal population reconstruction. Recently, many studies have been conducted to extract neuron signals using deep neural networks (DNNs). However, training such DNNs usually relies on a huge number of voxel-wise annotations in OM images, which are expensive in terms of both finance and labor. In this paper, we propose a novel framework for dense neuronal population reconstruction from ultra-scale images. To solve the problem of the high cost of obtaining manual annotations for training DNNs, we propose a progressive learning scheme for neuronal population reconstruction (PLNPR) that does not require any manual annotations. Our PLNPR scheme consists of a traditional neuron tracing module and a deep segmentation network that mutually complement and progressively promote each other. To reconstruct dense neuronal populations from a terabyte-sized ultra-scale image, we introduce an automatic framework that adaptively traces neurons block by block and fuses fragmented neurites in overlapped regions continuously and smoothly. We build a dataset, "VISoR-40", which consists of 40 large-scale OM image blocks from cortical regions of a mouse. Extensive experimental results on our VISoR-40 dataset and the public BigNeuron dataset demonstrate the effectiveness and superiority of our method for neuronal population reconstruction and single neuron reconstruction. Furthermore, we successfully apply our method to reconstruct dense neuronal populations from an ultra-scale mouse brain slice. The proposed adaptive block propagation and fusion strategies greatly improve the completeness of neurites in dense neuronal population reconstruction.
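The block-by-block strategy rests on tiling a huge volume into overlapping blocks whose shared margins allow fragmented neurites to be fused later. The sketch below shows only that generic tiling step; the block size, overlap, and function name `overlapping_blocks` are assumptions, and the fusion itself is not shown.

```python
import itertools

def overlapping_blocks(shape, block=512, overlap=64):
    """Return (start, stop) index ranges per axis that tile a large volume with
    overlapping blocks, so traces can later be fused in the shared margins."""
    def starts(n):
        s, out = 0, []
        while True:
            out.append(s)
            if s + block >= n:
                break
            s += block - overlap          # advance by block size minus overlap
        return out
    per_axis = [[(s, min(s + block, n)) for s in starts(n)] for n in shape]
    return list(itertools.product(*per_axis))

blocks = overlapping_blocks((2048, 2048, 1024))
print(len(blocks), blocks[0], blocks[1])    # 75 blocks for this toy volume
```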
10. Huang Q, Chen Y, Liu S, Xu C, Cao T, Xu Y, Wang X, Rao G, Li A, Zeng S, Quan T. Weakly Supervised Learning of 3D Deep Network for Neuron Reconstruction. Front Neuroanat 2020; 14:38. PMID: 32848636; PMCID: PMC7399060; DOI: 10.3389/fnana.2020.00038.
Abstract
Digital reconstruction or tracing of 3D tree-like neuronal structures from optical microscopy images is essential for understanding the functionality of neurons and revealing the connectivity of neuronal networks. Despite the existence of numerous tracing methods, reconstructing a neuron from highly noisy images remains challenging, particularly for neurites with low and inhomogeneous intensities. Conducting deep convolutional neural network (CNN)-based segmentation prior to neuron tracing facilitates an approach to solving this problem via separation of weak neurites from a noisy background. However, deep learning-based methods require large amounts of manual annotation, which is labor-intensive and limits the algorithm's generalization across datasets. In this study, we present a weakly supervised learning method for training a deep CNN for neuron reconstruction without manual annotations. Specifically, we apply a 3D residual CNN as the architecture for discriminative neuronal feature extraction. We construct the initial pseudo-labels (without manual segmentation) of the neuronal images on the basis of an existing automatic tracing method. A weakly supervised learning framework is proposed via iterative training of the CNN model for improved prediction and refinement of the pseudo-labels to update the training samples. The pseudo-labels are iteratively modified by mining and adding weak neurites from the CNN-predicted probability map on the basis of their tubularity and continuity. The proposed method was evaluated on several challenging images from the public BigNeuron and DIADEM datasets, as well as fMOST datasets. Owing to the adoption of 3D deep CNNs and weakly supervised learning, the presented method effectively detects weak neurites from noisy images and achieves results similar to those of a CNN model trained with manual annotations. Tracing performance was significantly improved by the proposed method on both small and large datasets (>100 GB). Moreover, the proposed method proved superior to several recent tracing methods on original images. The results obtained on various large-scale datasets demonstrate the generalization and high precision achieved by the proposed method for neuron reconstruction.
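The iterative weakly supervised loop, train on the current pseudo-labels, predict, then fold confident predictions back into the pseudo-labels, can be sketched compactly. In the toy code below a simple threshold-and-merge stands in for the tubularity- and continuity-based mining of weak neurites, and the tiny 3D CNN and data are placeholders rather than the paper's residual network.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a 3D image patch and an initial pseudo-label produced by some
# automatic tracer (here just a thresholded copy, purely illustrative).
image = torch.rand(1, 1, 32, 32, 32)
pseudo = (image > 0.8).float()

model = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv3d(8, 1, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for it in range(3):                      # outer weakly supervised iterations
    for _ in range(20):                  # train the CNN against current pseudo-labels
        opt.zero_grad()
        loss = bce(model(image), pseudo)
        loss.backward()
        opt.step()
    with torch.no_grad():                # refine pseudo-labels from the prediction;
        prob = torch.sigmoid(model(image))            # a real pipeline would instead mine
        pseudo = ((prob > 0.5) | (pseudo > 0.5)).float()  # weak neurites by tubularity/continuity
    print(f"iteration {it}: loss {loss.item():.3f}")
```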
Affiliation(s)
- Qing Huang: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Yijun Chen: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Shijie Liu: School of Mathematics and Physics, China University of Geosciences, Wuhan, China
- Cheng Xu: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Tingting Cao: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Yongchao Xu: School of Electronics Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Xiaojun Wang: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Gong Rao: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Anan Li: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Shaoqun Zeng: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Tingwei Quan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
11. Li S, Quan T, Zhou H, Huang Q, Guan T, Chen Y, Xu C, Kang H, Li A, Fu L, Luo Q, Gong H, Zeng S. Brain-Wide Shape Reconstruction of a Traced Neuron Using the Convex Image Segmentation Method. Neuroinformatics 2019; 18:199-218. PMID: 31396858; DOI: 10.1007/s12021-019-09434-x.
Abstract
Neuronal shape reconstruction is a helpful technique for establishing neuron identity, inferring neuronal connections, mapping neuronal circuits, and so on. Advances in optical imaging techniques have enabled data collection that includes the shape of a neuron across the whole brain, considerably extending the scope of neuronal anatomy. However, such datasets often include many fuzzy neurites and many crossover regions in which neurites are closely attached, which makes neuronal shape reconstruction more challenging. In this study, we propose a convex image segmentation model for neuronal shape reconstruction that segments a neurite into cross sections along its traced skeleton. Both the sparse nature of gradient images and the rule that fuzzy neurites usually have a small radius are utilized to improve neuronal shape reconstruction in regions with fuzzy neurites. Because the model is closely tied to the traced skeleton points, we can use this relationship to identify neurites in crossover regions. We demonstrated the performance of our model on various datasets, including those with fuzzy neurites and neurites with crossover regions, and we verified that our model can robustly reconstruct neuron shapes on a brain-wide scale.
Affiliation(s)
- Shiwei Li: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Tingwei Quan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; School of Mathematics and Economics, Hubei University of Education, Wuhan, 430205, Hubei, China
- Hang Zhou: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Qing Huang: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Tao Guan: School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Yijun Chen: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Cheng Xu: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Hongtao Kang: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Anan Li: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Ling Fu: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Qingming Luo: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Hui Gong: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Shaoqun Zeng: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
12. Li S, Quan T, Xu C, Huang Q, Kang H, Chen Y, Li A, Fu L, Luo Q, Gong H, Zeng S. Optimization of Traced Neuron Skeleton Using Lasso-Based Model. Front Neuroanat 2019; 13:18. PMID: 30846931; PMCID: PMC6393391; DOI: 10.3389/fnana.2019.00018.
Abstract
Reconstruction of neuronal morphology from images mainly involves the extraction of neuronal skeleton points. It is an indispensable step in the quantitative analysis of neurons. Due to the complex morphology of neurons, many widely used tracing methods have difficulty accurately acquiring skeleton points near branch points or in tortuous structures. Here, we propose two models to solve these problems. One is based on an L1-norm minimization model, which can better identify tortuous structures, namely, local structures with large-curvature skeleton points; the other detects an optimized branch point by considering the combination patterns of all neurites that link to this point. We combined these two models to achieve optimized skeleton detection for a neuron. We validated our models on various datasets, including MOST and BigNeuron. In addition, we demonstrate that our method can optimize traced skeletons from large-scale images. These characteristics indicate that our approach can reduce the manual editing of traced skeletons and help accelerate the accurate reconstruction of neuronal morphology.
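To give a flavor of L1-penalized skeleton optimization, the sketch below fits noisy skeleton offsets with scikit-learn's Lasso on a polynomial basis, so that a sparse set of basis terms yields a smooth curve while retaining large-curvature bends. This generic Lasso stands in for the paper's purpose-built models; the data, basis, and regularization strength are all assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy data: the lateral offset of a traced, tortuous skeleton sampled along
# its arc length, with tracing noise added.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 120)
offset = 0.2 * np.sin(6 * np.pi * t) + 0.01 * rng.standard_normal(t.size)

# L1-penalized (Lasso) fit on a polynomial basis: sparsity in the coefficients
# keeps the fitted skeleton smooth while preserving its large-curvature bends.
B = np.vander(t, 16, increasing=True)             # polynomial design matrix
model = Lasso(alpha=1e-4, max_iter=100000).fit(B, offset)
smoothed = model.predict(B)
print("non-zero basis terms:", int(np.sum(model.coef_ != 0)))
print("rms change:", round(float(np.sqrt(np.mean((smoothed - offset) ** 2))), 4))
```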
Affiliation(s)
- Shiwei Li: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Tingwei Quan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China; School of Mathematics and Economics, Hubei University of Education, Hubei, China
- Cheng Xu: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Qing Huang: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Hongtao Kang: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Yijun Chen: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Anan Li: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Ling Fu: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Qingming Luo: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Hui Gong: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Shaoqun Zeng: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China