1. Muñoz JM, Williams JT, Lebowitz JJ. Morphological and functional decline of the SNc in a model of progressive parkinsonism. NPJ Parkinsons Dis 2025; 11:24. PMID: 39875379; PMCID: PMC11775090; DOI: 10.1038/s41531-025-00873-9.
Abstract
The motor symptoms of Parkinson's disease are attributed to the degeneration of dopamine neurons in the substantia nigra pars compacta (SNc). Previous work in the MCI-Park mouse model has suggested that the loss of somatodendritic dopamine transmission predicts the development of motor deficits. In the current study, brain slices from MCI-Park mice were used to investigate dopamine signaling in the SNc prior to and through the onset of movement deficits. Electrophysiological properties were impaired by p30 and somatic volume was decreased at all time points. The D2 receptor-activated potassium current evoked by quinpirole was present initially but declined after p30. In contrast, D2-IPSCs were absent at all time points. The decrease in GPCR-mediated inhibition was met with increased spontaneous GABAA signaling. Dendro-dendritic synapses are identified as an early locus of dysfunction in response to bioenergetic decline, suggesting that dendritic release sites may contribute to the induction of degeneration.
Affiliation(s)
- Jacob M Muñoz: Vollum Institute, Oregon Health & Science University, Portland, OR, USA
- John T Williams: Vollum Institute, Oregon Health & Science University, Portland, OR, USA
- Joseph J Lebowitz: Vollum Institute, Oregon Health & Science University, Portland, OR, USA
2. Liu M, Wu S, Chen R, Lin Z, Wang Y, Meijering E. Brain Image Segmentation for Ultrascale Neuron Reconstruction via an Adaptive Dual-Task Learning Network. IEEE Trans Med Imaging 2024; 43:2574-2586. PMID: 38373129; DOI: 10.1109/tmi.2024.3367384.
Abstract
Accurate morphological reconstruction of neurons in whole brain images is critical for brain science research. However, owing to the wide extent of whole brain imaging, uneven staining, and optical system fluctuations, image properties differ significantly between regions of an ultrascale brain image, with dramatically varying voxel intensities and an inhomogeneous distribution of background noise, posing an enormous challenge to neuron reconstruction from whole brain images. In this paper, we propose an adaptive dual-task learning network (ADTL-Net) to quickly and accurately extract neuronal structures from ultrascale brain images. Specifically, this framework includes an External Features Classifier (EFC) and a Parameter Adaptive Segmentation Decoder (PASD), which share the same Multi-Scale Feature Encoder (MSFE). MSFE introduces an attention module named the Channel Space Fusion Module (CSFM) to extract structure and intensity distribution features of neurons at different scales, addressing the problem of anisotropy in 3D space. EFC then classifies these feature maps based on external features, such as foreground intensity distribution and image smoothness, and selects specific PASD parameters to decode feature maps of different classes into accurate segmentation results. PASD contains multiple sets of parameters, each trained on representative image blocks with different complex signal-to-noise distributions, to handle diverse images more robustly. Experimental results show that, compared with other advanced segmentation methods for neuron reconstruction, the proposed method achieves state-of-the-art results in the task of neuron reconstruction from ultrascale brain images, with an improvement of about 49% in speed and 12% in F1 score.
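To make the shared-encoder, dual-head design concrete, the following is a minimal PyTorch sketch of a dual-task network: one encoder feeds both an image-level classifier and a voxel-level segmentation decoder. It is an illustrative stand-in, not the published ADTL-Net; layer sizes, the number of image classes, and module internals (EFC, PASD, MSFE, CSFM) are simplified assumptions.

```python
# Minimal sketch of a dual-task network with a shared encoder (assumes PyTorch is installed).
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Stand-in for a multi-scale feature encoder: two 3D convolution stages."""
    def __init__(self, in_ch=1, feat=16):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.MaxPool3d(2), nn.Conv3d(feat, feat * 2, 3, padding=1), nn.ReLU())

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        return f1, f2

class DualTaskNet(nn.Module):
    """Shared encoder feeding (a) an image-level classifier and (b) a voxel-level decoder."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.encoder = SharedEncoder()
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, n_classes))
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
            nn.Conv3d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 1))                    # voxel-wise foreground logit

    def forward(self, x):
        f1, f2 = self.encoder(x)
        return self.classifier(f2), self.decoder(f2)

net = DualTaskNet()
block = torch.randn(1, 1, 32, 64, 64)               # one image sub-block
cls_logits, seg_logits = net(block)
print(cls_logits.shape, seg_logits.shape)            # (1, 3) and (1, 1, 32, 64, 64)
```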
3. Velasco I, Garcia-Cantero JJ, Brito JP, Bayona S, Pastor L, Mata S. NeuroEditor: a tool to edit and visualize neuronal morphologies. Front Neuroanat 2024; 18:1342762. PMID: 38425804; PMCID: PMC10902916; DOI: 10.3389/fnana.2024.1342762.
Abstract
The digital extraction of detailed neuronal morphologies from microscopy data is an essential step in the study of neurons. Ever since Cajal's work, the acquisition and analysis of neuron anatomy has yielded invaluable insight into the nervous system, which has led to our present understanding of many structural and functional aspects of the brain and the nervous system, well beyond the anatomical perspective. Obtaining detailed anatomical data, though, is not a simple task. Despite recent progress, acquiring neuron details still involves labor-intensive, error-prone methods that favor the introduction of inaccuracies and mistakes. Consequently, obtaining reliable morphological tracings usually requires post-processing steps with user intervention to ensure the accuracy of the extracted data. Within this framework, this paper presents NeuroEditor, a new software tool for visualization, editing and correction of previously reconstructed neuronal tracings. This tool has been developed specifically to alleviate the burden associated with the acquisition of detailed morphologies. NeuroEditor offers a set of algorithms that can automatically detect potential errors in tracings. The tool lets users explore an error with a simple mouse click so that it can be corrected manually or, where applicable, automatically. In some cases, the tool can also propose a set of actions to automatically correct a particular type of error. Additionally, it allows users to visualize and compare the original and modified tracings, and provides a 3D mesh that approximates the neuronal membrane. This mesh is computed and recomputed on the fly, reflecting any changes made during the tracing process. Moreover, NeuroEditor can be easily extended by users, who can program their own algorithms in Python and run them within the tool. Last, this paper includes an example showing how users can easily define a customized workflow by applying a sequence of editing operations. The edited morphology can then be stored, together with the corresponding 3D mesh that approximates the neuronal membrane.
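As an illustration of the kind of automatic error detection such a tool performs, the sketch below scans an SWC tracing for two common problems (implausibly small radii and dangling parent references). It is written against the generic SWC format and does not use NeuroEditor's actual plugin API; the thresholds are arbitrary.

```python
# Minimal sketch of an automatic error check over an SWC tracing (not NeuroEditor's API).
def load_swc(path):
    """Parse an SWC file into a dict: node id -> (type, x, y, z, radius, parent)."""
    nodes = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            i, t, x, y, z, r, p = line.split()[:7]
            nodes[int(i)] = (int(t), float(x), float(y), float(z), float(r), int(p))
    return nodes

def find_suspect_nodes(nodes, min_radius=0.05):
    """Flag common tracing errors: non-positive/tiny radii and parents that do not exist."""
    suspects = []
    for i, (t, x, y, z, r, p) in nodes.items():
        if r <= min_radius:
            suspects.append((i, "radius too small"))
        if p != -1 and p not in nodes:
            suspects.append((i, "dangling parent reference"))
    return suspects

# Example usage: nodes = load_swc("neuron.swc"); print(find_suspect_nodes(nodes))
```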
Affiliation(s)
- Ivan Velasco: Department of Computer Science, Universidad Rey Juan Carlos (URJC), Tulipan, Madrid, Spain
- Juan J. Garcia-Cantero: Department of Computer Science, Universidad Rey Juan Carlos (URJC), Tulipan, Madrid, Spain; Center for Computational Simulation, Universidad Politecnica de Madrid, Madrid, Spain
- Juan P. Brito: Center for Computational Simulation, Universidad Politecnica de Madrid, Madrid, Spain; DLSIIS, ETSIINF, Universidad Politecnica de Madrid, Madrid, Spain
- Sofia Bayona: Department of Computer Science, Universidad Rey Juan Carlos (URJC), Tulipan, Madrid, Spain; Center for Computational Simulation, Universidad Politecnica de Madrid, Madrid, Spain
- Luis Pastor: Department of Computer Science, Universidad Rey Juan Carlos (URJC), Tulipan, Madrid, Spain; Center for Computational Simulation, Universidad Politecnica de Madrid, Madrid, Spain
- Susana Mata: Department of Computer Science, Universidad Rey Juan Carlos (URJC), Tulipan, Madrid, Spain; Center for Computational Simulation, Universidad Politecnica de Madrid, Madrid, Spain
4. Gratacos G, Chakrabarti A, Ju T. Tree Recovery by Dynamic Programming. IEEE Trans Pattern Anal Mach Intell 2023; 45:15870-15882. PMID: 37505999; DOI: 10.1109/tpami.2023.3299868.
Abstract
Tree-like structures are common, naturally occurring objects that are of interest to many fields of study, such as plant science and biomedicine. Analysis of these structures is typically based on skeletons extracted from captured data, which often contain spurious cycles that need to be removed. We propose a dynamic programming algorithm for solving the NP-hard tree recovery problem formulated by Estrada et al. (2015), which seeks a least-cost partitioning of the graph nodes that yields a directed tree. Our algorithm finds the optimal solution by iteratively contracting the graph via node-merging until the problem can be trivially solved. By carefully designing the merging sequence, our algorithm can efficiently recover optimal trees for many real-world datasets where the method of Estrada et al. (2015) only produces sub-optimal solutions. We also propose an approximate variant of the dynamic programming approach using beam search, which can process graphs containing thousands of cycles with significantly improved optimality and efficiency compared with Estrada et al. (2015).
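For readers unfamiliar with the underlying goal, the snippet below shows a much simpler related baseline: turning a weighted directed graph with a spurious cycle into a least-cost spanning tree with Edmonds' minimum spanning arborescence from networkx. It is not the paper's dynamic programming algorithm (which solves a different node-partitioning formulation); the toy graph and weights are invented for illustration.

```python
# Related baseline only: recover a cheapest directed tree from a skeleton graph with a cycle.
import networkx as nx

G = nx.DiGraph()
# Toy skeleton graph: a chain root -> a -> b -> c with a branch b -> d,
# plus a spurious, expensive cycle-closing edge c -> a.
G.add_weighted_edges_from([
    ("root", "a", 1.0), ("a", "b", 1.0), ("b", "c", 1.0),
    ("c", "a", 5.0),                       # spurious cycle edge
    ("b", "d", 2.0),
])

tree = nx.minimum_spanning_arborescence(G)  # cheapest directed tree spanning all nodes
print(sorted(tree.edges(data="weight")))    # the cycle edge c -> a is discarded
```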
5. Ding L, Zhao X, Guo S, Liu Y, Liu L, Wang Y, Peng H. SNAP: a structure-based neuron morphology reconstruction automatic pruning pipeline. Front Neuroinform 2023; 17:1174049. PMID: 37388757; PMCID: PMC10303825; DOI: 10.3389/fninf.2023.1174049.
Abstract
Background: Neuron morphology analysis is an essential component of neuron cell-type definition. Morphology reconstruction represents a bottleneck in high-throughput morphology analysis workflow, and erroneous extra reconstruction owing to noise and entanglements in dense neuron regions restricts the usability of automated reconstruction results. We propose SNAP, a structure-based neuron morphology reconstruction pruning pipeline, to improve the usability of results by reducing erroneous extra reconstruction and splitting entangled neurons.
Methods: For the four different types of erroneous extra segments in reconstruction (caused by noise in the background, entanglement with dendrites of close-by neurons, entanglement with axons of other neurons, and entanglement within the same neuron), SNAP incorporates specific statistical structure information into rules for erroneous extra segment detection and achieves pruning and multiple dendrite splitting.
Results: Experimental results show that this pipeline accomplishes pruning with satisfactory precision and recall. It also demonstrates good multiple neuron-splitting performance. As an effective tool for post-processing reconstruction, SNAP can facilitate neuron morphology analysis.
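The sketch below illustrates one rule-based pruning step in this spirit: flagging terminal branches shorter than a threshold for removal from an SWC tree. The rule and threshold are illustrative assumptions, not SNAP's published detection criteria.

```python
# Illustrative structure-based pruning rule: drop very short terminal branches.
from collections import defaultdict
import math

def prune_short_tips(nodes, min_tip_length=5.0):
    """nodes: dict id -> (x, y, z, parent_id). Returns ids of nodes on short terminal branches."""
    children = defaultdict(list)
    for i, (_, _, _, p) in nodes.items():
        if p in nodes:
            children[p].append(i)
    to_delete = set()
    for i in nodes:
        if children[i]:
            continue                          # only start walks from terminal tips
        length, cur, path = 0.0, i, [i]
        while True:
            p = nodes[cur][3]
            if p not in nodes:
                break
            length += math.dist(nodes[cur][:3], nodes[p][:3])
            if len(children[p]) > 1:          # stop at (and keep) the branch point
                break
            cur = p
            path.append(cur)
        if length < min_tip_length:
            to_delete.update(path)
    return to_delete

# Example: a 3-node stub hanging off a branch point is flagged if its path length is under 5 units.
```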
Affiliation(s)
- Liya Ding: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Xuan Zhao: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Shuxia Guo: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Yufeng Liu: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lijuan Liu: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Yimin Wang: Institute for Brain and Intelligence, Southeast University, Nanjing, China; Guangdong Institute of Intelligence Science and Technology, Zhuhai, China
- Hanchuan Peng: Institute for Brain and Intelligence, Southeast University, Nanjing, China
6. Manubens-Gil L, Zhou Z, Chen H, Ramanathan A, Liu X, Liu Y, Bria A, Gillette T, Ruan Z, Yang J, Radojević M, Zhao T, Cheng L, Qu L, Liu S, Bouchard KE, Gu L, Cai W, Ji S, Roysam B, Wang CW, Yu H, Sironi A, Iascone DM, Zhou J, Bas E, Conde-Sousa E, Aguiar P, Li X, Li Y, Nanda S, Wang Y, Muresan L, Fua P, Ye B, He HY, Staiger JF, Peter M, Cox DN, Simonneau M, Oberlaender M, Jefferis G, Ito K, Gonzalez-Bellido P, Kim J, Rubel E, Cline HT, Zeng H, Nern A, Chiang AS, Yao J, Roskams J, Livesey R, Stevens J, Liu T, Dang C, Guo Y, Zhong N, Tourassi G, Hill S, Hawrylycz M, Koch C, Meijering E, Ascoli GA, Peng H. BigNeuron: a resource to benchmark and predict performance of algorithms for automated tracing of neurons in light microscopy datasets. Nat Methods 2023; 20:824-835. PMID: 37069271; DOI: 10.1038/s41592-023-01848-5.
Abstract
BigNeuron is an open community bench-testing platform with the goal of setting open standards for accurate and fast automatic neuron tracing. We gathered a diverse set of image volumes across several species that is representative of the data obtained in many neuroscience laboratories interested in neuron tracing. Here, we report generated gold standard manual annotations for a subset of the available imaging datasets and quantified tracing quality for 35 automatic tracing algorithms. The goal of generating such a hand-curated diverse dataset is to advance the development of tracing algorithms and enable generalizable benchmarking. Together with image quality features, we pooled the data in an interactive web application that enables users and developers to perform principal component analysis, t-distributed stochastic neighbor embedding, correlation and clustering, visualization of imaging and tracing data, and benchmarking of automatic tracing algorithms in user-defined data subsets. The image quality metrics explain most of the variance in the data, followed by neuromorphological features related to neuron size. We observed that diverse algorithms can provide complementary information to obtain accurate results and developed a method to iteratively combine methods and generate consensus reconstructions. The consensus trees obtained provide estimates of the neuron structure ground truth that typically outperform single algorithms in noisy datasets. However, specific algorithms may outperform the consensus tree strategy in specific imaging conditions. Finally, to aid users in predicting the most accurate automatic tracing results without manual annotations for comparison, we used support vector machine regression to predict reconstruction quality given an image volume and a set of automatic tracings.
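The final step above (predicting reconstruction quality from features) can be illustrated with a small support vector regression sketch using scikit-learn. The feature names and synthetic data are placeholders, not the BigNeuron feature set or results.

```python
# Hedged sketch: regress a tracing-quality score from image/tracing features with SVR.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Pretend features per volume: [snr, foreground_fraction, mean_intensity, total_neurite_length]
X = rng.random((200, 4))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.random(200)   # synthetic "quality" score

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.05))
model.fit(X[:150], y[:150])                                  # train on 150 volumes
print("predicted quality of held-out volumes:", model.predict(X[150:155]).round(3))
```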
Affiliation(s)
- Linus Manubens-Gil: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Zhi Zhou: Microsoft Corporation, Redmond, WA, USA
- Arvind Ramanathan: Computing, Environment and Life Sciences Directorate, Argonne National Laboratory, Lemont, IL, USA
- Yufeng Liu: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Todd Gillette: Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Zongcai Ruan: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Jian Yang: Faculty of Information Technology, Beijing University of Technology, Beijing, China; Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Ting Zhao: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Li Cheng: Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, Canada
- Lei Qu: Institute for Brain and Intelligence, Southeast University, Nanjing, China; Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Anhui University, Hefei, China
- Kristofer E Bouchard: Scientific Data Division and Biological Systems and Engineering Division, Lawrence Berkeley National Lab, Berkeley, CA, USA; Helen Wills Neuroscience Institute and Redwood Center for Theoretical Neuroscience, UC Berkeley, Berkeley, CA, USA
- Lin Gu: RIKEN AIP, Tokyo, Japan; Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, Tokyo, Japan
- Weidong Cai: School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
- Shuiwang Ji: Texas A&M University, College Station, TX, USA
- Badrinath Roysam: Cullen College of Engineering, University of Houston, Houston, TX, USA
- Ching-Wei Wang: Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Hongchuan Yu: National Centre for Computer Animation, Bournemouth University, Poole, UK
- Daniel Maxim Iascone: Department of Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Jie Zhou: Department of Computer Science, Northern Illinois University, DeKalb, IL, USA
- Eduardo Conde-Sousa: i3S, Instituto de Investigação E Inovação Em Saúde, Universidade Do Porto, Porto, Portugal; INEB, Instituto de Engenharia Biomédica, Universidade Do Porto, Porto, Portugal
- Paulo Aguiar: i3S, Instituto de Investigação E Inovação Em Saúde, Universidade Do Porto, Porto, Portugal
- Xiang Li: Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Yujie Li: Allen Institute for Brain Science, Seattle, WA, USA; Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Sumit Nanda: Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Yuan Wang: Program in Neuroscience, Department of Biomedical Sciences, Florida State University College of Medicine, Tallahassee, FL, USA
- Leila Muresan: Cambridge Advanced Imaging Centre, University of Cambridge, Cambridge, UK
- Pascal Fua: Computer Vision Laboratory, EPFL, Lausanne, Switzerland
- Bing Ye: Life Sciences Institute and Department of Cell and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- Hai-Yan He: Department of Biology, Georgetown University, Washington, DC, USA
- Jochen F Staiger: Institute for Neuroanatomy, University Medical Center Göttingen, Georg-August-University Göttingen, Goettingen, Germany
- Manuel Peter: Department of Stem Cell and Regenerative Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Daniel N Cox: Neuroscience Institute, Georgia State University, Atlanta, GA, USA
- Michel Simonneau: ENS Paris-Saclay, CNRS, CentraleSupélec, LuMIn, Université Paris-Saclay, Gif-sur-Yvette, France
- Marcel Oberlaender: Max Planck Group: In Silico Brain Sciences, Max Planck Institute for Neurobiology of Behavior - caesar, Bonn, Germany
- Gregory Jefferis: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK; Department of Zoology, University of Cambridge, Cambridge, UK
- Kei Ito: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Institute for Quantitative Biosciences, University of Tokyo, Tokyo, Japan; Institute of Zoology, Biocenter Cologne, University of Cologne, Cologne, Germany
- Jinhyun Kim: Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea
- Edwin Rubel: Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA, USA
- Hongkui Zeng: Allen Institute for Brain Science, Seattle, WA, USA
- Aljoscha Nern: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Ann-Shyn Chiang: Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan
- Jane Roskams: Allen Institute for Brain Science, Seattle, WA, USA; Department of Zoology, Life Sciences Institute, University of British Columbia, Vancouver, British Columbia, Canada
- Rick Livesey: Zayed Centre for Rare Disease Research, UCL Great Ormond Street Institute of Child Health, London, UK
- Janine Stevens: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Tianming Liu: Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Chinh Dang: Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA, USA
- Yike Guo: Data Science Institute, Imperial College London, London, UK
- Ning Zhong: Faculty of Information Technology, Beijing University of Technology, Beijing, China; Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China; Department of Life Science and Informatics, Maebashi Institute of Technology, Maebashi, Japan
- Sean Hill: Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, Toronto, Ontario, Canada; Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada; Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Erik Meijering: School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Giorgio A Ascoli: Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Hanchuan Peng: Institute for Brain and Intelligence, Southeast University, Nanjing, China
7. Wei X, Liu Q, Liu M, Wang Y, Meijering E. 3D Soma Detection in Large-Scale Whole Brain Images via a Two-Stage Neural Network. IEEE Trans Med Imaging 2023; 42:148-157. PMID: 36103445; DOI: 10.1109/tmi.2022.3206605.
Abstract
3D soma detection in whole brain images is a critical step for neuron reconstruction. However, existing soma detection methods are not suitable for whole mouse brain images with large amounts of data and complex structure. In this paper, we propose a two-stage deep neural network to achieve fast and accurate soma detection in large-scale, high-resolution whole mouse brain images (more than 1 TB). For the first stage, a lightweight Multi-level Cross Classification Network (MCC-Net) is proposed to filter out images without somas and generate coarse candidate images by combining the feature extraction abilities of multiple convolution layers. It speeds up soma detection and reduces computational complexity. For the second stage, to further obtain the accurate locations of somas in the whole mouse brain images, the Scale Fusion Segmentation Network (SFS-Net) is developed to segment soma regions from candidate images. Specifically, the SFS-Net captures multi-scale context information and establishes a complementary relationship between encoder and decoder by combining the encoder-decoder structure with a 3D Scale-Aware Pyramid Fusion (SAPF) module for better segmentation performance. The experimental results on three whole mouse brain images verify that the proposed method achieves excellent performance and provides beneficial information for neuron reconstruction. Additionally, we have established a public dataset named WBMSD, including 798 high-resolution and representative images (256 × 256 × 256 voxels) from three whole mouse brain images, dedicated to research on soma detection, which will be released along with this paper.
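A hedged sketch of the two-stage idea follows: a cheap screening test discards blocks that cannot contain a soma, and only surviving candidates are passed to the (more expensive) segmentation stage. The screening heuristic and block size are illustrative; the trained MCC-Net and SFS-Net models are represented only by placeholders.

```python
# Two-stage filtering sketch: screen tiles first, segment only the candidates.
import numpy as np

def screen_block(block, min_bright_voxels=50, intensity_thresh=0.5):
    """Stage 1 stand-in: keep only blocks with enough bright voxels to plausibly hold a soma."""
    return np.count_nonzero(block > intensity_thresh) >= min_bright_voxels

def detect_somas(volume, block=64, segmenter=None):
    """Tile the volume, screen each tile, and run the segmenter only on candidates."""
    candidates, masks = [], {}
    z, y, x = volume.shape
    for zi in range(0, z, block):
        for yi in range(0, y, block):
            for xi in range(0, x, block):
                tile = volume[zi:zi + block, yi:yi + block, xi:xi + block]
                if screen_block(tile):
                    candidates.append((zi, yi, xi))
                    if segmenter is not None:
                        masks[(zi, yi, xi)] = segmenter(tile)   # stage 2 on candidates only
    return candidates, masks

vol = np.random.rand(128, 128, 128).astype(np.float32)
cands, _ = detect_somas(vol)
print(f"{len(cands)} candidate blocks passed the first stage")
```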
8. Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022; 38:5329-5339. PMID: 36303315; PMCID: PMC9750132; DOI: 10.1093/bioinformatics/btac712.
Abstract
Motivation: Large-scale neuronal morphologies are essential to neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology. Despite previous survey papers about neuron tracing from light microscopy data in the last decade, thanks to the rapid development of the field, there is a need to update recent progress in a review focusing on new methods and remarkable applications.
Results: This review outlines neuron tracing in various scenarios with the goal to help the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances of the increasingly popular deep-learning enhanced methods. We highlight the semi-automatic methods for single neuron tracing of mammalian whole brains as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we exemplify the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang: School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli: Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou: Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lijuan Liu: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
9. Liu C, Wang D, Zhang H, Wu W, Sun W, Zhao T, Zheng N. Using Simulated Training Data of Voxel-Level Generative Models to Improve 3D Neuron Reconstruction. IEEE Trans Med Imaging 2022; 41:3624-3635. PMID: 35834465; DOI: 10.1109/tmi.2022.3191011.
Abstract
Reconstructing neuron morphologies from fluorescence microscope images plays a critical role in neuroscience studies. It relies on image segmentation to produce initial masks, either for further processing or as final results representing neuronal morphologies. This has been a challenging step due to the variation and complexity of noisy intensity patterns in neuron images acquired from microscopes. While progress in deep learning has brought the goal of accurate segmentation much closer to reality, creating training data for producing powerful neural networks is often laborious. To overcome the difficulty of obtaining a vast number of annotated data, we propose a novel strategy of using two-stage generative models to simulate training data with voxel-level labels. Trained upon unlabeled data by optimizing a novel objective function that preserves predefined labels, the models are able to synthesize realistic 3D images with underlying voxel labels. We show that these synthetic images can train segmentation networks to obtain even better performance than manually labeled data. To demonstrate an immediate impact of our work, we further show that segmentation results produced by networks trained upon synthetic data can be used to improve existing neuron reconstruction methods.
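To show what a simulated (image, voxel-label) training pair looks like, the sketch below draws a synthetic neurite with a known label and degrades it into an image with blur and noise. This hand-crafted simulator only illustrates the concept; the paper's learned two-stage generative models are not reproduced here.

```python
# Hand-crafted stand-in for a training-pair simulator: known voxel label -> degraded image.
import numpy as np
from scipy.ndimage import gaussian_filter

def synth_pair(shape=(64, 64, 64), steps=300, seed=0):
    rng = np.random.default_rng(seed)
    label = np.zeros(shape, dtype=np.uint8)
    pos = np.array(shape) / 2.0
    for _ in range(steps):                      # a random walk traces a tortuous "neurite"
        pos += rng.normal(0, 1.0, size=3)
        pos = np.clip(pos, 0, np.array(shape) - 1)
        z, y, x = pos.astype(int)
        label[z, y, x] = 1
    image = gaussian_filter(label.astype(np.float32), sigma=1.2)   # blur mimics the PSF
    image += rng.normal(0, 0.05, size=shape)                       # background noise
    return image, label

img, lab = synth_pair()
print(img.shape, lab.sum(), "labeled voxels")
```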
10. Zhou H, Cao T, Liu T, Liu S, Chen L, Chen Y, Huang Q, Ye W, Zeng S, Quan T. Super-resolution Segmentation Network for Reconstruction of Packed Neurites. Neuroinformatics 2022; 20:1155-1167. PMID: 35851944; DOI: 10.1007/s12021-022-09594-3.
Abstract
Neuron reconstruction can provide the quantitative data required for measuring neuronal morphology and is crucial in brain research. However, the difficulty of reconstructing dense neurites, which in most cases requires massive manual labor for accurate reconstruction, has not been well resolved. In this work, we provide a new pathway for solving this challenge by proposing the super-resolution segmentation network (SRSNet), which builds a mapping from the neurites in the original neuronal images to their segmentation in a higher-resolution (HR) space. During the segmentation process, the distances between the boundaries of packed neurites are enlarged, and only the central parts of the neurites are segmented. Owing to this strategy, super-resolution segmented images are produced for subsequent reconstruction. We carried out experiments on neuronal images with a voxel size of 0.2 μm × 0.2 μm × 1 μm produced by fMOST. SRSNet achieves an average F1 score of 0.88 for automatic reconstruction of packed neurites, taking both precision and recall into account, whereas the average F1 scores of other state-of-the-art automatic tracing methods are below 0.70.
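Since the results above are reported as F1 scores, the short helper below makes explicit how F1 combines precision and recall (the harmonic mean); the numbers are illustrative, not the paper's raw counts.

```python
# F1 is the harmonic mean of precision and recall.
def f1_score(precision: float, recall: float) -> float:
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.90, 0.86))   # ~0.88: both precision and recall must be high
print(f1_score(0.95, 0.55))   # ~0.70: high precision alone is not enough
```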
Affiliation(s)
- Hang Zhou: School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, China
- Tingting Cao, Tian Liu, Shijie Liu, Lu Chen, Yijun Chen, Qing Huang: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Wei Ye: School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
- Shaoqun Zeng, Tingwei Quan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
11. Guo S, Xue J, Liu J, Ye X, Guo Y, Liu D, Zhao X, Xiong F, Han X, Peng H. Smart imaging to empower brain-wide neuroscience at single-cell levels. Brain Inform 2022; 9:10. PMID: 35543774; PMCID: PMC9095808; DOI: 10.1186/s40708-022-00158-4.
Abstract
A deep understanding of neuronal connectivity and networks, with detailed cell typing across brain regions, is necessary to unravel the mechanisms behind emotional and memory functions and to find treatments for brain impairment. Brain-wide imaging with single-cell resolution provides unique advantages for accessing the morphological features of a neuron and investigating the connectivity of neuron networks, which has led to exciting discoveries over the past years based on animal models, such as rodents. Nonetheless, high-throughput systems are in urgent demand to support studies of neural morphologies at larger scale and in more detail, as well as to enable research on non-human primate (NHP) and human brains. The advances in artificial intelligence (AI) and computational resources bring great opportunities for 'smart' imaging systems, i.e., to automate, speed up, optimize and upgrade imaging systems with AI and computational strategies. In this light, we review the important computational techniques that can support smart systems for brain-wide imaging at single-cell resolution.
Affiliation(s)
- Shuxia Guo, Jie Xue, Jian Liu, Xiangqiao Ye, Yichen Guo, Di Liu, Xuan Zhao, Feng Xiong, Xiaofeng Han, Hanchuan Peng: Institute for Brain and Intelligence, Southeast University, Nanjing, 210096, Jiangsu, China
12. Wang X, Liu M, Wang Y, Fan J, Meijering E. A 3D Tubular Flux Model for Centerline Extraction in Neuron Volumetric Images. IEEE Trans Med Imaging 2022; 41:1069-1079. PMID: 34826295; DOI: 10.1109/tmi.2021.3130987.
Abstract
Digital morphology reconstruction from neuron volumetric images is essential for computational neuroscience. The centerline of the axonal and dendritic tree provides an effective shape representation and serves as a basis for further neuron reconstruction. However, it is still a challenge to directly extract an accurate centerline from complex neuron structures in images of poor quality. In this paper, we propose a neuron centerline extraction method based on a 3D tubular flux model via a two-stage CNN framework. In the first stage, a 3D CNN is used to learn latent neuron structure features, namely flux features, from neuron images. In the second stage, a lightweight U-Net takes the learned flux features as input to extract the centerline, with a spatially weighted average strategy to constrain the multi-voxel-wide response. Specifically, the labels of the flux features in the first stage are generated by the 3D tubular model, which calculates the geometric representation of the flux between each voxel in the tubular region and the nearest point on the centerline ground truth. Compared with features self-learned by networks, flux features, as a kind of prior knowledge, explicitly exploit the contextual distance and direction distribution information around the centerline, which benefits precise centerline extraction. Experiments on two challenging datasets demonstrate that the proposed method outperforms other state-of-the-art methods by up to 18% and 35.1% in F1-measure and average distance scores, respectively, and that the extracted centerline helps improve neuron reconstruction performance.
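The kind of geometric supervision described above can be sketched as follows: a Euclidean distance transform of the centerline ground truth yields, for every voxel in a tubular region, the distance and offset vector to its nearest centerline point. This mirrors the idea of flux-like labels but is not the authors' exact formulation.

```python
# Derive distance/offset labels from a toy centerline with a Euclidean distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt

centerline = np.zeros((32, 64, 64), dtype=bool)
centerline[16, 32, 10:55] = True                      # a straight toy centerline

# EDT of the centerline's complement gives, for each voxel, the distance to the nearest
# centerline voxel and that voxel's coordinates.
dist, nearest = distance_transform_edt(~centerline, return_indices=True)
zz, yy, xx = np.indices(centerline.shape)
offset = np.stack([nearest[0] - zz, nearest[1] - yy, nearest[2] - xx])  # 3-vector per voxel

tube = dist <= 3.0                                    # restrict labels to a tubular region
print("voxels in tubular region:", int(tube.sum()),
      "max offset magnitude:", int(np.abs(offset[:, tube]).max()))
```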
13. Yang B, Liu M, Wang Y, Zhang K, Meijering E. Structure-Guided Segmentation for 3D Neuron Reconstruction. IEEE Trans Med Imaging 2022; 41:903-914. PMID: 34748483; DOI: 10.1109/tmi.2021.3125777.
Abstract
Digital reconstruction of neuronal morphologies in 3D microscopy images is critical in the field of neuroscience. However, most existing automatic tracing algorithms cannot obtain accurate neuron reconstructions when processing 3D neuron images contaminated by strong background noise or containing weak filament signals. In this paper, we present a 3D neuron segmentation network named the Structure-Guided Segmentation Network (SGSNet) to enhance weak neuronal structures and remove background noise. The network contains a shared encoding path but utilizes two decoding paths, called the Main Segmentation Branch (MSB) and the Structure-Detection Branch (SDB). MSB is trained on binary labels to produce 3D neuron image segmentation maps. However, segmentation results on challenging datasets often contain structural errors, such as discontinuous segments of weak-signal neuronal structures and missing filaments due to low signal-to-noise ratio (SNR). Therefore, SDB is introduced to detect neuronal structures by regressing neuron distance transform maps. Furthermore, a Structure Attention Module (SAM) is designed to integrate the multi-scale feature maps of the two decoding paths and to provide contextual guidance of structural features from SDB to MSB to improve the final segmentation performance. In the experiments, we evaluate our model on two challenging 3D neuron image datasets, the BigNeuron dataset and the Extended Whole Mouse Brain Sub-image (EWMBS) dataset. When different tracing methods are run on images segmented by our method rather than by other state-of-the-art segmentation methods, the distance scores improve by 42.48% and 35.83% on the BigNeuron dataset and by 37.75% and 23.13% on the EWMBS dataset.
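A small sketch of how the two branches can be supervised from a single annotation: the segmentation branch uses the binary mask directly, while the structure branch regresses a truncated, normalized distance-transform map derived from the same mask. The truncation radius and normalization are illustrative assumptions, not the paper's exact targets.

```python
# Build (segmentation, structure-regression) targets from one binary annotation.
import numpy as np
from scipy.ndimage import distance_transform_edt

def make_targets(binary_label: np.ndarray, radius: float = 5.0):
    """Return (segmentation target, structure-regression target) for one 3D label volume."""
    seg_target = binary_label.astype(np.float32)
    # Distance from every voxel to the nearest labeled (neurite) voxel.
    dist = distance_transform_edt(binary_label == 0)
    struct_target = np.clip(1.0 - dist / radius, 0.0, 1.0)   # 1 on the neurite, fading to 0
    return seg_target, struct_target

label = np.zeros((16, 32, 32), dtype=np.uint8)
label[8, 16, 4:28] = 1                                        # a toy neurite
seg, struct = make_targets(label)
print(int(seg.sum()), float(struct.max()), round(float(struct.mean()), 4))
```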
14. Huang Q, Cao T, Zeng S, Li A, Quan T. Minimizing probability graph connectivity cost for discontinuous filamentary structures tracing in neuron image. IEEE J Biomed Health Inform 2022; 26:3092-3103. PMID: 35104232; DOI: 10.1109/jbhi.2022.3147512.
Abstract
Neuron tracing from optical images is critical in understanding brain function in disease. A key problem is to trace discontinuous filamentary structures from noisy backgrounds, which is commonly encountered in neuronal and some medical images. Broken traces lead to cumulative topological errors, and current methods struggle to assemble various fragmentary traces into correct connections. In this paper, we propose a graph-connectivity-based theoretical method for precise filamentary structure tracing in neuron images. First, we build the initial subgraphs of signals via a region-to-region tracing method on a CNN-predicted probability map. The CNN removes noise interference, but its prediction for some elongated fragments is still incomplete. Second, we reformulate the global connection problem of individual or fragmented subgraphs, under heuristic graph restrictions, as a dynamic linear programming problem that minimizes graph connectivity cost, where the connection costs between breakpoints are calculated from their probability strength via minimum cost paths. Experimental results on challenging neuronal images show that the proposed method outperforms existing methods and achieves results similar to manual tracing, even in some complex discontinuous cases. Performance on vessel images indicates the potential of the method for tracing other tubular objects.
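One ingredient described above, bridging a gap between breakpoints through high-probability voxels, can be sketched with a generic minimum-cost-path solver: route through a cost array of (1 - probability) so that confident foreground is cheap to traverse. The example uses scikit-image's route_through_array on a toy 2D map; the paper's full linear-programming assembly of subgraphs is not reproduced.

```python
# Bridge a faint gap between two breakpoints with a minimum cost path over (1 - probability).
import numpy as np
from skimage.graph import route_through_array

prob = np.full((64, 64), 0.05)            # mostly background
prob[32, 10:54] = 0.9                     # a bright filament...
prob[32, 30:34] = 0.1                     # ...with a faint gap in the middle

cost = 1.0 - prob + 1e-6                  # high probability -> low traversal cost
path, total_cost = route_through_array(cost, (32, 10), (32, 53),
                                       fully_connected=True, geometric=True)
print(len(path), "pixels on the bridging path, cost =", round(total_cost, 3))
```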
15. Zhang H, Liu C, Yu Y, Dai J, Zhao T, Zheng N. PyNeval: A Python Toolbox for Evaluating Neuron Reconstruction Performance. Front Neuroinform 2022; 15:767936. PMID: 35153709; PMCID: PMC8831325; DOI: 10.3389/fninf.2021.767936.
Abstract
Quality assessment of the tree-like structures obtained from a neuron reconstruction algorithm is necessary for evaluating the performance of the algorithm. The lack of user-friendly software for calculating common metrics motivated us to develop a Python toolbox called PyNeval, which, to our knowledge, is the first open-source toolbox designed for convenient evaluation of reconstruction results. The toolbox supports popular metrics in two major categories, geometrical metrics and topological metrics, with an easy way to configure custom parameters for each metric. We tested the toolbox on both synthetic and real data to show its reliability and robustness. As a demonstration of the toolbox in real applications, we used it to successfully improve the performance of a tracing algorithm by integrating it into an optimization procedure.
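As an illustration of what a simple geometric metric measures, the sketch below computes the average distance from each node of a test reconstruction to its nearest gold-standard node using a k-d tree. It is a from-scratch illustration, not PyNeval's API or its exact metric definitions.

```python
# Generic geometric comparison: mean nearest-node distance between two reconstructions.
import numpy as np
from scipy.spatial import cKDTree

def mean_nearest_node_distance(test_xyz: np.ndarray, gold_xyz: np.ndarray) -> float:
    """test_xyz, gold_xyz: (N, 3) arrays of node coordinates taken from SWC files."""
    dists, _ = cKDTree(gold_xyz).query(test_xyz)
    return float(dists.mean())

gold = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], dtype=float)
test = gold + np.array([0.0, 0.3, 0.0])        # a tracing offset by 0.3 units in y
print(mean_nearest_node_distance(test, gold))  # 0.3
```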
Affiliation(s)
- Han Zhang: Qiushi Academy for Advanced Studies (QAAS), Zhejiang University, Hangzhou, China; College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Chao Liu: Qiushi Academy for Advanced Studies (QAAS), Zhejiang University, Hangzhou, China; College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Jianhua Dai: Collaborative Innovation Center for Artificial Intelligence by MOE and Zhejiang Provincial Government (ZJU), Hangzhou, China
- Ting Zhao: Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, VA, United States
- Nenggan Zheng: Qiushi Academy for Advanced Studies (QAAS), Zhejiang University, Hangzhou, China; Zhejiang Lab, Hangzhou, China; Collaborative Innovation Center for Artificial Intelligence by MOE and Zhejiang Provincial Government (ZJU), Hangzhou, China
16. Huang Q, Cao T, Chen Y, Li A, Zeng S, Quan T. Automated Neuron Tracing Using Content-Aware Adaptive Voxel Scooping on CNN Predicted Probability Map. Front Neuroanat 2021; 15:712842. PMID: 34497493; PMCID: PMC8419427; DOI: 10.3389/fnana.2021.712842.
Abstract
Neuron tracing, as the essential step for neural circuit building and brain information flow analysis, plays an important role in understanding brain organization and function. Although many methods have been proposed, automatic and accurate neuron tracing from optical images remains challenging. Current methods often have trouble tracing complex tree-like distorted structures and broken parts of neurites against a noisy background. To address these issues, we propose a method for accurate neuron tracing using content-aware adaptive voxel scooping on a convolutional neural network (CNN) predicted probability map. First, a 3D residual CNN is applied as preprocessing to predict the object probability and suppress high noise. Then, instead of tracing on the binary image produced by maximum classification, an adaptive voxel scooping method is presented for successive neurite tracing on the probability map, based on the internal content properties (distance, connectivity, and probability continuity along direction) of the neurite. Last, the neuron tree graph is built using the length-first criterion. The proposed method was evaluated on the public BigNeuron datasets and fluorescence micro-optical sectioning tomography (fMOST) datasets and outperformed current state-of-the-art methods on images with neurites that had broken parts and complex structures. The high tracing accuracy demonstrates the potential of the proposed method for neuron tracing at large scale.
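The core idea of growing a trace over a probability map can be sketched as a plain breadth-first flood fill from a seed voxel, restricted to voxels whose predicted probability exceeds a threshold. Real voxel scooping additionally organizes the growth into layers and adapts to local content; the version below is deliberately the simplest form of the idea.

```python
# Simplest possible growth over a probability map: BFS flood fill from a seed voxel.
from collections import deque
import numpy as np

def grow_from_seed(prob: np.ndarray, seed, thresh=0.5):
    """Return the set of voxel coordinates reachable from `seed` through prob > thresh."""
    visited = {seed}
    queue = deque([seed])
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dz, dy, dx) != (0, 0, 0)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if n in visited:
                continue
            if all(0 <= n[i] < prob.shape[i] for i in range(3)) and prob[n] > thresh:
                visited.add(n)
                queue.append(n)
    return visited

prob = np.zeros((16, 32, 32))
prob[8, 16, 2:30] = 0.9                                   # one bright toy neurite
print(len(grow_from_seed(prob, (8, 16, 2))), "voxels traced")
```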
Affiliation(s)
- Qing Huang, Tingting Cao, Yijun Chen, Anan Li, Shaoqun Zeng, Tingwei Quan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
17. Zhang Y, Liu M, Yu F, Zeng T, Wang Y. An O-shape Neural Network With Attention Modules to Detect Junctions in Biomedical Images Without Segmentation. IEEE J Biomed Health Inform 2021; 26:774-785. PMID: 34197332; DOI: 10.1109/jbhi.2021.3094187.
Abstract
Junctions play an important role in biomedical research such as retinal biometric identification, retinal image registration, eye-related disease diagnosis and neuron reconstruction. However, junction detection in original biomedical images is extremely challenging. For example, retinal images contain many tiny blood vessels with complicated structures and low contrast, which makes it challenging to detect junctions. In this paper, we propose an O-shape Network architecture with Attention modules (Attention O-Net), which includes a Junction Detection Branch (JDB) and a Local Enhancement Branch (LEB), to detect junctions in biomedical images without segmentation. In JDB, a heatmap indicating the probabilities of junctions is estimated, and the positions with the locally highest values are chosen as junctions; however, this is difficult when the images contain weak filament signals. Therefore, LEB is constructed to enhance the thin-branch foreground and make the network pay more attention to regions with low contrast, which helps alleviate the foreground imbalance between thin and thick branches and detect junctions on thin branches. Furthermore, attention modules are utilized to introduce the feature maps from LEB into JDB, establishing a complementary relationship and further integrating local features and contextual information between the two branches. The proposed method achieves the highest average F1-scores of 0.82, 0.73 and 0.94 in two retinal datasets and one neuron dataset, respectively. The experimental results confirm that Attention O-Net outperforms other state-of-the-art detection methods and is helpful for retinal biometric identification.
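The heatmap post-processing step described above (keeping positions with locally highest values) can be sketched with a maximum filter: a position is reported as a junction if it is both above a confidence threshold and equal to the local maximum of its neighborhood. Window size and threshold below are illustrative.

```python
# Pick junction candidates as thresholded local maxima of a predicted heatmap.
import numpy as np
from scipy.ndimage import maximum_filter

def heatmap_peaks(heatmap: np.ndarray, thresh=0.5, size=5):
    local_max = maximum_filter(heatmap, size=size)
    peaks = (heatmap == local_max) & (heatmap > thresh)
    return np.argwhere(peaks)

hm = np.zeros((64, 64))
hm[20, 20] = 0.9                      # a strong junction response
hm[40, 45] = 0.7                      # a weaker one
hm += 0.01 * np.random.rand(64, 64)   # low-level noise stays below the threshold
print(heatmap_peaks(hm))              # approximately [[20, 20], [40, 45]]
```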
18. Zhou H, Li S, Li A, Huang Q, Xiong F, Li N, Han J, Kang H, Chen Y, Li Y, Lin H, Zhang YH, Lv X, Liu X, Gong H, Luo Q, Zeng S, Quan T. GTree: an Open-source Tool for Dense Reconstruction of Brain-wide Neuronal Population. Neuroinformatics 2021; 19:305-317. PMID: 32844332; DOI: 10.1007/s12021-020-09484-6.
Abstract
Recent technological advancements have facilitated the imaging of specific neuronal populations at the single-axon level across the mouse brain. However, the digital reconstruction of neurons from a large dataset requires months of manual effort using currently available software. In this study, we developed an open-source software tool called GTree (global tree reconstruction system) to overcome this problem. GTree offers an error-screening system for the fast localization of submicron errors in densely packed neurites and along long projections across the whole brain, thus achieving reconstruction close to the ground truth. Moreover, GTree integrates a series of our previous algorithms to significantly reduce manual interference and achieve a high level of automation. When applied to an entire mouse brain dataset, GTree is shown to be five times faster than widely used commercial software. Finally, using GTree, we demonstrate the reconstruction of 35 long-projection neurons around one injection site of a mouse brain. GTree is also applicable to large datasets (10 TB or higher) from various light microscopes.
Affiliation(s)
- Hang Zhou, Shiwei Li, Anan Li, Qing Huang, Feng Xiong, Ning Li, Jiacheng Han, Hongtao Kang, Yijun Chen, Yun Li, Huimin Lin, Yu-Hui Zhang, Xiaohua Lv, Xiuli Liu, Hui Gong, Qingming Luo, Shaoqun Zeng: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Tingwei Quan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; School of Mathematics and Economics, Hubei University of Education, Wuhan, 430205, Hubei, China
Collapse
|
19
|
Shen L, Liu M, Wang C, Guo C, Meijering E, Wang Y. Efficient 3D Junction Detection in Biomedical Images Based on a Circular Sampling Model and Reverse Mapping. IEEE J Biomed Health Inform 2021; 25:1612-1623. [PMID: 33166258 DOI: 10.1109/jbhi.2020.3036743] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Detection and localization of terminations and junctions are a key step in the morphological reconstruction of tree-like structures in images. Previously, a ray-shooting model was proposed to detect termination points automatically. In this paper, we propose an automatic method for 3D junction point detection in biomedical images, relying on a circular sampling model and a 2D-to-3D reverse mapping approach. First, the existing ray-shooting model is improved to a circular sampling model to extract the pixel intensity distribution feature across the potential branches around the point of interest. The computation cost can be reduced dramatically compared to the existing ray-shooting model. Then, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is employed to detect 2D junction points in maximum intensity projections (MIPs) of sub-volume images in a given 3D image, by determining the number of branches in the candidate junction region. Further, a 2D-to-3D reverse mapping approach is used to map these detected 2D junction points in MIPs to the 3D junction points in the original 3D images. The proposed 3D junction point detection method is implemented as a built-in tool in the Vaa3D platform. Experiments on multiple 2D and 3D images show average precision and recall rates of 87.11% and 88.33%, respectively. In addition, the proposed algorithm is dozens of times faster than the existing deep-learning-based model. The proposed method achieves excellent performance in both detection precision and computational efficiency for junction detection, even in large-scale biomedical images.
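Since the clustering step above may be unfamiliar, here is a minimal sketch of grouping bright maximum-intensity-projection pixels into candidate junction regions with DBSCAN. It is not the authors' Vaa3D tool; the toy volume, intensity threshold and DBSCAN parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def candidate_junction_regions(volume, intensity_thresh=0.5, eps=3.0, min_samples=10):
    """Cluster bright MIP pixels into candidate 2D junction regions (illustrative sketch)."""
    mip = volume.max(axis=0)                      # maximum intensity projection along z
    ys, xs = np.nonzero(mip > intensity_thresh)   # candidate foreground pixels
    points = np.column_stack([ys, xs])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    centers = [points[labels == k].mean(axis=0) for k in set(labels) if k != -1]
    return np.array(centers)                      # one (y, x) centre per candidate region

# toy example: a noisy volume with one bright blob
rng = np.random.default_rng(0)
vol = rng.random((16, 64, 64)) * 0.3
vol[:, 30:34, 30:34] = 1.0
print(candidate_junction_regions(vol))
```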
Collapse
|
20
|
Yang B, Chen W, Luo H, Tan Y, Liu M, Wang Y. Neuron Image Segmentation via Learning Deep Features and Enhancing Weak Neuronal Structures. IEEE J Biomed Health Inform 2021; 25:1634-1645. [PMID: 32809948 DOI: 10.1109/jbhi.2020.3017540] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Neuron morphology reconstruction (tracing) in 3D volumetric images is critical for neuronal research. However, most existing neuron tracing methods are not applicable to challenging datasets where the neuron images are contaminated by noise or contain weak filament signals. In this paper, we present a two-stage 3D neuron segmentation approach via learning deep features and enhancing weak neuronal structures, to reduce the impact of image noise in the data and enhance the weak-signal neuronal structures. In the first stage, we train a voxel-wise multi-level fully convolutional network (FCN), which specializes in learning deep features, to obtain the 3D neuron image segmentation maps in an end-to-end manner. In the second stage, a ray-shooting model is employed to detect the discontinued segments in the first-stage segmentation results; the local neuron diameter at the broken point is estimated and the direction of the filamentary fragment is detected by the rayburst sampling algorithm. Then, a Hessian-repair model is built to repair the broken structures, by enhancing weak neuronal structures in a fibrous region determined by the estimated local neuron diameter and the filamentary fragment direction. Experimental results demonstrate that our proposed segmentation approach achieves better segmentation performance than other state-of-the-art methods for 3D neuron segmentation. Compared with the neuron reconstruction results on the segmented images produced by other segmentation methods, the proposed approach gains 47.83% and 34.83% improvement in the average distance scores. The average precision and recall rates of branch point detection with our proposed method are 38.74% and 22.53% higher than the detection results without segmentation.
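As a rough, generic stand-in for the Hessian-based repair idea, the sketch below enhances weak tubular structures in a 3D stack with scikit-image's multiscale Frangi vesselness filter; it is not the two-stage FCN pipeline described above, and the sigma range is an assumed parameter.

```python
import numpy as np
from skimage.filters import frangi

def enhance_weak_neurites(volume, sigmas=(1, 2, 3)):
    """Multiscale Hessian-based (Frangi) enhancement of bright tubular structures."""
    vol = volume.astype(np.float32)
    vol = (vol - vol.min()) / (np.ptp(vol) + 1e-8)          # normalize to [0, 1]
    return frangi(vol, sigmas=sigmas, black_ridges=False)   # bright neurites on dark background

# toy example: a faint straight "neurite" buried in noise
rng = np.random.default_rng(0)
vol = rng.random((32, 64, 64)) * 0.2
vol[16, 32, :] += 0.4
enhanced = enhance_weak_neurites(vol)
print(enhanced.shape, float(enhanced.max()))
```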
Collapse
|
21
|
Zhao J, Chen X, Xiong Z, Liu D, Zeng J, Xie C, Zhang Y, Zha ZJ, Bi G, Wu F. Neuronal Population Reconstruction From Ultra-Scale Optical Microscopy Images via Progressive Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:4034-4046. [PMID: 32746145 DOI: 10.1109/tmi.2020.3009148] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Reconstruction of neuronal populations from ultra-scale optical microscopy (OM) images is essential to investigate neuronal circuits and brain mechanisms. Noise, low contrast, huge memory requirements, and high computational cost pose significant challenges for neuronal population reconstruction. Recently, many studies have been conducted to extract neuron signals using deep neural networks (DNNs). However, training such DNNs usually relies on a huge amount of voxel-wise annotations in OM images, which are expensive in terms of both money and labor. In this paper, we propose a novel framework for dense neuronal population reconstruction from ultra-scale images. To solve the problem of the high cost of obtaining manual annotations for training DNNs, we propose a progressive learning scheme for neuronal population reconstruction (PLNPR) which does not require any manual annotations. Our PLNPR scheme consists of a traditional neuron tracing module and a deep segmentation network that mutually complement and progressively promote each other. To reconstruct dense neuronal populations from a terabyte-sized ultra-scale image, we introduce an automatic framework which adaptively traces neurons block by block and fuses fragmented neurites in overlapped regions continuously and smoothly. We build a dataset "VISoR-40" which consists of 40 large-scale OM image blocks from cortical regions of a mouse. Extensive experimental results on our VISoR-40 dataset and the public BigNeuron dataset demonstrate the effectiveness and superiority of our method on neuronal population reconstruction and single neuron reconstruction. Furthermore, we successfully apply our method to reconstruct dense neuronal populations from an ultra-scale mouse brain slice. The proposed adaptive block propagation and fusion strategies greatly improve the completeness of neurites in dense neuronal population reconstruction.
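The block-by-block tracing-and-fusion strategy can be sketched independently of the deep model: process overlapping sub-volumes and fuse the results in the overlaps, here simply by taking the voxel-wise maximum. The per-block operation below is an arbitrary stand-in for the segmentation network, and block and overlap sizes are assumed values.

```python
import numpy as np

def process_in_blocks(volume, block=64, overlap=8, op=lambda b: (b > b.mean()).astype(np.float32)):
    """Apply `op` to overlapping blocks and fuse the outputs by the voxel-wise maximum."""
    fused = np.zeros(volume.shape, dtype=np.float32)
    step = block - overlap
    for z in range(0, volume.shape[0], step):
        for y in range(0, volume.shape[1], step):
            for x in range(0, volume.shape[2], step):
                sl = (slice(z, z + block), slice(y, y + block), slice(x, x + block))
                out = op(volume[sl])
                fused[sl] = np.maximum(fused[sl], out)   # fuse the overlapped regions
    return fused

rng = np.random.default_rng(0)
vol = rng.random((100, 100, 100)).astype(np.float32)
print(process_in_blocks(vol).shape)
```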
Collapse
|
22
|
Huang Q, Chen Y, Liu S, Xu C, Cao T, Xu Y, Wang X, Rao G, Li A, Zeng S, Quan T. Weakly Supervised Learning of 3D Deep Network for Neuron Reconstruction. Front Neuroanat 2020; 14:38. [PMID: 32848636 PMCID: PMC7399060 DOI: 10.3389/fnana.2020.00038] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Accepted: 06/05/2020] [Indexed: 11/13/2022] Open
Abstract
Digital reconstruction or tracing of 3D tree-like neuronal structures from optical microscopy images is essential for understanding the functionality of neurons and revealing the connectivity of neuronal networks. Despite the existence of numerous tracing methods, reconstructing a neuron from highly noisy images remains challenging, particularly for neurites with low and inhomogeneous intensities. Conducting deep convolutional neural network (CNN)-based segmentation prior to neuron tracing facilitates an approach to solving this problem via separation of weak neurites from a noisy background. However, deep learning-based methods need large amounts of manual annotation, which is labor-intensive and limits the algorithm's generalization to different datasets. In this study, we present a weakly supervised learning method for a deep CNN for neuron reconstruction without manual annotations. Specifically, we apply a 3D residual CNN as the architecture for discriminative neuronal feature extraction. We construct the initial pseudo-labels (without manual segmentation) of the neuronal images on the basis of an existing automatic tracing method. A weakly supervised learning framework is proposed via iterative training of the CNN model for improved prediction and refining of the pseudo-labels to update the training samples. The pseudo-labels are iteratively modified via mining and addition of weak neurites from the CNN-predicted probability map on the basis of their tubularity and continuity. The proposed method was evaluated on several challenging images from the public BigNeuron and DIADEM datasets, as well as fMOST datasets. Owing to the adoption of 3D deep CNNs and weakly supervised learning, the presented method demonstrates effective detection of weak neurites from noisy images and achieves results similar to those of the CNN model with manual annotations. The tracing performance was significantly improved by the proposed method on both small and large datasets (>100 GB). Moreover, the proposed method proved to be superior to several novel tracing methods on the original images. The results obtained on various large-scale datasets demonstrate the generalization and high precision achieved by the proposed method for neuron reconstruction.
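The iterative pseudo-label refinement can be illustrated with a lightweight stand-in for the 3D residual CNN (here a random forest over per-voxel features): an initial rough segmentation provides pseudo-labels, and they are updated only where the model is confident. The classifier choice, features and thresholds are assumptions for illustration, not the authors' architecture.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32)).astype(np.float32)
vol[16, :, 10:22] += 0.8                                   # synthetic bright neurite

# per-voxel features: raw intensity and a local mean
feats = np.stack([vol.ravel(), uniform_filter(vol, 3).ravel()], axis=1)
pseudo = (vol > 0.9).astype(int).ravel()                   # crude initial pseudo-labels (e.g. from a tracer)

for it in range(3):                                        # iterative refinement loop
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(feats, pseudo)
    prob = clf.predict_proba(feats)[:, 1]
    pseudo = np.where(prob > 0.8, 1, np.where(prob < 0.2, 0, pseudo))  # update only confident voxels
    print(f"iteration {it}: {pseudo.sum()} foreground voxels")
```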
Collapse
Affiliation(s)
- Qing Huang
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Yijun Chen
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Shijie Liu
- School of Mathematics and Physics, China University of Geosciences, Wuhan, China
| | - Cheng Xu
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Tingting Cao
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Yongchao Xu
- School of Electronics Information and Communications, Huazhong University of Science and Technology, Wuhan, China
| | - Xiaojun Wang
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Gong Rao
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Anan Li
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Shaoqun Zeng
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Tingwei Quan
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
23
|
Radojević M, Meijering E. Automated Neuron Reconstruction from 3D Fluorescence Microscopy Images Using Sequential Monte Carlo Estimation. Neuroinformatics 2020; 17:423-442. [PMID: 30542954 PMCID: PMC6594993 DOI: 10.1007/s12021-018-9407-8] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Microscopic images of neuronal cells provide essential structural information about the key constituents of the brain and form the basis of many neuroscientific studies. Computational analyses of the morphological properties of the captured neurons require first converting the structural information into digital tree-like reconstructions. Many dedicated computational methods and corresponding software tools have been and are continuously being developed with the aim to automate this step while achieving human-comparable reconstruction accuracy. This pursuit is hampered by the immense diversity and intricacy of neuronal morphologies as well as the often low quality and ambiguity of the images. Here we present a novel method we developed in an effort to improve the robustness of digital reconstruction against these complicating factors. The method is based on probabilistic filtering by sequential Monte Carlo estimation and uses prediction and update models designed specifically for tracing neuronal branches in microscopic image stacks. Moreover, it uses multiple probabilistic traces to arrive at a more robust, ensemble reconstruction. The proposed method was evaluated on fluorescence microscopy image stacks of single neurons and dense neuronal networks with expert manual annotations serving as the gold standard, as well as on synthetic images with known ground truth. The results indicate that our method performs well under varying experimental conditions and compares favorably to state-of-the-art alternative methods.
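To give a feel for the sequential Monte Carlo idea, here is a toy particle-filter step for tracing a single branch in a 2D image: particles carry a position and heading, prediction perturbs the heading, the update weights particles by image intensity, and resampling concentrates them on the branch. This is a didactic sketch, not the prediction and update models of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[32, :] = 1.0                                   # a horizontal bright "branch"

n = 200
pos = np.tile([32.0, 5.0], (n, 1))                 # particle positions (row, col) on the branch
ang = rng.normal(0.0, 0.3, n)                      # particle headings (0 = along +col)

for _ in range(20):
    ang = ang + rng.normal(0.0, 0.2, n)                        # predict: perturb the direction
    pos = pos + np.column_stack([np.sin(ang), np.cos(ang)])    # move one pixel along the heading
    r = np.clip(np.rint(pos[:, 0]).astype(int), 0, 63)
    c = np.clip(np.rint(pos[:, 1]).astype(int), 0, 63)
    w = img[r, c] + 1e-6                                       # update: weight by image intensity
    idx = rng.choice(n, size=n, p=w / w.sum())                 # resample proportionally to weight
    pos, ang = pos[idx], ang[idx]

print("tracked tip estimate (row, col):", pos.mean(axis=0).round(2))
```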
Collapse
Affiliation(s)
- Miroslav Radojević
- Biomedical Imaging Group Rotterdam, Departments of Medical Informatics and Radiology, Erasmus University Medical Center, Rotterdam, The Netherlands.
| | - Erik Meijering
- Biomedical Imaging Group Rotterdam, Departments of Medical Informatics and Radiology, Erasmus University Medical Center, Rotterdam, The Netherlands
| |
Collapse
|
24
|
Li Q, Shen L. 3D Neuron Reconstruction in Tangled Neuronal Image With Deep Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:425-435. [PMID: 31295108 DOI: 10.1109/tmi.2019.2926568] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Digital reconstruction or tracing of 3D neurons is essential for understanding brain function. While existing automatic tracing algorithms work well for clean neuronal images containing a single neuron, they are not robust when tracing neurons surrounded by nerve fibers. We propose a 3D U-Net-based network, namely 3D U-Net Plus, to segment the neuron from the surrounding fibers before the application of tracing algorithms. All the images in BigNeuron, the biggest available neuronal image dataset, contain clean neurons with no interference from nerve fibers, which makes them impractical for training the segmentation network. Based upon the BigNeuron images, we synthesize a SYNthetic TAngled NEuronal Image dataset (SYNTANEI) to train the proposed network, by fusing the neurons with extracted nerve fibers. Due to the adoption of dropout, atrous convolution and Atrous Spatial Pyramid Pooling (ASPP), experimental results on the synthetic and real tangled neuronal images show that the proposed 3D U-Net Plus network achieves very promising segmentation results. The neurons reconstructed by the tracing algorithm from the segmentation results match the ground truth significantly better than those reconstructed from the original images.
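For readers unfamiliar with ASPP, a minimal 3D variant in PyTorch looks like the sketch below: parallel dilated (atrous) convolutions at several rates plus a 1x1x1 branch, concatenated and projected back. The channel counts and dilation rates are arbitrary assumptions; this is not the 3D U-Net Plus implementation.

```python
import torch
import torch.nn as nn

class ASPP3D(nn.Module):
    """Parallel 3x3x3 atrous convolutions at different dilation rates plus a 1x1x1 branch."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv3d(in_ch, out_ch, kernel_size=1)] +
            [nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r) for r in rates]
        )
        self.project = nn.Conv3d(out_ch * (len(rates) + 1), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]   # every branch keeps the spatial size
        return self.project(torch.cat(feats, dim=1))

x = torch.randn(1, 8, 16, 32, 32)       # (batch, channels, depth, height, width)
print(ASPP3D(8, 16)(x).shape)           # -> torch.Size([1, 16, 16, 32, 32])
```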
Collapse
|
25
|
Jin DZ, Zhao T, Hunt DL, Tillage RP, Hsu CL, Spruston N. ShuTu: Open-Source Software for Efficient and Accurate Reconstruction of Dendritic Morphology. Front Neuroinform 2019; 13:68. [PMID: 31736735 PMCID: PMC6834530 DOI: 10.3389/fninf.2019.00068] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2019] [Accepted: 10/14/2019] [Indexed: 11/18/2022] Open
Abstract
Neurons perform computations by integrating inputs from thousands of synapses, mostly in the dendritic tree, to drive action potential firing in the axon. One fruitful approach to studying this process is to record from neurons using patch-clamp electrodes, fill the recorded neurons with a substance that allows subsequent staining, reconstruct the three-dimensional architectures of the dendrites, and use the resulting functional and structural data to develop computer models of dendritic integration. Accurately producing quantitative reconstructions of dendrites is typically a tedious process taking many hours of manual inspection and measurement. Here we present ShuTu, a new software package that facilitates accurate and efficient reconstruction of dendrites imaged using bright-field microscopy. The program operates in two steps: (1) automated identification of dendritic processes, and (2) manual correction of errors in the automated reconstruction. This approach allows neurons with complex dendritic morphologies to be reconstructed rapidly and efficiently, thus facilitating the use of computer models to study dendritic structure-function relationships and the computations performed by single neurons.
Collapse
Affiliation(s)
- Dezhe Z. Jin
- Department of Physics and Center for Neural Engineering, The Pennsylvania State University, University Park, PA, United States
| | - Ting Zhao
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
| | - David L. Hunt
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
| | - Rachel P. Tillage
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
| | - Ching-Lung Hsu
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
| | - Nelson Spruston
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
| |
Collapse
|
26
|
|
27
|
Li S, Quan T, Xu C, Huang Q, Kang H, Chen Y, Li A, Fu L, Luo Q, Gong H, Zeng S. Optimization of Traced Neuron Skeleton Using Lasso-Based Model. Front Neuroanat 2019; 13:18. [PMID: 30846931 PMCID: PMC6393391 DOI: 10.3389/fnana.2019.00018] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2018] [Accepted: 02/01/2019] [Indexed: 11/30/2022] Open
Abstract
Reconstruction of neuronal morphology from images mainly involves the extraction of neuronal skeleton points. It is an indispensable step in the quantitative analysis of neurons. Due to the complex morphology of neurons, many widely used tracing methods have difficulties in accurately acquiring skeleton points near branch points or in structures with tortuosity. Here, we propose two models to solve these problems. One is based on an L1-norm minimization model, which can better identify tortuous structures, namely local structures with large-curvature skeleton points; the other detects an optimized branch point by considering the combination patterns of all neurites that link to this point. We combine these two models to achieve optimized skeleton detection for a neuron. We validate our models on various datasets including MOST and BigNeuron. In addition, we demonstrate that our method can optimize the traced skeletons from large-scale images. These characteristics of our approach indicate that it can reduce manual editing of traced skeletons and help to accelerate the accurate reconstruction of neuronal morphology.
Collapse
Affiliation(s)
- Shiwei Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
| | - Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China; School of Mathematics and Economics, Hubei University of Education, Hubei, China
| | - Cheng Xu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
| | - Qing Huang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
| | - Hongtao Kang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
| | - Yijun Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
| | - Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
| | - Ling Fu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
| | - Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
| | - Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
| | - Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
| |
Collapse
|
28
|
Zhang D, Liu S, Song Y, Feng D, Peng H, Cai W. Automated 3D Soma Segmentation with Morphological Surface Evolution for Neuron Reconstruction. Neuroinformatics 2019; 16:153-166. [PMID: 29344781 DOI: 10.1007/s12021-017-9353-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
Automatic neuron reconstruction is important since it accelerates the collection of 3D neuron models for neuronal morphological studies. The majority of previous neuron reconstruction methods only focused on tracing neuron fibres without considering the somatic surface. Thus, topological errors are often present around the soma area in the results obtained by these tracing methods. Segmentation of the soma structures can be embedded in existing neuron tracing methods to reduce such topological errors. In this paper, we present a novel method to segment soma structures with complex geometry. It can be applied along with the existing methods in a fully automated pipeline. An approximate bounding block is first estimated based on a geodesic distance transform. Then the soma segmentation is obtained by evolving the surface with a set of morphological operators inside the initial bounding region. By evaluating the methods against the challenging images released by the BigNeuron project, we showed that the proposed method can outperform the existing soma segmentation methods in terms of accuracy. We also showed that the soma segmentation can be used to enhance the results of existing neuron tracing methods.
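A stripped-down version of the idea: seed the soma at the maximum of a distance transform of the binary neuron mask and evolve the region by repeated dilation constrained to the mask, stopping when growth stalls. This is only a sketch of the surface-evolution concept with assumed parameters, not the published algorithm.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_dilation

def segment_soma(mask, max_iters=50):
    """Grow a soma estimate from the distance-transform maximum, constrained to the neuron mask."""
    dist = distance_transform_edt(mask)
    seed = np.zeros_like(mask, dtype=bool)
    seed[np.unravel_index(np.argmax(dist), mask.shape)] = True   # deepest point ~ soma centre
    radius = dist.max()
    soma = seed
    for _ in range(max_iters):
        grown = binary_dilation(soma) & mask & (dist > 0.3 * radius)  # evolve the surface inside the mask
        if grown.sum() == soma.sum():
            break
        soma = grown
    return soma

# toy mask: a ball (soma) with a thin process attached
z, y, x = np.mgrid[:40, :40, :40]
mask = ((z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2) < 8 ** 2
mask[20, 20, 20:] = True
print(int(segment_soma(mask).sum()), "soma voxels")
```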
Collapse
Affiliation(s)
- Donghao Zhang
- School of Information Technologies, University of Sydney, Sydney, NSW, Australia.
| | - Siqi Liu
- School of Information Technologies, University of Sydney, Sydney, NSW, Australia
| | - Yang Song
- School of Information Technologies, University of Sydney, Sydney, NSW, Australia
| | - Dagan Feng
- School of Information Technologies, University of Sydney, Sydney, NSW, Australia
| | | | - Weidong Cai
- School of Information Technologies, University of Sydney, Sydney, NSW, Australia.
| |
Collapse
|
29
|
Li S, Quan T, Zhou H, Yin F, Li A, Fu L, Luo Q, Gong H, Zeng S. Identifying Weak Signals in Inhomogeneous Neuronal Images for Large-Scale Tracing of Sparsely Distributed Neurites. Neuroinformatics 2019; 17:497-514. [PMID: 30635864 PMCID: PMC6841657 DOI: 10.1007/s12021-018-9414-9] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Tracing neurites constitutes the core of neuronal morphology reconstruction, a key step toward neuronal circuit mapping. Modern optical-imaging techniques allow observation of nearly complete mouse neuron morphologies across brain regions or even the whole brain. However, highly automated reconstruction of neurons, i.e., reconstruction with only a few manual edits, requires discrimination of weak foreground points from the inhomogeneous background. We constructed an identification model in which empirical observations made from neuronal images were summarized into rules for designing feature vectors that classify foreground and background, and a support vector machine (SVM) was used to learn these feature vectors. We embedded this constructed SVM classifier into a previously developed tool, SparseTracer, to obtain SparseTracer-Learned Feature Vector (ST-LFV). ST-LFV can trace sparsely distributed neurites with weak signals (contrast-to-noise ratio < 1.5) against an inhomogeneous background in datasets imaged by widely used light-microscopy techniques like confocal microscopy and two-photon microscopy. Moreover, 12 sub-blocks were extracted from different brain regions. The average recall and precision rates were 99% and 97%, respectively. These results indicate that ST-LFV is well suited for weak signal identification under varying image characteristics. We also applied ST-LFV to trace long-range neurites from images where neurites are sparsely distributed but their image intensities are weak in some places. When tracing these long-range neurites, only one manual edit was required to obtain results equivalent to the ground truth, compared with 20 manual edits required by SparseTracer. This improvement in the level of automatic reconstruction indicates that ST-LFV has the potential to rapidly reconstruct sparsely distributed neurons at large scale.
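The foreground/background identification step can be mimicked by feeding a few hand-designed per-voxel features (intensity, local mean, local maximum) to an SVM, as in the toy sketch below; the feature set, kernel and labels are illustrative assumptions rather than the rules summarized by the authors.

```python
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter
from sklearn.svm import SVC

rng = np.random.default_rng(0)
vol = rng.normal(0.1, 0.05, (24, 48, 48)).astype(np.float32)
vol[12, 24, :] += 0.2                                   # weak neurite against a noisy background

feats = np.stack([vol.ravel(),
                  uniform_filter(vol, 3).ravel(),       # local mean
                  maximum_filter(vol, 3).ravel()],      # local maximum
                 axis=1)
labels = np.zeros(vol.shape, dtype=int)
labels[12, 24, :] = 1
labels = labels.ravel()

# train on half of the foreground plus a sample of background, test on the held-out foreground
fg, bg = np.flatnonzero(labels == 1), np.flatnonzero(labels == 0)
train = np.concatenate([fg[::2], rng.choice(bg, 4000, replace=False)])
clf = SVC(kernel="rbf", class_weight="balanced").fit(feats[train], labels[train])
recall = float((clf.predict(feats[fg[1::2]]) == 1).mean())
print(f"recall on held-out foreground voxels: {recall:.2f}")
```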
Collapse
Affiliation(s)
- Shiwei Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MOE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MOE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; School of Mathematics and Economics, Hubei University of Education, Wuhan, 430205, Hubei, China.
| | - Hang Zhou
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MOE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - FangFang Yin
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MOE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MOE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Ling Fu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MOE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MOE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MOE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| | - Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MOE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
| |
Collapse
|
30
|
Liu S, Zhang D, Song Y, Peng H, Cai W. Automated 3-D Neuron Tracing With Precise Branch Erasing and Confidence Controlled Back Tracking. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:2441-2452. [PMID: 29993997 DOI: 10.1109/tmi.2018.2833420] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The automatic reconstruction of single neurons from microscopic images is essential to enable large-scale data-driven investigations in neuron morphology research. However, few previous methods were able to generate satisfactory results automatically from 3-D microscopic images without human intervention. In this paper, we developed a new algorithm for automatic 3-D neuron reconstruction. The main idea of the proposed algorithm is to iteratively track backward from the potential neuronal termini to the soma centre. An online confidence score is computed to decide if a tracing iteration should be stopped and discarded from the final reconstruction. The performance improvements comparing with the previous methods are mainly introduced by a more accurate estimation of the traced area and the confidence controlled back-tracking algorithm. The proposed algorithm supports large-scale batch-processing by requiring only one user specified parameter for background segmentation. We bench tested the proposed algorithm on the images obtained from both the DIADEM challenge and the BigNeuron challenge. Our proposed algorithm achieved the state-of-the-art results.
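The back-tracking idea can be approximated by an intensity-weighted minimal-cost path from a putative terminus back to the soma, with a simple confidence score (mean intensity along the path) deciding whether the trace is kept. The cost definition and threshold below are assumptions for illustration, not the published confidence model.

```python
import numpy as np
from skimage.graph import route_through_array

rng = np.random.default_rng(0)
img = rng.random((64, 64)) * 0.2
img[32, 10:55] = 1.0                      # bright branch from soma (32, 10) to terminus (32, 54)

cost = 1.0 / (img + 0.05)                 # bright pixels are cheap to traverse
path, _ = route_through_array(cost, start=(32, 54), end=(32, 10),
                              fully_connected=True, geometric=True)
path = np.array(path)
confidence = img[path[:, 0], path[:, 1]].mean()
keep = confidence > 0.5                   # discard low-confidence tracing iterations
print(f"path length {len(path)}, confidence {confidence:.2f}, keep={keep}")
```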
Collapse
|
31
|
Boorboor S, Jadhav, Ananth M, Talmage D, Role, Kaufman A. Visualization of Neuronal Structures in Wide-Field Microscopy Brain Images. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 25:10.1109/TVCG.2018.2864852. [PMID: 30136950 PMCID: PMC6382602 DOI: 10.1109/tvcg.2018.2864852] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Wide-field microscopes are commonly used in neurobiology for experimental studies of brain samples. Available visualization tools are limited to electron, two-photon, and confocal microscopy datasets, and current volume rendering techniques do not yield effective results when used with wide-field data. We present a workflow for the visualization of neuronal structures in wide-field microscopy images of brain samples. We introduce a novel gradient-based distance transform that overcomes the out-of-focus blur caused by the inherent design of wide-field microscopes. This is followed by the extraction of the 3D structure of neurites using a multi-scale curvilinear filter and cell-bodies using a Hessian-based enhancement filter. The response from these filters is then applied as an opacity map to the raw data. Based on the visualization challenges faced by domain experts, our workflow provides multiple rendering modes to enable qualitative analysis of neuronal structures, which includes separation of cell-bodies from neurites and an intensity-based classification of the structures. Additionally, we evaluate our visualization results against both a standard image processing deconvolution technique and a confocal microscopy image of the same specimen. We show that our method is significantly faster and requires less computational resources, while producing high quality visualizations. We deploy our workflow in an immersive gigapixel facility as a paradigm for the processing and visualization of large, high-resolution, wide-field microscopy brain datasets.
Collapse
|
32
|
Kayasandik C, Negi P, Laezza F, Papadakis M, Labate D. Automated sorting of neuronal trees in fluorescent images of neuronal networks using NeuroTreeTracer. Sci Rep 2018; 8:6450. [PMID: 29691458 PMCID: PMC5915526 DOI: 10.1038/s41598-018-24753-w] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2017] [Accepted: 04/10/2018] [Indexed: 11/09/2022] Open
Abstract
Fluorescence confocal microscopy has become increasingly important in neuroscience due to its applications in image-based screening and profiling of neurons. Multispectral confocal imaging is useful to simultaneously probe for the distribution of multiple analytes over networks of neurons. However, current automated image analysis algorithms are not designed to extract single-neuron arbors in images where neurons are not separated, hampering the ability to map fluorescence signals at the single-cell level. To overcome this limitation, we introduce NeuroTreeTracer, a novel image processing framework aimed at automatically extracting and sorting single-neuron traces in fluorescent images of multicellular neuronal networks. This method applies directional multiscale filters for automated segmentation of neurons and soma detection, and includes a novel tracing routine that sorts neuronal trees in the image by resolving network connectivity even when neurites appear to intersect. By extracting each neuronal tree, NeuroTreeTracer makes it possible to automatically quantify the spatial distribution of analytes of interest in the subcellular compartments of individual neurons. This software is released open source and freely available, with the goal of facilitating applications in neuron screening and profiling.
Collapse
Affiliation(s)
- Cihan Kayasandik
- University of Houston, Department of Mathematics, Houston, Texas, United States of America
| | - Pooran Negi
- University of Houston, Department of Mathematics, Houston, Texas, United States of America
| | - Fernanda Laezza
- University of Texas Medical Branch, Department of Pharmacology and Toxicology, Galveston, Texas, United States of America
| | - Manos Papadakis
- University of Houston, Department of Mathematics, Houston, Texas, United States of America
| | - Demetrio Labate
- University of Houston, Department of Mathematics, Houston, Texas, United States of America.
| |
Collapse
|
33
|
Abstract
Digital reconstruction of a single neuron occupies an important position in computational neuroscience. Although many novel methods have been proposed, recent advances in molecular labeling and imaging systems allow for the production of large and complicated neuronal datasets, which pose many challenges for neuron reconstruction, especially when discontinuous neuronal morphology appears in a high-noise environment. Here, we develop a new pipeline to address this challenge. Our pipeline is based on two methods: one is the region-to-region connection (RRC) method for detecting the initial part of a neurite, which effectively gathers local cues, i.e., avoids analyzing the whole image, and thus boosts computational efficiency; the other is the constrained principal curves method for completing the neurite reconstruction, which uses the past reconstruction information of a neurite for the current reconstruction and is thus suitable for tracing discontinuous neurites. We investigate the reconstruction performance of our pipeline and some of the best state-of-the-art algorithms on experimental datasets, indicating the superiority of our method in reconstructing sparsely distributed neurons with discontinuous neuronal morphologies in noisy environments. We show the strong ability of our pipeline to deal with large-scale image datasets, and validate its effectiveness on various kinds of image stacks, including those from the DIADEM challenge and the BigNeuron project.
Collapse
|
34
|
Abstract
The reconstruction of neuron morphology allows researchers to investigate how the brain works, which is one of the foremost challenges in neuroscience. This process aims at extracting the neuronal structures from microscopic imaging data. The great advances in microscopic technologies have made a huge amount of data available at micrometer or even finer resolution, where manual inspection is time-consuming, error-prone and utterly impractical. This has motivated the development of methods to automatically trace the neuronal structures, a task also known as neuron tracing. This paper surveys the latest neuron tracing methods available in the scientific literature, as well as a selection of significant older papers to better place these proposals into context. They are categorized into global processing, local processing and meta-algorithm approaches. Furthermore, we point out the algorithmic components used to design each method and report information on the datasets and the performance metrics used.
Collapse
|
35
|
Liu S, Zhang D, Liu S, Feng D, Peng H, Cai W. Rivulet: 3D Neuron Morphology Tracing with Iterative Back-Tracking. Neuroinformatics 2018; 14:387-401. [PMID: 27184384 DOI: 10.1007/s12021-016-9302-0] [Citation(s) in RCA: 42] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
Abstract
The digital reconstruction of single neurons from 3D confocal microscopic images is an important tool for understanding neuronal morphology and function. However, accurate automatic neuron reconstruction remains a challenging task due to varying image quality and the complexity of the neuronal arborisation. Targeting the common challenges of neuron tracing, we propose a novel automatic 3D neuron reconstruction algorithm, named Rivulet, which is based on multi-stencils fast marching and iterative back-tracking. The proposed Rivulet algorithm is capable of tracing discontinuous areas without being interrupted by densely distributed noise. By evaluating the proposed pipeline with the data provided by the DIADEM challenge and the recent BigNeuron project, Rivulet is shown to be robust on challenging microscopic image stacks. We discuss the algorithm design in technical detail, including the relationships between the proposed algorithm and other state-of-the-art neuron tracing algorithms.
Collapse
Affiliation(s)
- Siqi Liu
- School of Information Technologies, University of Sydney, Darlington, NSW, Australia.
| | - Donghao Zhang
- School of Information Technologies, University of Sydney, Darlington, NSW, Australia
| | - Sidong Liu
- School of Information Technologies, University of Sydney, Darlington, NSW, Australia
| | - Dagan Feng
- School of Information Technologies, University of Sydney, Darlington, NSW, Australia
| | | | - Weidong Cai
- School of Information Technologies, University of Sydney, Darlington, NSW, Australia.
| |
Collapse
|
36
|
SmartScope2: Simultaneous Imaging and Reconstruction of Neuronal Morphology. Sci Rep 2017; 7:9325. [PMID: 28839271 PMCID: PMC5571186 DOI: 10.1038/s41598-017-10067-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2017] [Accepted: 07/21/2017] [Indexed: 11/12/2022] Open
Abstract
Quantitative analysis of neuronal morphology is critical in cell type classification and for deciphering how structure gives rise to function in the brain. Most current approaches to imaging and tracing neuronal 3D morphology are data intensive. We introduce SmartScope2, the first open source, automated neuron reconstruction machine integrating online image analysis with automated multiphoton imaging. SmartScope2 takes advantage of a neuron’s sparse morphology to improve imaging speed and reduce image data stored, transferred and analyzed. We show that SmartScope2 is able to produce the complex 3D morphology of human and mouse cortical neurons with six-fold reduction in image data requirements and three times the imaging speed compared to conventional methods.
Collapse
|
37
|
Radojevic M, Meijering E. Automated neuron tracing using probability hypothesis density filtering. Bioinformatics 2017; 33:1073-1080. [PMID: 28065895 DOI: 10.1093/bioinformatics/btw751] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2016] [Accepted: 11/22/2016] [Indexed: 01/18/2023] Open
Abstract
Motivation: The functionality of neurons and their role in neuronal networks is tightly connected to the cell morphology. A fundamental problem in many neurobiological studies aiming to unravel this connection is the digital reconstruction of neuronal cell morphology from microscopic image data. Many methods have been developed for this, but they are far from perfect, and better methods are needed. Results: Here we present a new method for tracing neuron centerlines needed for full reconstruction. The method uses a fundamentally different approach than previous methods by considering neuron tracing as a Bayesian multi-object tracking problem. The problem is solved using probability hypothesis density filtering. Results of experiments on 2D and 3D fluorescence microscopy image datasets of real neurons indicate the proposed method performs comparably or even better than the state of the art. Availability and implementation: Software implementing the proposed neuron tracing method was written in the Java programming language as a plugin for the ImageJ platform. Source code is freely available for non-commercial use at https://bitbucket.org/miroslavradojevic/phd. Contact: meijering@imagescience.org. Supplementary information: Supplementary data are available at Bioinformatics online.
Collapse
|
38
|
Zandt BJ, Losnegård A, Hodneland E, Veruki ML, Lundervold A, Hartveit E. Semi-automatic 3D morphological reconstruction of neurons with densely branching morphology: Application to retinal AII amacrine cells imaged with multi-photon excitation microscopy. J Neurosci Methods 2017; 279:101-118. [PMID: 28115187 DOI: 10.1016/j.jneumeth.2017.01.008] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2016] [Revised: 01/10/2017] [Accepted: 01/11/2017] [Indexed: 01/30/2023]
Abstract
BACKGROUND: Accurate reconstruction of the morphology of single neurons is important for morphometric studies and for developing compartmental models. However, manual morphological reconstruction can be extremely time-consuming and error-prone, and algorithms for automatic reconstruction can be challenged when applied to neurons with a high density of extensively branching processes. NEW METHOD: We present a procedure for semi-automatic reconstruction specifically adapted for densely branching neurons such as the AII amacrine cell found in mammalian retinas. We used whole-cell recording to fill AII amacrine cells in rat retinal slices with fluorescent dyes and acquired digital image stacks with multi-photon excitation microscopy. Our reconstruction algorithm combines elements of existing procedures, with segmentation based on adaptive thresholding and reconstruction based on a minimal spanning tree. We improved this workflow with an algorithm that reconnects neuron segments that are disconnected after adaptive thresholding, using paths extracted from the image stacks with the Fast Marching method. RESULTS: By reducing the likelihood that disconnected segments were incorrectly connected to neighboring segments, our procedure generated excellent morphological reconstructions of AII amacrine cells. COMPARISON WITH EXISTING METHODS: Reconstructing an AII amacrine cell required about 2 h of computing time, compared to 2-4 days for manual reconstruction. To evaluate the performance of our method relative to manual reconstruction, we performed detailed analysis using a measure of tree structure similarity (DIADEM score), the degree of projection area overlap (Dice coefficient), and branch statistics. CONCLUSIONS: We expect our procedure to be generally useful for morphological reconstruction of neurons filled with fluorescent dyes.
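Two pieces of this workflow can be sketched generically: a local-mean adaptive threshold to segment the dye-filled cell, and a minimum spanning tree over segment centroids as a crude stand-in for the reconnection step. The neighborhood size and offset are assumed parameters, and the real pipeline uses Fast Marching paths rather than straight-line distances between centroids.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label, center_of_mass
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
vol = rng.random((32, 64, 64)) * 0.3
vol[16, 32, 5:25] = 1.0
vol[16, 32, 30:55] = 1.0                              # a dendrite broken into two segments

# adaptive thresholding: a voxel must exceed its local mean by an offset
binary = vol > (uniform_filter(vol, size=9) + 0.2)

# reconnect disconnected segments with a minimum spanning tree over their centroids
lbl, n = label(binary)
centroids = np.array(center_of_mass(binary, lbl, range(1, n + 1)))
mst = minimum_spanning_tree(cdist(centroids, centroids))
print(f"{n} segments; MST edges (i, j):", list(zip(*mst.nonzero())))
```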
Collapse
Affiliation(s)
- Bas-Jan Zandt
- Department of Biomedicine, University of Bergen, Bergen, Norway
| | - Are Losnegård
- Department of Biomedicine, University of Bergen, Bergen, Norway
| | | | | | - Arvid Lundervold
- Department of Biomedicine, University of Bergen, Bergen, Norway; Department of Radiology, Haukeland University Hospital, Bergen, Norway
| | - Espen Hartveit
- Department of Biomedicine, University of Bergen, Bergen, Norway.
| |
Collapse
|
39
|
Turetken E, Benmansour F, Andres B, Glowacki P, Pfister H, Fua P. Reconstructing Curvilinear Networks Using Path Classifiers and Integer Programming. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2016; 38:2515-2530. [PMID: 26891482 DOI: 10.1109/tpami.2016.2519025] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
We propose a novel approach to automated delineation of curvilinear structures that form complex and potentially loopy networks. By representing the image data as a graph of potential paths, we first show how to weight these paths using discriminatively-trained classifiers that are both robust and generic enough to be applied to very different imaging modalities. We then present an Integer Programming approach to finding the optimal subset of paths, subject to structural and topological constraints that eliminate implausible solutions. Unlike earlier approaches that assume a tree topology for the networks, ours explicitly models the fact that the networks may contain loops, and can reconstruct both cyclic and acyclic ones. We demonstrate the effectiveness of our approach on a variety of challenging datasets including aerial images of road networks and micrographs of neural arbors, and show that it outperforms state-of-the-art techniques.
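The integer-programming flavor of the approach can be shown on a toy instance: select the subset of candidate paths with the highest classifier scores subject to each image edge being used at most once. The scores and incidence matrix below are invented, the formulation omits the richer structural and topological constraints of the paper, and scipy.optimize.milp requires a recent SciPy (1.9 or later).

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

scores = np.array([0.9, 0.8, 0.6, 0.3])      # classifier scores for 4 candidate paths
# rows = image edges, columns = paths; 1 if the path uses that edge
uses_edge = np.array([[1, 1, 0, 0],          # paths 0 and 1 conflict on edge 0
                      [0, 0, 1, 1],          # paths 2 and 3 conflict on edge 1
                      [1, 0, 0, 1]])         # paths 0 and 3 conflict on edge 2

res = milp(c=-scores,                                        # maximize the total score
           constraints=LinearConstraint(uses_edge, ub=1),    # each edge used at most once
           integrality=np.ones_like(scores),                 # binary decision variables
           bounds=Bounds(0, 1))
print("selected paths:", np.flatnonzero(res.x > 0.5), "total score:", -res.fun)
```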
Collapse
|
40
|
Wan Y, Long F, Qu L, Xiao H, Hawrylycz M, Myers EW, Peng H. BlastNeuron for Automated Comparison, Retrieval and Clustering of 3D Neuron Morphologies. Neuroinformatics 2016; 13:487-99. [PMID: 26036213 DOI: 10.1007/s12021-015-9272-7] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Characterizing the identity and types of neurons in the brain, as well as their associated function, requires a means of quantifying and comparing 3D neuron morphology. Presently, neuron comparison methods are based on statistics from neuronal morphology, such as size and number of branches, which are not fully suitable for detecting local similarities and differences in the detailed structure. We developed BlastNeuron to compare neurons in terms of their global appearance, detailed arborization patterns, and topological similarity. BlastNeuron first compares and clusters 3D neuron reconstructions based on global morphology features and moment invariants, independent of their orientations, sizes, level of reconstruction and other variations. Subsequently, BlastNeuron performs local alignment between any pair of retrieved neurons via a tree-topology-driven dynamic programming method. A 3D correspondence map can thus be generated at the resolution of single reconstruction nodes. We applied BlastNeuron to three datasets: (1) 10,000+ neuron reconstructions from a public morphology database, (2) 681 newly and manually reconstructed neurons, and (3) neuron reconstructions produced using several independent reconstruction methods. Our approach was able to accurately and efficiently retrieve morphologically and functionally similar neuron structures from a large morphology database, identify the local common structures, and find clusters of neurons that share similarities in both morphology and molecular profiles.
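The global retrieval stage can be imitated with a few crude per-reconstruction features (total cable length, node count, bounding-box extent) and a nearest-neighbour search; these features are deliberately simplistic stand-ins for the moment invariants, and the tree-alignment step is omitted.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def global_features(nodes, parents):
    """Crude global descriptors of one reconstruction given node coordinates and parent indices."""
    seg = nodes[parents >= 0] - nodes[parents[parents >= 0]]          # vectors from child to parent
    total_length = np.linalg.norm(seg, axis=1).sum()
    extent = nodes.max(axis=0) - nodes.min(axis=0)
    return np.array([total_length, len(nodes), *extent])

rng = np.random.default_rng(0)
library = []
for _ in range(50):                                   # fake "database" of random reconstructions
    nodes = np.cumsum(rng.normal(0, 1, (100, 3)), axis=0)
    parents = np.r_[-1, np.arange(99)]                # simple chain topology
    library.append(global_features(nodes, parents))

nn = NearestNeighbors(n_neighbors=3).fit(np.array(library))
query = library[7]
print("closest reconstructions:", nn.kneighbors([query], return_distance=False)[0])
```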
Affiliation(s)
- Yinan Wan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Fuhui Long
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Allen Institute for Brain Science, Seattle, WA, USA
- Lei Qu
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Key Laboratory of Intelligent Computation and Signal Processing, Ministry of Education, Anhui University, Hefei, China
- Hang Xiao
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Eugene W Myers
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
- Hanchuan Peng
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Allen Institute for Brain Science, Seattle, WA, USA
41
Basu S, Ooi WT, Racoceanu D. Neurite Tracing With Object Process. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1443-1451. [PMID: 26742129 DOI: 10.1109/tmi.2016.2515068] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
In this paper we present a pipeline for automatic analysis of neuronal morphology, from detection and modeling to digital reconstruction. First, we present an automatic, unsupervised object detection framework using a stochastic marked point process. It extracts connected neuronal networks by fitting a special configuration of marked objects to the centreline of the neurite branches in the image volume, giving us position, local width and orientation information. Semantic modeling of neuronal morphology in terms of critical nodes like bifurcations and terminals generates various geometric and morphology descriptors, such as branching index, branching angles, total neurite length and internodal lengths, for statistical inference on characteristic neuronal features. From the detected branches we reconstruct neuronal tree morphology using robust and efficient numerical fast marching methods. We capture a mathematical model abstracting out the relevant position, shape and connectivity information about neuronal branches from the microscopy data into connected minimum spanning trees. Such digital reconstruction is represented in the standard SWC format, prevalent for archiving, sharing, and further analysis in the neuroimaging community. Our proposed pipeline outperforms state-of-the-art methods in tracing accuracy and minimizes the subjective variability in reconstruction that is inherent to semi-automatic methods.
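Two pieces of such a pipeline translate naturally into short code: connecting detected centreline points into a tree and writing the result as SWC. The sketch below uses plain Prim's algorithm on Euclidean distances as a simplified stand-in for the fast-marching connection step; only the SWC column layout (index, type, x, y, z, radius, parent) follows the actual standard.

```python
import numpy as np

def mst_parents(points):
    """Prim's algorithm on Euclidean distances: link detected centreline
    points into a minimum spanning tree, a simplified stand-in for the
    fast-marching connection step described in the paper."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    in_tree = np.zeros(n, dtype=bool)
    parent = np.full(n, -1, dtype=int)
    best = np.full(n, np.inf)
    best[0] = 0.0
    for _ in range(n):
        i = int(np.argmin(np.where(in_tree, np.inf, best)))
        in_tree[i] = True
        d = np.linalg.norm(pts - pts[i], axis=1)
        closer = (~in_tree) & (d < best)
        best[closer] = d[closer]
        parent[closer] = i
    return parent  # parent index per point, -1 for the root

def write_swc(path, points, radii, parent, node_type=3):
    """Standard SWC layout: index type x y z radius parent (1-based ids,
    -1 for the root). The dendrite type code 3 is just a default here."""
    with open(path, "w") as f:
        for i, ((x, y, z), r, p) in enumerate(zip(points, radii, parent)):
            f.write(f"{i + 1} {node_type} {x:.3f} {y:.3f} {z:.3f} {r:.3f} "
                    f"{p + 1 if p >= 0 else -1}\n")
```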
42
Santamaría-Pang A, Hernandez-Herrera P, Papadakis M, Saggau P, Kakadiaris IA. Automatic Morphological Reconstruction of Neurons from Multiphoton and Confocal Microscopy Images Using 3D Tubular Models. Neuroinformatics 2016; 13:297-320. [PMID: 25631538 DOI: 10.1007/s12021-014-9253-2] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
The challenges faced in analyzing optical imaging data from neurons include a low signal-to-noise ratio of the acquired images and the multiscale nature of the tubular structures that range in size from hundreds of microns to hundreds of nanometers. In this paper, we address these challenges and present a computational framework for an automatic, three-dimensional (3D) morphological reconstruction of live nerve cells. The key aspects of this approach are: (i) detection of neuronal dendrites through learning 3D tubular models, and (ii) skeletonization by a new algorithm using a morphology-guided deformable model for extracting the dendritic centerline. To represent the neuron morphology, we introduce a novel representation, the Minimum Shape-Cost (MSC) Tree that approximates the dendrite centerline with sub-voxel accuracy and demonstrate the uniqueness of such a shape representation as well as its computational efficiency. We present extensive quantitative and qualitative results that demonstrate the accuracy and robustness of our method.
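For illustration, the dendrite-detection idea can be approximated with a generic Hessian-eigenvalue tubularity measure rather than the learned 3D tubular models described in the paper; the sketch below assumes a bright-tubes-on-dark-background volume and a single hand-picked scale.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tubularity(volume, sigma=1.5):
    """Generic Hessian-eigenvalue tubularity score for bright tubes on a dark
    background at one hand-picked scale; a rough proxy for dendrite
    detection, not the learned 3D tubular models of the paper."""
    v = np.asarray(volume, dtype=float)
    # Smoothed second derivatives give the Hessian entries at scale sigma.
    hessian = np.empty(v.shape + (3, 3))
    orders = {(0, 0): (2, 0, 0), (1, 1): (0, 2, 0), (2, 2): (0, 0, 2),
              (0, 1): (1, 1, 0), (0, 2): (1, 0, 1), (1, 2): (0, 1, 1)}
    for (i, j), order in orders.items():
        deriv = gaussian_filter(v, sigma=sigma, order=order)
        hessian[..., i, j] = deriv
        hessian[..., j, i] = deriv
    lam = np.linalg.eigvalsh(hessian)          # ascending: lam1 <= lam2 <= lam3
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]
    # Tube-like voxels: two strongly negative eigenvalues, one near zero.
    return np.where((l1 < 0) & (l2 < 0),
                    np.abs(l1 * l2) / (1.0 + np.abs(l3)), 0.0)
```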
Affiliation(s)
- Alberto Santamaría-Pang
- Computational Biomedicine Lab, Department of Computer Science, University of Houston, Houston, TX, 77204, USA
43
Adaptive Image Enhancement for Tracing 3D Morphologies of Neurons and Brain Vasculatures. Neuroinformatics 2016; 13:153-66. [PMID: 25310965 DOI: 10.1007/s12021-014-9249-y] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
It is important to digitally reconstruct the 3D morphology of neurons and brain vasculatures. A number of previous methods have been proposed to automate the reconstruction process. However, in many cases, noise and low signal contrast with respect to the image background still hamper our ability to use automation methods directly. Here, we propose an adaptive image enhancement method specifically designed to improve the signal-to-noise ratio of several types of individual neurons and brain vasculature images. Our method is based on detecting the salient features of fibrous structures, e.g. the axon and dendrites, combined with adaptive estimation of the optimal context windows in which such saliency would be detected. We tested this method for a range of brain image datasets and imaging modalities, including bright-field, confocal and multiphoton fluorescent images of neurons, and magnetic resonance angiograms. Applying our adaptive enhancement to these datasets led to improved accuracy and speed in automated tracing of the complicated morphology of neurons and vasculatures.
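A much-simplified stand-in for the idea is shown below, using fixed, non-adaptive windows and a percentile stretch per block; the paper's contribution is precisely the adaptive choice of those context windows, which is not reproduced here.

```python
import numpy as np

def blockwise_enhance(img, window=32, pct=(2, 98)):
    """Blockwise percentile stretch: every window is rescaled by its own
    low/high percentiles. The fixed window size is a simplification; the
    paper adaptively estimates the context windows instead."""
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    for r in range(0, img.shape[0], window):
        for c in range(0, img.shape[1], window):
            block = img[r:r + window, c:c + window]
            lo, hi = np.percentile(block, pct)
            out[r:r + window, c:c + window] = np.clip(
                (block - lo) / (hi - lo + 1e-9), 0.0, 1.0)
    return out
```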
44
Detrez JR, Verstraelen P, Gebuis T, Verschuuren M, Kuijlaars J, Langlois X, Nuydens R, Timmermans JP, De Vos WH. Image Informatics Strategies for Deciphering Neuronal Network Connectivity. ADVANCES IN ANATOMY, EMBRYOLOGY, AND CELL BIOLOGY 2016; 219:123-48. [PMID: 27207365 DOI: 10.1007/978-3-319-28549-8_5] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
Brain function relies on an intricate network of highly dynamic neuronal connections that rewires dramatically under the impulse of various external cues and pathological conditions. Amongst the neuronal structures that show morphological plasticity are neurites, synapses, dendritic spines and even nuclei. This structural remodelling is directly connected with functional changes such as intercellular communication and the associated calcium bursting behaviour. In vitro cultured neuronal networks are valuable models for studying these morpho-functional changes. Owing to the automation and standardization of both image acquisition and image analysis, it has become possible to extract statistically relevant readouts from such networks. Here, we focus on the current state-of-the-art in image informatics that enables quantitative microscopic interrogation of neuronal networks. We describe the major correlates of neuronal connectivity and present workflows for analysing them. Finally, we provide an outlook on the challenges that remain to be addressed, and discuss how imaging algorithms can be extended beyond in vitro imaging studies.
Affiliation(s)
- Jan R Detrez
- Laboratory of Cell Biology and Histology, Department of Veterinary Sciences, University of Antwerp, Groenenborgerlaan 171, 2020, Antwerp, Belgium
- Peter Verstraelen
- Laboratory of Cell Biology and Histology, Department of Veterinary Sciences, University of Antwerp, Groenenborgerlaan 171, 2020, Antwerp, Belgium
- Titia Gebuis
- Department of Molecular and Cellular Neurobiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, De Boelelaan 1085, 1081 HV, Amsterdam, The Netherlands
- Marlies Verschuuren
- Laboratory of Cell Biology and Histology, Department of Veterinary Sciences, University of Antwerp, Groenenborgerlaan 171, 2020, Antwerp, Belgium
- Jacobine Kuijlaars
- Neuroscience Department, Janssen Research and Development, Turnhoutseweg 30, 2340, Beerse, Belgium
- Laboratory for Cell Physiology, Biomedical Research Institute (BIOMED), Hasselt University, Agoralaan, 3590, Diepenbeek, Belgium
- Xavier Langlois
- Neuroscience Department, Janssen Research and Development, Turnhoutseweg 30, 2340, Beerse, Belgium
- Rony Nuydens
- Neuroscience Department, Janssen Research and Development, Turnhoutseweg 30, 2340, Beerse, Belgium
- Jean-Pierre Timmermans
- Laboratory of Cell Biology and Histology, Department of Veterinary Sciences, University of Antwerp, Groenenborgerlaan 171, 2020, Antwerp, Belgium
- Winnok H De Vos
- Laboratory of Cell Biology and Histology, Department of Veterinary Sciences, University of Antwerp, Groenenborgerlaan 171, 2020, Antwerp, Belgium.
- Cell Systems and Cellular Imaging, Department Molecular Biotechnology, Ghent University, Coupure Links 653, 9000, Ghent, Belgium.
45
Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images. Sci Rep 2015; 5:17062. [PMID: 26593337 PMCID: PMC4655406 DOI: 10.1038/srep17062] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2015] [Accepted: 10/02/2015] [Indexed: 12/27/2022] Open
Abstract
Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing a Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA, which is based on a vector representation, is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on a raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation.
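The two ingredients named in the abstract can be sketched generically: a Laplacian-of-Gaussian response and a chain model solved by dynamic programming. The code below is not the NIA implementation; the column-by-column chain is a toy substitute for its Hidden Markov / fully connected chain models.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def neurite_response(img, sigma=2.0):
    """Laplacian-of-Gaussian response; bright tubular structures on a dark
    background give strongly negative LoG values, so the sign is flipped."""
    return -gaussian_laplace(np.asarray(img, dtype=float), sigma=sigma)

def trace_chain(response, max_step=2):
    """Toy dynamic-programming chain standing in for the paper's graphical
    models: pick one row per column so the summed response is maximal while
    successive rows differ by at most max_step pixels."""
    resp = np.asarray(response, dtype=float)
    n_rows, n_cols = resp.shape
    score = np.full((n_rows, n_cols), -np.inf)
    back = np.zeros((n_rows, n_cols), dtype=int)
    score[:, 0] = resp[:, 0]
    for c in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - max_step), min(n_rows, r + max_step + 1)
            prev = lo + int(np.argmax(score[lo:hi, c - 1]))
            score[r, c] = resp[r, c] + score[prev, c - 1]
            back[r, c] = prev
    rows = [int(np.argmax(score[:, -1]))]
    for c in range(n_cols - 1, 0, -1):
        rows.append(int(back[rows[-1], c]))
    return rows[::-1]  # traced centreline: row index per column
```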
46
Quan T, Zhou H, Li J, Li S, Li A, Li Y, Lv X, Luo Q, Gong H, Zeng S. NeuroGPS-Tree: automatic reconstruction of large-scale neuronal populations with dense neurites. Nat Methods 2015; 13:51-4. [PMID: 26595210 DOI: 10.1038/nmeth.3662] [Citation(s) in RCA: 86] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2014] [Accepted: 10/22/2015] [Indexed: 02/04/2023]
Abstract
The reconstruction of neuronal populations, a key step in understanding neural circuits, remains a challenge in the presence of densely packed neurites. Here we achieved automatic reconstruction of neuronal populations by partially mimicking human strategies to separate individual neurons. For populations not resolvable by other methods, we obtained recall and precision rates of approximately 80%. We also demonstrate the reconstruction of 960 neurons within 3 h.
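The abstract gives little algorithmic detail, so the following is only a loose, illustrative stand-in for separating individual neurons: greedy competitive growth of soma-seeded labels over a traced point cloud, with a hypothetical step parameter.

```python
import numpy as np

def separate_neurons(points, somas, step=3.0):
    """Greedy competitive growth of soma-seeded labels over a traced point
    cloud: a point joins neuron s when it lies within `step` of a point
    already assigned to s. A loose, order-dependent stand-in for the
    neuron-separation strategy; `step` is a hypothetical parameter."""
    pts = np.asarray(points, dtype=float)
    labels = np.full(len(pts), -1, dtype=int)
    for s, soma in enumerate(np.asarray(somas, dtype=float)):
        labels[np.argmin(np.linalg.norm(pts - soma, axis=1))] = s
    grew = True
    while grew:
        grew = False
        for s in range(len(somas)):
            claimed = pts[labels == s]
            free = np.where(labels == -1)[0]
            if len(claimed) == 0 or len(free) == 0:
                continue
            d = np.linalg.norm(pts[free][:, None, :] - claimed[None, :, :],
                               axis=2).min(axis=1)
            reach = free[d <= step]
            if len(reach):
                labels[reach] = s
                grew = True
    return labels  # neuron index per traced point, -1 if unreachable
```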
Affiliation(s)
- Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, China; School of Mathematics and Statistics, Hubei University of Education, Wuhan, China
- Hang Zhou
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Jing Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Shiwei Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Yuxin Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Xiaohua Lv
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; Ministry of Education Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, China
47
Luo G, Sui D, Wang K, Chae J. Neuron anatomy structure reconstruction based on a sliding filter. BMC Bioinformatics 2015; 16:342. [PMID: 26498293 PMCID: PMC4619512 DOI: 10.1186/s12859-015-0780-0] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2015] [Accepted: 10/16/2015] [Indexed: 12/18/2022] Open
Abstract
BACKGROUND Reconstruction of neuron anatomy structure is a challenging and important task in neuroscience. However, few algorithms can automatically reconstruct the full structure well without manual assistance, making it essential to develop new methods for this task. METHODS This paper introduces a new pipeline for reconstructing neuron anatomy structure from 3-D microscopy image stacks. This pipeline is initialized with a set of seeds that were detected by our proposed Sliding Volume Filter (SVF), given a non-circular cross-section of a neuron cell. Then, an improved open curve snake model combined with an SVF external force is applied to trace the full skeleton of the neuron cell. A radius estimation method based on a 2D sliding band filter is developed to fit the real edge of the cross-section of the neuron cell. Finally, a surface reconstruction method based on non-parallel curve networks is used to generate the neuron cell surface to complete the pipeline. RESULTS The proposed pipeline has been evaluated using publicly available datasets. The results show that it achieves promising results on some datasets from the DIgital reconstruction of Axonal and DEndritic Morphology (DIADEM) challenge and the new BigNeuron project. CONCLUSION The new pipeline works well in neuron tracing and reconstruction. It can achieve higher efficiency, stability and robustness in neuron skeleton tracing. Furthermore, the proposed radius estimation method and the applied surface reconstruction method can obtain more accurate neuron anatomy structures.
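One step that lends itself to a compact sketch is radius estimation at a traced centreline point. The code below scores concentric rings around the point and reads the radius off the sharpest intensity drop; it is a crude 2D analogue of a sliding-band criterion, not the authors' filter, and the search range is arbitrary.

```python
import numpy as np

def estimate_radius(img, center, r_max=15):
    """Radial profile around a traced centreline point: mean intensity in
    unit-width rings, with the radius read off at the steepest drop. A
    crude analogue of a sliding-band criterion, not the authors' filter;
    `r_max` is an arbitrary search range."""
    img = np.asarray(img, dtype=float)
    cy, cx = center
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    profile = []
    for r in range(r_max):
        ring = (dist >= r) & (dist < r + 1)
        profile.append(img[ring].mean() if ring.any() else 0.0)
    profile = np.array(profile)
    drops = profile[:-1] - profile[1:]     # fall-off between adjacent rings
    return float(np.argmax(drops)) + 0.5   # radius at the strongest edge
```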
Affiliation(s)
- Gongning Luo
- Research Center of Perception and Computing, School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China.
- Dong Sui
- Research Center of Perception and Computing, School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China.
- Kuanquan Wang
- Research Center of Perception and Computing, School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China.
- Jinseok Chae
- Department of Computer Science and Engineering, Incheon National University, Incheon, Korea.
48
Fua P, Knott GW. Modeling brain circuitry over a wide range of scales. Front Neuroanat 2015; 9:42. [PMID: 25904852 PMCID: PMC4387921 DOI: 10.3389/fnana.2015.00042] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2015] [Accepted: 03/17/2015] [Indexed: 11/13/2022] Open
Abstract
If we are ever to unravel the mysteries of brain function at its most fundamental level, we will need a precise understanding of how its component neurons connect to each other. Electron Microscopes (EM) can now provide the nanometer resolution that is needed to image synapses, and therefore connections, while Light Microscopes (LM) see at the micrometer resolution required to model the 3D structure of the dendritic network. Since both the topology and the connection strength are integral parts of the brain's wiring diagram, being able to combine these two modalities is critically important. In fact, these microscopes now routinely produce high-resolution imagery in such large quantities that the bottleneck becomes automated processing and interpretation, which is needed for such data to be exploited to its full potential. In this paper, we briefly review the Computer Vision techniques we have developed at EPFL to address this need. They include delineating dendritic arbors from LM imagery, segmenting organelles from EM, and combining the two into a consistent representation.
Affiliation(s)
- Pascal Fua
- Computer Vision Lab, I&C School, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Graham W Knott
- Bioelectron Microscopy Core Facility, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
49
Lee YH, Lin YN, Chuang CC, Lo CC. SPIN: a method of skeleton-based polarity identification for neurons. Neuroinformatics 2015; 12:487-507. [PMID: 24692020 DOI: 10.1007/s12021-014-9225-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
Directional signal transmission is essential for neural circuit function and thus for connectomic analysis. The directions of signal flow can be obtained by experimentally identifying neuronal polarity (axons or dendrites). However, the experimental techniques are not applicable to existing neuronal databases in which polarity information is not available. To address the issue, we proposed SPIN: a method of Skeleton-based Polarity Identification for Neurons. SPIN was designed to work with large-scale neuronal databases in which tracing-line data are available. In SPIN, a classifier is first trained by neurons with known polarity in two steps: 1) identifying morphological features that most correlate with the polarity and 2) constructing a linear classifier by determining a discriminant axis (a specific combination of the features) and decision boundaries. Each polarity-undefined neuron is then divided into several morphological substructures (domains) and the corresponding polarities are determined using the classifier. Finally, the result is evaluated and warnings for potential errors are returned. We tested this method on fruitfly (Drosophila melanogaster) and blowfly (Calliphora vicina and Calliphora erythrocephala) unipolar neurons using data obtained from the Flycircuit and Neuromorpho databases, respectively. On average, the polarity of 84-92 % of the terminal points in each neuron could be correctly identified. An ideal performance, with an accuracy between 93 and 98 %, can be achieved if SPIN is fed with relatively "clean" data without artificial branches. Our results demonstrate that SPIN, as a computer-based semi-automatic method, provides quick and accurate polarity identification and is particularly suitable for analyzing large-scale data. We implemented SPIN in Matlab and released the code under the GPLv3 license.
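The "discriminant axis plus decision boundary" step maps onto a standard Fisher linear discriminant, shown below with hypothetical feature matrices for axonal and dendritic domains; the real SPIN features, training data and Matlab implementation are not reproduced.

```python
import numpy as np

def fisher_axis(axon_feats, dend_feats):
    """Fisher linear discriminant: a direction w separating two labelled
    feature sets, illustrating the 'discriminant axis' idea. The feature
    matrices here are hypothetical, not SPIN's actual features."""
    Xa = np.asarray(axon_feats, dtype=float)
    Xd = np.asarray(dend_feats, dtype=float)
    mu_a, mu_d = Xa.mean(axis=0), Xd.mean(axis=0)
    Sw = np.cov(Xa, rowvar=False) + np.cov(Xd, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu_a - mu_d)
    threshold = 0.5 * ((Xa @ w).mean() + (Xd @ w).mean())
    return w, threshold

def classify_domains(features, w, threshold):
    """Project neuron domains onto the discriminant axis and threshold;
    projections above the midpoint are called axonal."""
    return np.where(np.asarray(features, dtype=float) @ w > threshold,
                    "axon", "dendrite")
```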
Affiliation(s)
- Yi-Hsuan Lee
- Institute of Systems Neuroscience, National Tsing Hua University, Hsinchu, 30013, Taiwan
50
neuTube 1.0: A New Design for Efficient Neuron Reconstruction Software Based on the SWC Format. eNeuro 2015; 2:eN-MNT-0049-14. [PMID: 26464967 PMCID: PMC4586918 DOI: 10.1523/eneuro.0049-14.2014] [Citation(s) in RCA: 166] [Impact Index Per Article: 16.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2014] [Revised: 12/23/2014] [Accepted: 12/23/2014] [Indexed: 11/25/2022] Open
Abstract
Compared to other existing tools, the novel software we present has some unique features such as comprehensive editing functions and the combination of seed-based tracing and path searching algorithms, as well as their availability in parallel 2D and 3D visualization. These features allow the user to reconstruct neuronal morphology efficiently in a comfortable “What You See Is What You Get” (WYSIWYG) way. Brain circuit mapping requires digital reconstruction of neuronal morphologies in complicated networks. Despite recent advances in automatic algorithms, reconstruction of neuronal structures is still a bottleneck in circuit mapping due to a lack of appropriate software for both efficient reconstruction and user-friendly editing. Here we present a new software design based on the SWC format, a standardized neuromorphometric format that has been widely used for analyzing neuronal morphologies or sharing neuron reconstructions via online archives such as NeuroMorpho.org. We have also implemented the design in our open-source software called neuTube 1.0. As specified by the design, the software is equipped with parallel 2D and 3D visualization and intuitive neuron tracing/editing functions, allowing the user to efficiently reconstruct neurons from fluorescence image data and edit standard neuron structure files produced by any other reconstruction software. We show the advantages of neuTube 1.0 by comparing it to two other software tools, namely Neuromantic and Neurostudio. The software is available for free at http://www.neutracing.com, which also hosts complete software documentation and video tutorials.
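Because the design is organised around the SWC format, a minimal reader for that format is a useful companion sketch; the seven-column layout (index, type, x, y, z, radius, parent) is the community standard, while the helper below and its total-length readout are only an illustration, not part of neuTube.

```python
import numpy as np

def read_swc(path):
    """Parse SWC records (index, type, x, y, z, radius, parent) into arrays;
    lines starting with '#' are comments."""
    rows = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                rows.append([float(v) for v in line.split()])
    data = np.array(rows)
    return {
        "id": data[:, 0].astype(int),
        "type": data[:, 1].astype(int),
        "xyz": data[:, 2:5],
        "radius": data[:, 5],
        "parent": data[:, 6].astype(int),
    }

def total_length(swc):
    """Total cable length: sum of distances from each non-root node to its parent."""
    idx = {node_id: k for k, node_id in enumerate(swc["id"])}
    length = 0.0
    for k, p in enumerate(swc["parent"]):
        if p != -1:
            length += float(np.linalg.norm(swc["xyz"][k] - swc["xyz"][idx[p]]))
    return length
```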