1
Chen W, Liao M, Bao S, An S, Li W, Liu X, Huang G, Gong H, Luo Q, Xiao C, Li A. A hierarchically annotated dataset drives tangled filament recognition in digital neuron reconstruction. Patterns (N Y) 2024; 5:101007. [PMID: 39233689] [PMCID: PMC11368685] [DOI: 10.1016/j.patter.2024.101007]
Abstract
Reconstructing neuronal morphology is vital for classifying neurons and mapping brain connectivity. However, it remains a significant challenge because of the complex structure of neurons, their dense distribution, and low image contrast. In particular, AI-assisted methods often yield numerous errors that require extensive manual intervention, so reconstructing even hundreds of neurons is a daunting task for general research projects. A key issue is the lack of specialized training for challenging regions, owing to inadequate data and training methods. In this study, we extracted 2,800 challenging neuronal blocks and categorized them into multiple density levels. Furthermore, we enhanced the images using an axial continuity-based network that improves three-dimensional voxel resolution while reducing the difficulty of neuron recognition. Comparing automatic reconstruction results before and after enhancement on fluorescence micro-optical sectioning tomography (fMOST) data, we observed a significant increase in recall. Our study not only raises the throughput of reconstruction but also provides a foundational dataset for tangled neuron reconstruction.
Affiliation(s)
- Wu Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Mingwei Liao
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Shengda Bao
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Sile An
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Wenwei Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Xin Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Ganghua Huang
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570228, China
- Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou 215123, China
- Qingming Luo
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570228, China
- Chi Xiao
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570228, China
- Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou 215123, China
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570228, China
2
Choi YK, Feng L, Jeong WK, Kim J. Connecto-informatics at the mesoscale: current advances in image processing and analysis for mapping the brain connectivity. Brain Inform 2024; 11:15. [PMID: 38833195] [DOI: 10.1186/s40708-024-00228-9]
Abstract
Mapping neural connections within the brain has been a fundamental goal in neuroscience, helping us better understand brain function and the changes that follow aging and disease. Developments in imaging technology, such as microscopy and labeling tools, have allowed researchers to visualize this connectivity through high-resolution brain-wide imaging, making image processing and analysis ever more crucial. However, despite the wealth of neural images generated, access to an integrated image-processing and analysis pipeline remains difficult because information on available tools and methods is scattered. Mapping neural connections requires registration to atlases and feature extraction through segmentation and signal detection. In this review, we provide an updated overview of recent advances in these image-processing methods, with a particular focus on fluorescent images of the mouse brain, and outline a pathway toward an integrated image-processing pipeline tailored for connecto-informatics. Such an integrated workflow will help researchers map brain connectivity and better understand complex brain networks and their underlying functions. By highlighting the image-processing tools available for fluorescent imaging of the mouse brain, this review contributes to a deeper grasp of connecto-informatics, paving the way for better comprehension of brain connectivity and its implications.
Affiliation(s)
- Yoon Kyoung Choi
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea
- Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Won-Ki Jeong
- Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Jinhyun Kim
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea.
- Department of Computer Science and Engineering, Korea University, Seoul, South Korea.
- KIST-SKKU Brain Research Center, SKKU Institute for Convergence, Sungkyunkwan University, Suwon, South Korea.
3
Ren J, Che J, Gong P, Wang X, Li X, Li A, Xiao C. Cross comparison representation learning for semi-supervised segmentation of cellular nuclei in immunofluorescence staining. Comput Biol Med 2024; 171:108102. [PMID: 38350398] [DOI: 10.1016/j.compbiomed.2024.108102]
Abstract
The morphological analysis of cells from optical images is vital for interpreting brain function in disease states. Extracting comprehensive cell morphology from the intricate backgrounds common in neural and some medical images poses a significant challenge. Given the huge workload of manual recognition, automated neuron cell segmentation using deep learning algorithms with labeled data is integral to neural image analysis tools. To combat the high cost of acquiring labeled data, we propose a novel semi-supervised cell segmentation algorithm for immunofluorescence-stained cell image datasets (ISC), built on a mean-teacher semi-supervised learning framework. We include a "cross comparison representation learning block" to enhance the teacher-student model comparison on high-dimensional channels, thereby improving feature compactness and separability and allowing higher-dimensional features to be extracted from unlabeled data. We also propose a new network, the Multi Pooling Layer Attention Dense Network (MPAD-Net), serving as the backbone of the student model to improve segmentation accuracy. Evaluations on the immunofluorescence staining datasets and the public CRAG dataset show that our method surpasses other leading semi-supervised learning methods, achieving average Jaccard, Dice and Normalized Surface Dice (NSD) scores of 83.22%, 90.95% and 81.90% with only 20% labeled data. The datasets and code are available at https://github.com/Brainsmatics/CCRL.
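In a mean-teacher framework like the one referenced above, the teacher model's parameters track an exponential moving average (EMA) of the student's. A minimal illustrative sketch of the EMA step, not the paper's implementation (the parameter names and `alpha` value are hypothetical):

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    """Mean-teacher EMA step: teacher <- alpha * teacher + (1 - alpha) * student.
    The teacher thus becomes a smoothed average of student weights over training."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

# toy example: one weight tensor, teacher initialized at zero
teacher = [np.zeros(3)]
student = [np.ones(3)]
teacher = ema_update(teacher, student, alpha=0.9)  # each entry moves 10% toward the student
```

In practice the same update runs after every training step, so the teacher provides more stable targets for the unlabeled-data consistency loss.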
Affiliation(s)
- Jianran Ren
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Jingyi Che
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Peicong Gong
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Xiaojun Wang
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Xiangning Li
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Anan Li
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China; Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Chi Xiao
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China.
4
Wang Y, Lang R, Li R, Zhang J. NRTR: Neuron Reconstruction With Transformer From 3D Optical Microscopy Images. IEEE Trans Med Imaging 2024; 43:886-898. [PMID: 37847618] [DOI: 10.1109/tmi.2023.3323466]
Abstract
Neuron reconstruction from raw Optical Microscopy (OM) image stacks is fundamental to neuroscience. Manual annotation and semi-automatic neuron tracing algorithms are time-consuming and inefficient. Existing deep learning neuron reconstruction methods, although demonstrating exemplary performance, depend heavily on complex rule-based components. A crucial challenge is therefore to design an end-to-end neuron reconstruction method that makes the overall framework simpler and model training easier. We propose the Neuron Reconstruction Transformer (NRTR), which discards the complex rule-based components and views neuron reconstruction as a direct set-prediction problem. To the best of our knowledge, NRTR is the first image-to-set deep learning model for end-to-end neuron reconstruction. The overall pipeline consists of a CNN backbone, a Transformer encoder-decoder, and a connectivity construction module. NRTR generates a point set representing neuron morphological characteristics from raw neuron images; relationships among the points are established through connectivity construction, and the point set is saved as a standard SWC file. In experiments on the BigNeuron and VISoR-40 datasets, NRTR achieves excellent neuron reconstruction results across comprehensive benchmarks and outperforms competitive baselines. Extensive experiments indicate that treating neuron reconstruction as a set-prediction problem is effective and makes end-to-end model training feasible.
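The SWC format mentioned above is a plain-text standard for neuron morphologies: one node per line with seven columns (id, structure type, x, y, z, radius, parent id, with -1 marking the root). A minimal writer sketch, not NRTR's code (the file name and node values are arbitrary):

```python
def write_swc(nodes, path):
    """Write neuron nodes in the standard 7-column SWC format.
    nodes: iterable of (node_id, type_id, x, y, z, radius, parent_id)."""
    with open(path, "w") as f:
        f.write("# id type x y z radius parent\n")
        for n, t, x, y, z, r, p in nodes:
            f.write(f"{n} {t} {x:.3f} {y:.3f} {z:.3f} {r:.3f} {p}\n")

# a two-node toy neuron: a soma (type 1) with one dendrite point (type 3)
write_swc([(1, 1, 0.0, 0.0, 0.0, 2.0, -1),
           (2, 3, 1.0, 0.0, 0.0, 0.5, 1)], "toy_neuron.swc")
```

Because each node stores its parent's id, the flat file encodes the full tree topology, which is why SWC is the common interchange format among tracing tools.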
5
Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023; 167:107617. [PMID: 37918261] [DOI: 10.1016/j.compbiomed.2023.107617]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information that can help us understand the working mechanisms of the brain. However, processing and analyzing these data is challenging because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in cells and tissues, and imaging artifacts. Owing to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges, and they perform well across a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China; Research Institute of Hunan University in Chongqing, Chongqing, 401135, China.
- Weixun Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
6
Li Z, Shang Z, Liu J, Zhen H, Zhu E, Zhong S, Sturgess RN, Zhou Y, Hu X, Zhao X, Wu Y, Li P, Lin R, Ren J. D-LMBmap: a fully automated deep-learning pipeline for whole-brain profiling of neural circuitry. Nat Methods 2023; 20:1593-1604. [PMID: 37770711] [PMCID: PMC10555838] [DOI: 10.1038/s41592-023-01998-6]
Abstract
Recent proliferation and integration of tissue-clearing methods and light-sheet fluorescence microscopy have created new opportunities to achieve mesoscale three-dimensional whole-brain connectivity mapping with exceptionally high throughput. With the rapid generation of large, high-quality imaging datasets, downstream analysis is becoming the major technical bottleneck for mesoscale connectomics. Current computational solutions are labor-intensive with limited applications because of the exhaustive manual annotation and heavily customized training they require. Meanwhile, whole-brain data analysis often requires combining multiple packages and secondary development by users. To address these challenges, we developed D-LMBmap, an end-to-end package providing an integrated workflow with three deep-learning-based modules for whole-brain connectivity mapping: axon segmentation, brain region segmentation and whole-brain registration. D-LMBmap does not require manual annotation for axon segmentation and achieves quantitative analysis of the whole-brain projectome in a single workflow, with superior accuracy for multiple cell types in all of the modalities tested.
Affiliation(s)
- Zhongyu Li
- Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK
- Zengyi Shang
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Jingyi Liu
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Haotian Zhen
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Entao Zhu
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Shilin Zhong
- National Institute of Biological Sciences (NIBS), Beijing, China
- Robyn N Sturgess
- Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK
- Yitian Zhou
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Xuemeng Hu
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Xingyue Zhao
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Yi Wu
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Peiqi Li
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Rui Lin
- National Institute of Biological Sciences (NIBS), Beijing, China
- Jing Ren
- Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK.
7
Wu Y, Yang Z, Liu M, Han Y. Application of fluorescence micro-optical sectioning tomography in the cerebrovasculature and applicable vascular labeling methods. Brain Struct Funct 2023; 228:1619-1627. [PMID: 37481741] [DOI: 10.1007/s00429-023-02684-1]
Abstract
Fluorescence micro-optical sectioning tomography (fMOST) is a three-dimensional (3D) imaging method at the mesoscopic level. The whole mouse brain can be imaged at a high resolution of 0.32 × 0.32 × 1.00 μm³. fMOST is useful for revealing the fine morphology of intact organ tissue, and even for locating a single vessel within a complicated vascular network spanning different brain regions of the whole mouse brain. Featuring 3D visualization of whole-brain cross-scale connections, fMOST has vast potential for deciphering brain function and disease. This article begins with the background of fMOST technology, including a comparison with other widespread 3D imaging methods and an illustration of its basic technical principles, followed by applications of fMOST in cerebrovascular research and the vascular labeling techniques applicable to different scenarios.
Affiliation(s)
- Yang Wu
- Department of Neurology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, 110 Ganhe Road, Shanghai, 200437, China
- Zidong Yang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, 825 Zhangheng Road, Shanghai, 200127, China
- Mingyuan Liu
- Department of Neurology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, 110 Ganhe Road, Shanghai, 200437, China
- Yan Han
- Department of Neurology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, 110 Ganhe Road, Shanghai, 200437, China.
8
Ding L, Zhao X, Guo S, Liu Y, Liu L, Wang Y, Peng H. SNAP: a structure-based neuron morphology reconstruction automatic pruning pipeline. Front Neuroinform 2023; 17:1174049. [PMID: 37388757] [PMCID: PMC10303825] [DOI: 10.3389/fninf.2023.1174049]
Abstract
Background Neuron morphology analysis is an essential component of neuron cell-type definition. Morphology reconstruction represents a bottleneck in high-throughput morphology analysis workflow, and erroneous extra reconstruction owing to noise and entanglements in dense neuron regions restricts the usability of automated reconstruction results. We propose SNAP, a structure-based neuron morphology reconstruction pruning pipeline, to improve the usability of results by reducing erroneous extra reconstruction and splitting entangled neurons. Methods For the four different types of erroneous extra segments in reconstruction (caused by noise in the background, entanglement with dendrites of close-by neurons, entanglement with axons of other neurons, and entanglement within the same neuron), SNAP incorporates specific statistical structure information into rules for erroneous extra segment detection and achieves pruning and multiple dendrite splitting. Results Experimental results show that this pipeline accomplishes pruning with satisfactory precision and recall. It also demonstrates good multiple neuron-splitting performance. As an effective tool for post-processing reconstruction, SNAP can facilitate neuron morphology analysis.
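The structure-based pruning that SNAP performs can be illustrated, in highly simplified form, by dropping terminal segments shorter than a length threshold; the actual pipeline uses much richer statistical structure rules, and all names and values below are hypothetical:

```python
def prune_short_terminals(children, seg_length, min_len):
    """children: node -> list of child nodes; seg_length: node -> length of
    the segment ending at that node. Drop leaf segments shorter than min_len,
    a crude stand-in for rule-based erroneous-segment detection."""
    leaves = [n for n in seg_length if not children.get(n)]
    pruned = {n for n in leaves if seg_length[n] < min_len}
    return {n: l for n, l in seg_length.items() if n not in pruned}

# node 1 branches into nodes 2 and 3; the 1.2-unit stub ending at node 2 is pruned
kept = prune_short_terminals({1: [2, 3]}, {1: 10.0, 2: 1.2, 3: 8.0}, min_len=2.0)
```

Restricting pruning to terminal segments is what keeps such a rule safe: interior segments, whose removal would disconnect the tree, are never touched.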
Affiliation(s)
- Liya Ding
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Xuan Zhao
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Shuxia Guo
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Yufeng Liu
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lijuan Liu
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Yimin Wang
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Guangdong Institute of Intelligence Science and Technology, Zhuhai, China
- Hanchuan Peng
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
9
Ma R, Hao L, Tao Y, Mendoza X, Khodeiry M, Liu Y, Shyu ML, Lee RK. RGC-Net: An Automatic Reconstruction and Quantification Algorithm for Retinal Ganglion Cells Based on Deep Learning. Transl Vis Sci Technol 2023; 12:7. [PMID: 37140906] [PMCID: PMC10166122] [DOI: 10.1167/tvst.12.5.7]
Abstract
Purpose The purpose of this study was to develop a deep learning-based, fully automated reconstruction and quantification algorithm that delineates the neurites and somas of retinal ganglion cells (RGCs). Methods We trained a deep learning-based multi-task image segmentation model, RGC-Net, that automatically segments the neurites and somas in RGC images. A total of 166 RGC scans with manual annotations from human experts were used to develop this model: 132 scans for training and the remaining 34 reserved as testing data. Post-processing techniques removed speckles or dead cells from the soma segmentation results to further improve the robustness of the model. Quantification analyses were also conducted to compare five different metrics obtained by our automated algorithm and by manual annotation. Results Quantitatively, our segmentation model achieves average foreground accuracy, background accuracy, overall accuracy, and Dice similarity coefficient of 0.692, 0.999, 0.997, and 0.691 for the neurite segmentation task, and 0.865, 0.999, 0.997, and 0.850 for the soma segmentation task, respectively. Conclusions The experimental results demonstrate that RGC-Net can accurately and reliably reconstruct neurites and somas in RGC images. We also demonstrate that our algorithm is comparable to human manually curated annotations in quantification analyses. Translational Relevance Our deep learning model provides a new tool that can trace and analyze RGC neurites and somas efficiently and faster than manual analysis.
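The Dice similarity coefficient and foreground accuracy reported above compare a predicted binary mask against the ground truth. A minimal sketch of the two metrics (hypothetical helper names, not the paper's evaluation code):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|P∩G| / (|P| + |G|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def foreground_accuracy(pred, gt):
    """Fraction of ground-truth foreground pixels the prediction recovers."""
    fg = gt.astype(bool)
    return np.logical_and(pred, fg).sum() / fg.sum()

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
# both metrics equal 2/3 for this toy pair of masks
```

Note why background accuracy is near 0.999 in such tasks: background pixels vastly outnumber foreground, so Dice and foreground accuracy are the more informative numbers.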
Affiliation(s)
- Rui Ma
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Lili Hao
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Department of Ophthalmology, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Yudong Tao
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Ximena Mendoza
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Mohamed Khodeiry
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Yuan Liu
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Mei-Ling Shyu
- School of Science and Engineering, University of Missouri-Kansas City, Kansas City, MO, USA
- Richard K. Lee
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
10
Liu Y, Zhong Y, Zhao X, Liu L, Ding L, Peng H. Tracing weak neuron fibers. Bioinformatics 2022; 39:6960919. [PMID: 36571479] [PMCID: PMC9848051] [DOI: 10.1093/bioinformatics/btac816]
Abstract
MOTIVATION Precise reconstruction of neuronal arbors is important for circuitry mapping. Many auto-tracing algorithms have been developed toward full reconstruction. However, it is still challenging to trace the weak signals of neurite fibers that often correspond to axons. RESULTS We propose a method, named NeuMiner, for tracing weak fibers by combining two strategies: an online sample mining strategy and a modified gamma transformation. NeuMiner improved the recall of weak signals (voxel values <20) by a large margin, from 5.1% to 27.8%. The gain is most prominent for axons, whose recall increased 6.4-fold, compared with 2.0-fold for dendrites. Both strategies were shown to be beneficial for weak fiber recognition, reducing the average axonal spatial distances to gold standards by 46% and 13%, respectively. The improvement was observed with two prevalent automatic tracing algorithms and can be applied to any other tracers and image types. AVAILABILITY AND IMPLEMENTATION Source code of NeuMiner is freely available on GitHub (https://github.com/crazylyf/neuronet/tree/semantic_fnm). Image visualization, preprocessing and tracing are conducted on the Vaa3D platform, which is accessible at the Vaa3D GitHub repository (https://github.com/Vaa3D). All training and testing images are cropped from high-resolution fMOST mouse brains downloaded from the Brain Image Library (https://www.brainimagelibrary.org/), and the corresponding gold standards are available at https://doi.brainimagelibrary.org/doi/10.35077/g.25. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
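The modified gamma transformation is one of NeuMiner's two strategies; the paper's specific modification is not reproduced here, but a plain gamma correction, which lifts dim voxels when gamma < 1, can be sketched as follows (parameter values are illustrative only):

```python
import numpy as np

def gamma_transform(stack, gamma=0.5, vmax=255.0):
    """Normalize voxel intensities to [0, 1], raise to the power gamma,
    and rescale. gamma < 1 boosts weak signals relative to bright ones."""
    norm = np.clip(stack / vmax, 0.0, 1.0)
    return (norm ** gamma) * vmax

voxels = np.array([4.0, 16.0, 64.0, 255.0])  # dim axon values and a bright soma
enhanced = gamma_transform(voxels, gamma=0.5)
# weak voxels rise well above the <20 threshold, e.g. 16 -> ~63.9
```

This is why the transform helps recall on weak axonal fibers: after enhancement, previously sub-threshold voxels become visible to the downstream tracer without saturating already-bright structures.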
Affiliation(s)
- Yufeng Liu
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Ye Zhong
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Xuan Zhao
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Lijuan Liu
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Liya Ding
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
11
Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022; 38:5329-5339. [PMID: 36303315] [PMCID: PMC9750132] [DOI: 10.1093/bioinformatics/btac712]
Abstract
MOTIVATION Large-scale neuronal morphologies are essential to neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology. Although survey papers on neuron tracing from light microscopy data have appeared over the last decade, the field's rapid development calls for an updated review focusing on new methods and remarkable applications. RESULTS This review outlines neuron tracing in various scenarios with the goal of helping the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances in the increasingly popular deep-learning-enhanced methods. We highlight the semi-automatic methods for single-neuron tracing of mammalian whole brains, as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we exemplify the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli
- Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou
- Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lijuan Liu
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
12
Liu C, Wang D, Zhang H, Wu W, Sun W, Zhao T, Zheng N. Using Simulated Training Data of Voxel-Level Generative Models to Improve 3D Neuron Reconstruction. IEEE Trans Med Imaging 2022; 41:3624-3635. [PMID: 35834465] [DOI: 10.1109/tmi.2022.3191011]
Abstract
Reconstructing neuron morphologies from fluorescence microscope images plays a critical role in neuroscience studies. It relies on image segmentation to produce initial masks, either for further processing or as final results representing neuronal morphologies. This has been a challenging step due to the variation and complexity of noisy intensity patterns in neuron images acquired from microscopes. While progress in deep learning has brought the goal of accurate segmentation much closer to reality, creating training data for powerful neural networks is often laborious. To overcome the difficulty of obtaining a vast number of annotated examples, we propose a novel strategy of using two-stage generative models to simulate training data with voxel-level labels. Trained on unlabeled data by optimizing a novel objective function that preserves predefined labels, the models are able to synthesize realistic 3D images with underlying voxel labels. We showed that these synthetic images could train segmentation networks to obtain even better performance than manually labeled data. To demonstrate an immediate impact of our work, we further showed that segmentation results produced by networks trained on synthetic data could be used to improve existing neuron reconstruction methods.
Collapse
|
13
|
Zhou H, Cao T, Liu T, Liu S, Chen L, Chen Y, Huang Q, Ye W, Zeng S, Quan T. Super-resolution Segmentation Network for Reconstruction of Packed Neurites. Neuroinformatics 2022; 20:1155-1167. [PMID: 35851944 DOI: 10.1007/s12021-022-09594-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/05/2022] [Indexed: 12/31/2022]
Abstract
Neuron reconstruction provides the quantitative data required for measuring neuronal morphology and is crucial in brain research. However, the difficulty of reconstructing dense neurites, which in most cases demands massive manual labor for accurate reconstruction, has not been well resolved. In this work, we provide a new pathway for solving this challenge by proposing the super-resolution segmentation network (SRSNet), which maps the neurites in the original neuronal images to their segmentation in a higher-resolution (HR) space. During the segmentation process, the distances between the boundaries of the packed neurites are enlarged, and only the central parts of the neurites are segmented. Owing to this strategy, super-resolution segmented images are produced for subsequent reconstruction. We carried out experiments on neuronal images with a voxel size of 0.2 μm × 0.2 μm × 1 μm produced by fMOST. SRSNet achieves an average F1 score of 0.88 for automatic packed-neurite reconstruction, taking both precision and recall into account, while the average F1 scores of other state-of-the-art automatic tracing methods are below 0.70.
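The F1 score cited above is the harmonic mean of precision and recall. As a quick reference, a minimal helper (illustrative only, not taken from the paper's code):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; defined as 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# For example, precision 0.90 with recall 0.86 yields an F1 of about 0.88,
# in the range reported for SRSNet.
```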
Collapse
Affiliation(s)
- Hang Zhou
- School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, China
| | - Tingting Cao
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Tian Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Shijie Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Lu Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Yijun Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Qing Huang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Wei Ye
- School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
| | - Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China.
| |
Collapse
|
14
|
Chen W, Liu M, Du H, Radojevic M, Wang Y, Meijering E. Deep-Learning-Based Automated Neuron Reconstruction From 3D Microscopy Images Using Synthetic Training Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1031-1042. [PMID: 34847022 DOI: 10.1109/tmi.2021.3130934] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Digital reconstruction of neuronal structures from 3D microscopy images is critical for the quantitative investigation of brain circuits and functions. It is a challenging task that would greatly benefit from automatic neuron reconstruction methods. In this paper, we propose a novel method called SPE-DNR that combines spherical-patches extraction (SPE) and deep-learning for neuron reconstruction (DNR). Based on 2D Convolutional Neural Networks (CNNs) and the intensity distribution features extracted by SPE, it determines the tracing directions and classifies voxels into foreground or background. This way, starting from a set of seed points, it automatically traces the neurite centerlines and determines when to stop tracing. To avoid errors caused by imperfect manual reconstructions, we develop an image synthesizing scheme to generate synthetic training images with exact reconstructions. This scheme simulates 3D microscopy imaging conditions as well as structural defects, such as gaps and abrupt radii changes, to improve the visual realism of the synthetic images. To demonstrate the applicability and generalizability of SPE-DNR, we test it on 67 real 3D neuron microscopy images from three datasets. The experimental results show that the proposed SPE-DNR method is robust and competitive compared with other state-of-the-art neuron reconstruction methods.
Collapse
|
15
|
Huang Q, Cao T, Zeng S, Li A, Quan T. Minimizing probability graph connectivity cost for discontinuous filamentary structures tracing in neuron image. IEEE J Biomed Health Inform 2022; 26:3092-3103. [PMID: 35104232 DOI: 10.1109/jbhi.2022.3147512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Neuron tracing from optical images is critical to understanding brain function in disease. A key problem is tracing discontinuous filamentary structures against a noisy background, which is commonly encountered in neuronal and some medical images. Broken traces lead to cumulative topological errors, and current methods struggle to assemble the various fragmentary traces into correct connections. In this paper, we propose a graph-connectivity-based method for precise filamentary structure tracing in neuron images. First, we build initial subgraphs of signals via a region-to-region tracing method on a CNN-predicted probability map. The CNN removes noise interference, although its predictions for some elongated fragments remain incomplete. Second, we reformulate the global connection of individual or fragmented subgraphs under heuristic graph restrictions as a dynamic linear programming problem that minimizes graph connectivity cost, where the connection cost between breakpoints is calculated from their probability strength via a minimum cost path. Experimental results on challenging neuronal images show that the proposed method outperforms existing methods and achieves results similar to manual tracing, even in complex discontinuity cases. Performance on vessel images indicates the method's potential for tracing other tubular objects.
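The minimum-cost-path idea behind the breakpoint connection cost can be illustrated with Dijkstra's algorithm on a probability map. The sketch below uses a per-voxel cost of 1 − p on a 2D map; the paper's exact cost formulation and graph restrictions are not reproduced here, so treat this as an assumption-laden toy.

```python
import heapq

def min_cost_path(prob, start, goal):
    """Dijkstra over a 2D probability map (nested lists).

    Stepping onto a voxel costs (1 - p), so paths through high-probability
    signal are cheap; returns the minimal total cost from start to goal.
    """
    rows, cols = len(prob), len(prob[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + (1.0 - prob[nr][nc])
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")
```

On a map with a faint gap, the search prefers a detour through strong signal over a direct jump across low-probability voxels, which is the intuition behind scoring candidate breakpoint connections by path cost.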
Collapse
|
16
|
Liu Y, Foustoukos G, Crochet S, Petersen CC. Axonal and Dendritic Morphology of Excitatory Neurons in Layer 2/3 Mouse Barrel Cortex Imaged Through Whole-Brain Two-Photon Tomography and Registered to a Digital Brain Atlas. Front Neuroanat 2022; 15:791015. [PMID: 35145380 PMCID: PMC8821665 DOI: 10.3389/fnana.2021.791015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2021] [Accepted: 12/22/2021] [Indexed: 11/17/2022] Open
Abstract
Communication between cortical areas contributes importantly to sensory perception and cognition. On the millisecond time scale, information is signaled from one brain area to another by action potentials propagating across long-range axonal arborizations. Here, we develop and test methodology for imaging and annotating the brain-wide axonal arborizations of individual excitatory layer 2/3 neurons in mouse barrel cortex through single-cell electroporation and two-photon serial section tomography followed by registration to a digital brain atlas. Each neuron had an extensive local axon within the barrel cortex. In addition, individual neurons innervated subsets of secondary somatosensory cortex; primary somatosensory cortex for upper limb, trunk, and lower limb; primary and secondary motor cortex; visual and auditory cortical regions; dorsolateral striatum; and various fiber bundles. In the future, it will be important to assess if the diversity of axonal projections across individual layer 2/3 mouse barrel cortex neurons is accompanied by functional differences in their activity patterns.
Collapse
Affiliation(s)
| | | | | | - Carl C.H. Petersen
- Laboratory of Sensory Processing, Brain Mind Institute, Faculty of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| |
Collapse
|
17
|
Liu S, Huang Q, Quan T, Zeng S, Li H. Foreground Estimation in Neuronal Images With a Sparse-Smooth Model for Robust Quantification. Front Neuroanat 2021; 15:716718. [PMID: 34764857 PMCID: PMC8576439 DOI: 10.3389/fnana.2021.716718] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Accepted: 10/04/2021] [Indexed: 11/13/2022] Open
Abstract
3D volume imaging has been regarded as a basic tool to explore the organization and function of the neuronal system. Foreground estimation from neuronal images is essential for quantification and analysis tasks such as soma counting, neurite tracing and neuron reconstruction. However, the complexity of neuronal structure itself and differences in the imaging procedure, including different optical systems and biological labeling methods, result in varied and complex neuronal images, which greatly challenge foreground estimation. In this study, we propose a robust sparse-smooth model (RSSM) to separate the foreground and the background of neuronal images. The model combines the different smoothness levels of the foreground and the background with the sparsity of the foreground. Together, these prior constraints contribute to robust foreground estimation from a variety of neuronal images. We demonstrate that the proposed RSSM method enables several of the best available tools to trace neurites or locate somas from neuronal images with their default parameters, with quantified results similar or superior to those generated from the original images. The proposed method proves robust for foreground estimation across different neuronal images and helps improve the usability of current quantitative tools in several applications.
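As a rough intuition for the smoothness-plus-sparsity prior, the toy below models the background as a locally smooth moving average and keeps the sparse positive residual as foreground. It is a crude 1D stand-in, not the RSSM optimization itself; the function name and windowed-mean background are assumptions for illustration.

```python
def estimate_foreground(signal, window):
    """Toy foreground estimate for a 1D signal.

    Background = centered moving average (locally smooth component);
    foreground = the positive residual above it (sparse component).
    """
    n = len(signal)
    half = window // 2
    foreground = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        background = sum(signal[lo:hi]) / (hi - lo)
        foreground.append(max(0.0, signal[i] - background))
    return foreground
```

An isolated bright spike survives the subtraction almost intact, while slowly varying background is suppressed to zero, which is the behavior the sparse-smooth decomposition formalizes.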
Collapse
Affiliation(s)
- Shijie Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Qing Huang
- School of Computer Science and Engineering/Artificial Intelligence, Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan, China
| | - Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Hongwei Li
- School of Mathematics and Physics, China University of Geosciences, Wuhan, China
| |
Collapse
|
18
|
Chen X, Zhang C, Zhao J, Xiong Z, Zha ZJ, Wu F. Weakly Supervised Neuron Reconstruction From Optical Microscopy Images With Morphological Priors. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3205-3216. [PMID: 33999814 DOI: 10.1109/tmi.2021.3080695] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Manually labeling neurons from high-resolution but noisy and low-contrast optical microscopy (OM) images is tedious. As a result, the lack of annotated data poses a key challenge when applying deep learning techniques for reconstructing neurons from noisy and low-contrast OM images. While traditional tracing methods provide a possible way to efficiently generate labels for supervised network training, the generated pseudo-labels contain many noisy and incorrect labels, which lead to severe performance degradation. On the other hand, the publicly available dataset, BigNeuron, provides a large number of single 3D neurons that are reconstructed using various imaging paradigms and tracing methods. Though the raw OM images are not fully available for these neurons, they convey essential morphological priors for complex 3D neuron structures. In this paper, we propose a new approach to exploit morphological priors from neurons that have been reconstructed for training a deep neural network to extract neuron signals from OM images. We integrate a deep segmentation network in a generative adversarial network (GAN), expecting the segmentation network to be weakly supervised by pseudo-labels at the pixel level while utilizing the supervision of previously reconstructed neurons at the morphology level. In our morphological-prior-guided neuron reconstruction GAN, named MP-NRGAN, the segmentation network extracts neuron signals from raw images, and the discriminator network encourages the extracted neurons to follow the morphology distribution of reconstructed neurons. Comprehensive experiments on the public VISoR-40 dataset and BigNeuron dataset demonstrate that our proposed MP-NRGAN outperforms state-of-the-art approaches with less training effort.
Collapse
|
19
|
Li Q, Shen L. Neuron segmentation using 3D wavelet integrated encoder-decoder network. Bioinformatics 2021; 38:809-817. [PMID: 34647994 PMCID: PMC8756182 DOI: 10.1093/bioinformatics/btab716] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 09/13/2021] [Accepted: 10/12/2021] [Indexed: 02/03/2023] Open
Abstract
MOTIVATION 3D neuron segmentation is a key step in digital neuron reconstruction, which is essential for exploring brain circuits and understanding brain functions. However, the fine, line-shaped nerve fibers of a neuron can spread over a large region, which brings great computational cost to neuron segmentation. Meanwhile, strong noise and disconnected nerve fibers pose great challenges to the task. RESULTS In this article, we propose a 3D wavelet- and deep-learning-based neuron segmentation method. The neuronal image is first partitioned into cubes to simplify the segmentation task. Then, we design 3D WaveUNet, the first 3D wavelet-integrated encoder-decoder network, to segment the nerve fibers in the cubes; the wavelets assist the deep networks in suppressing data noise and connecting broken fibers. We also produce a Neuronal Cube Dataset (NeuCuDa) using the largest available annotated neuronal image dataset, BigNeuron, to train 3D WaveUNet. Finally, the nerve fibers segmented in cubes are assembled to generate the complete neuron, which is digitally reconstructed using an available automatic tracing algorithm. The experimental results show that our neuron segmentation method can completely extract the target neuron from noisy neuronal images. The integrated 3D wavelets efficiently improve the performance of 3D neuron segmentation and reconstruction. AVAILABILITY AND IMPLEMENTATION The data and codes for this work are available at https://github.com/LiQiufu/3D-WaveUNet. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Collapse
Affiliation(s)
- Qiufu Li
- Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China; AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen 518060, China; Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
| | | |
Collapse
|
20
|
Huang Q, Cao T, Chen Y, Li A, Zeng S, Quan T. Automated Neuron Tracing Using Content-Aware Adaptive Voxel Scooping on CNN Predicted Probability Map. Front Neuroanat 2021; 15:712842. [PMID: 34497493 PMCID: PMC8419427 DOI: 10.3389/fnana.2021.712842] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Accepted: 07/29/2021] [Indexed: 11/23/2022] Open
Abstract
Neuron tracing, an essential step in building neural circuits and analyzing brain information flow, plays an important role in the understanding of brain organization and function. Although many methods have been proposed, automatic and accurate neuron tracing from optical images remains challenging. Current methods often have trouble tracing complex, distorted tree-like structures and broken neurite segments against a noisy background. To address these issues, we propose a method for accurate neuron tracing using content-aware adaptive voxel scooping on a convolutional neural network (CNN) predicted probability map. First, a 3D residual CNN was applied as preprocessing to predict the object probability and suppress high noise. Then, instead of tracing on the binary image produced by maximum classification, an adaptive voxel scooping method was presented for successive neurite tracing on the probability map, based on the internal content properties (distance, connectivity, and probability continuity along direction) of the neurite. Last, the neuron tree graph was built using the length-first criterion. The proposed method was evaluated on the public BigNeuron datasets and fluorescence micro-optical sectioning tomography (fMOST) datasets and outperformed current state-of-the-art methods on images with neurites that had broken parts and complex structures. The high-accuracy tracing demonstrates the potential of the proposed method for neuron tracing at large scale.
Collapse
Affiliation(s)
- Qing Huang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Tingting Cao
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Yijun Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
21
|
Li Q, Zhang Y, Liang H, Gong H, Jiang L, Liu Q, Shen L. Deep learning based neuronal soma detection and counting for Alzheimer's disease analysis. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 203:106023. [PMID: 33744751 DOI: 10.1016/j.cmpb.2021.106023] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Accepted: 02/21/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Alzheimer's Disease (AD) is associated with neuronal damage and loss. Micro-Optical Sectioning Tomography (MOST) provides an approach to acquire high-resolution images for neuron analysis across the whole brain. Applying this technique to AD mouse brains enables us to investigate neuronal changes during the progression of AD pathology. However, handling the huge amount of data becomes the bottleneck. METHODS Using MOST technology, we acquired 3D whole-brain images of six AD mice and sampled the imaging data of four regions in each mouse brain for AD progression analysis. To count neurons, we proposed a deep-learning-based method that detects neuronal somata in the neuronal images. In our method, the neuronal images are first cut into small cubes; a Convolutional Neural Network (CNN) classifier then detects neuronal somata by classifying the cubes into three categories, "soma", "fiber", and "background". RESULTS Compared with the manual method and the currently available NeuroGPS software, our method demonstrates faster speed and higher accuracy in identifying neurons from MOST images. By applying our method to various brain regions of 6-month-old and 12-month-old AD mice, we found that the number of neurons in three brain regions (lateral entorhinal cortex, medial entorhinal cortex, and presubiculum) decreased slightly with age, consistent with previously reported experimental results. CONCLUSION This paper provides a new method to automatically handle huge amounts of data and accurately identify neuronal somata in MOST images. It also offers the potential to construct a whole-brain neuron projection to reveal the impact of AD pathology on the mouse brain.
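The cube-cutting step described in METHODS can be sketched as below. The function name, the non-overlapping tiling, and dropping edge remainders are illustrative assumptions; the CNN that labels each cube as soma/fiber/background is omitted.

```python
def cut_into_cubes(volume, size):
    """Split a 3D volume (nested lists in z-y-x order) into non-overlapping
    cubes of edge length `size`. Edge remainders are dropped for simplicity.
    """
    z, y, x = len(volume), len(volume[0]), len(volume[0][0])
    cubes = []
    for i in range(0, z - size + 1, size):
        for j in range(0, y - size + 1, size):
            for k in range(0, x - size + 1, size):
                # Crop a size^3 sub-block; each cube would then be fed
                # to the three-class (soma/fiber/background) classifier.
                cube = [[row[k:k + size]
                         for row in volume[i + dz][j:j + size]]
                        for dz in range(size)]
                cubes.append(cube)
    return cubes
```

A 4 × 4 × 4 volume with `size=2` yields eight 2 × 2 × 2 cubes, so soma counting reduces to counting cubes classified as "soma" (after merging detections that span neighboring cubes).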
Collapse
Affiliation(s)
- Qiufu Li
- Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong, 518060, China; AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen 518060, China; Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
| | - Yu Zhang
- College of Life Sciences and Oceanography, Shenzhen University, Shenzhen, Guangdong, 518055, China
| | - Hanbang Liang
- Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong, 518060, China; AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen 518060, China; Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
| | - Hui Gong
- National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan 430074, China
| | - Liang Jiang
- College of Life Sciences and Oceanography, Shenzhen University, Shenzhen, Guangdong, 518055, China.
| | - Qiong Liu
- College of Life Sciences and Oceanography, Shenzhen University, Shenzhen, Guangdong, 518055, China; Shenzhen Bay Laboratory, Shenzhen, 518055, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China.
| | - Linlin Shen
- Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong, 518060, China; AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen 518060, China; Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China.
| |
Collapse
|