1. Liu M, Wu S, Chen R, Lin Z, Wang Y, Meijering E. Brain Image Segmentation for Ultrascale Neuron Reconstruction via an Adaptive Dual-Task Learning Network. IEEE Trans Med Imaging 2024; 43:2574-2586. [PMID: 38373129 DOI: 10.1109/tmi.2024.3367384]
Abstract
Accurate morphological reconstruction of neurons in whole-brain images is critical for brain science research. However, due to the wide field of view of whole-brain imaging, uneven staining, and optical system fluctuations, image properties differ significantly between regions of an ultrascale brain image, with dramatically varying voxel intensities and an inhomogeneous distribution of background noise, posing an enormous challenge to neuron reconstruction from whole-brain images. In this paper, we propose an adaptive dual-task learning network (ADTL-Net) to quickly and accurately extract neuronal structures from ultrascale brain images. Specifically, this framework includes an External Features Classifier (EFC) and a Parameter Adaptive Segmentation Decoder (PASD), which share the same Multi-Scale Feature Encoder (MSFE). MSFE introduces an attention module named the Channel Space Fusion Module (CSFM) to extract structure and intensity-distribution features of neurons at different scales, addressing the problem of anisotropy in 3D space. EFC then classifies these feature maps based on external features, such as foreground intensity distributions and image smoothness, and selects a specific set of PASD parameters to decode each class into accurate segmentation results. PASD contains multiple sets of parameters, each trained on representative image blocks with complex signal-to-noise distributions, to handle varied images more robustly. Experimental results show that, compared with other advanced segmentation methods for neuron reconstruction, the proposed method achieves state-of-the-art results in the task of neuron reconstruction from ultrascale brain images, with improvements of about 49% in speed and 12% in F1 score.
2. Zeng Y, Wang Y. Complete Neuron Reconstruction Based on Branch Confidence. Brain Sci 2024; 14:396. [PMID: 38672045 PMCID: PMC11047972 DOI: 10.3390/brainsci14040396]
Abstract
In the past few years, significant advancements in microscopic imaging technology have led to the production of numerous high-resolution images capturing brain neurons at the micrometer scale. The structure of neurons reconstructed from these images can serve as a valuable reference for research on brain diseases and neuroscience. Currently, no method for neuron reconstruction is both accurate and efficient. Manual reconstruction remains the primary approach, offering high accuracy but requiring a significant time investment, while faster automatic reconstruction methods often sacrifice accuracy and cannot be relied upon directly. The primary goal of this paper is therefore to develop a neuron reconstruction tool that is both efficient and accurate. The tool aids users in reconstructing complete neurons by calculating the confidence of branches during the reconstruction process. The method models neuron reconstruction as multiple Markov chains and calculates the confidence of the connections between branches by simulating the reconstruction artifacts in the results. Users iteratively modify low-confidence branches to ensure precise and efficient neuron reconstruction. Experiments on both the publicly accessible BigNeuron dataset and a self-created Whole-Brain dataset demonstrate that the tool achieves accuracy similar to manual reconstruction while significantly reducing reconstruction time.
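The branch-confidence idea — treating a chain of branch connections as a Markov chain and flagging low-confidence ones for the user to fix — can be illustrated with a minimal sketch. The product-of-probabilities confidence and the threshold below are assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch: confidence of a branch chain under a Markov chain
# assumption, and selection of branches a user should inspect.

def path_confidence(transition_probs):
    """Confidence of a chain of branch connections as the product of its
    step-wise connection probabilities (Markov chain assumption)."""
    conf = 1.0
    for p in transition_probs:
        conf *= p
    return conf

def flag_low_confidence(branches, threshold=0.5):
    """Return the ids of branches whose chained confidence falls below
    the threshold, i.e. the ones to present for manual correction."""
    return [bid for bid, probs in branches.items()
            if path_confidence(probs) < threshold]
```

Because confidence multiplies along the chain, even moderately uncertain individual connections quickly push a long branch below the review threshold, which matches the tool's iterative inspect-and-fix workflow.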
Affiliation(s)
- Ying Zeng: School of Computer Science and Technology, Shanghai University, Shanghai 200444, China; Guangdong Institute of Intelligence Science and Technology, Zhuhai 519031, China
- Yimin Wang: Guangdong Institute of Intelligence Science and Technology, Zhuhai 519031, China
3. Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023; 167:107617. [PMID: 37918261 DOI: 10.1016/j.compbiomed.2023.107617]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information that can help us understand the working mechanisms of the brain. However, processing and analyzing these data is challenging because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in cells and tissues, and imaging artifacts. Owing to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges, achieving superior performance in a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China; Research Institute of Hunan University in Chongqing, Chongqing, 401135, China
- Weixun Chen: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering: School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
4. Song J, Lian Z, Xiao L. Deep Open-Curve Snake for Discriminative 3D Neuron Tracking. IEEE J Biomed Health Inform 2023; 27:5815-5826. [PMID: 37773913 DOI: 10.1109/jbhi.2023.3320804]
Abstract
Open-Curve Snake (OCS) has been used successfully for three-dimensional tracking of neurites. However, it struggles with noise-contaminated, weak filament signals in real-world applications. In addition, its tracking results are highly sensitive to the initial seeds and depend only on image gradient-derived forces. To address these issues and lift the canonical OCS tracker into the realm of learnable deep learning algorithms, we present Deep Open-Curve Snake (DOCS), a novel discriminative 3D neuron tracking framework that simultaneously learns a 3D distance-regression discriminator and a 3D deeply learned tracker under a common energy minimization, allowing the two to reinforce each other. In particular, the open-curve tracking process in DOCS is formulated as convolutional neural network predictions of new deformation fields, stretching directions, and local radii, iteratively updated by minimizing a tractable energy function containing fitting forces and curve length. By sharing the same deep learning architectures in an end-to-end trainable framework, DOCS can fully exploit the information available in volumetric neuronal data to address segmentation, tracing, and reconstruction of complete neuron structures in the wild. We demonstrate the superiority of DOCS on the BigNeuron and DIADEM datasets, where it consistently achieved state-of-the-art performance against current neuron tracing and tracking approaches, improving the average overlap score by about 1.7% and the distance score by about 17% on the BigNeuron challenge dataset, and the average overlap score by about 4.1% on the DIADEM dataset.
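The energy-minimizing curve update at the heart of any open-curve snake can be sketched as iterative descent under an external (image-derived) force plus a curvature term that penalizes curve length. This is a classical OCS-style update for illustration, not DOCS's learned CNN predictor; the zero-force test case, step size, and fixed endpoints are simplifying assumptions.

```python
# Minimal classical open-curve snake iteration: each interior point moves
# under an external force plus a second-difference (curvature) smoothing
# term. Endpoints are held fixed for simplicity.

def evolve_snake(points, external_force, alpha=0.2, steps=50):
    """Evolve an open polyline of [x, y] points. external_force(p) returns
    an (fx, fy) pull at point p; the curvature term shortens the curve."""
    pts = [list(p) for p in points]
    for _ in range(steps):
        new_pts = [pts[0]] + [None] * (len(pts) - 2) + [pts[-1]]
        for i in range(1, len(pts) - 1):
            fx, fy = external_force(pts[i])
            # second difference pulls the point toward its neighbors
            cx = pts[i - 1][0] - 2 * pts[i][0] + pts[i + 1][0]
            cy = pts[i - 1][1] - 2 * pts[i][1] + pts[i + 1][1]
            new_pts[i] = [pts[i][0] + alpha * (fx + cx),
                          pts[i][1] + alpha * (fy + cy)]
        pts = new_pts
    return pts
```

With zero external force the length penalty dominates and a kinked curve straightens out; DOCS replaces this hand-crafted force with CNN-predicted deformation fields while keeping the same energy-minimization structure.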
5. Li Z, Shang Z, Liu J, Zhen H, Zhu E, Zhong S, Sturgess RN, Zhou Y, Hu X, Zhao X, Wu Y, Li P, Lin R, Ren J. D-LMBmap: a fully automated deep-learning pipeline for whole-brain profiling of neural circuitry. Nat Methods 2023; 20:1593-1604. [PMID: 37770711 PMCID: PMC10555838 DOI: 10.1038/s41592-023-01998-6]
Abstract
The recent proliferation and integration of tissue-clearing methods and light-sheet fluorescence microscopy have created new opportunities for mesoscale three-dimensional whole-brain connectivity mapping with exceptionally high throughput. With the rapid generation of large, high-quality imaging datasets, downstream analysis is becoming the major technical bottleneck for mesoscale connectomics. Current computational solutions are labor intensive and have limited applicability because of their exhaustive manual annotation and heavily customized training, and whole-brain data analysis typically requires combining multiple packages and secondary development by users. To address these challenges, we developed D-LMBmap, an end-to-end package providing an integrated workflow with three deep-learning-based modules for whole-brain connectivity mapping: axon segmentation, brain region segmentation, and whole-brain registration. D-LMBmap does not require manual annotation for axon segmentation and achieves quantitative analysis of the whole-brain projectome in a single workflow, with superior accuracy for multiple cell types in all of the modalities tested.
Affiliation(s)
- Zhongyu Li: Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK
- Zengyi Shang: School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Jingyi Liu: School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Haotian Zhen: School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Entao Zhu: School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Shilin Zhong: National Institute of Biological Sciences (NIBS), Beijing, China
- Robyn N Sturgess: Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK
- Yitian Zhou: School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Xuemeng Hu: School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Xingyue Zhao: School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Yi Wu: School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Peiqi Li: School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Rui Lin: National Institute of Biological Sciences (NIBS), Beijing, China
- Jing Ren: Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK
6. Cudic M, Diamond JS, Noble JA. Unpaired mesh-to-image translation for 3D fluorescent microscopy images of neurons. Med Image Anal 2023; 86:102768. [PMID: 36857945 DOI: 10.1016/j.media.2023.102768]
Abstract
While Generative Adversarial Networks (GANs) can now reliably produce realistic images in a multitude of imaging domains, they are ill-equipped to model the thin, stochastic textures present in many large 3D fluorescent microscopy (FM) images acquired in biological research. This is especially problematic in neuroscience, where the lack of ground-truth data impedes the development of automated image analysis algorithms for neurons and neural populations. We therefore propose an unpaired mesh-to-image translation methodology for generating volumetric FM images of neurons together with their paired ground truths. We start by learning unique FM styles efficiently through a Gramian-based discriminator. Then, we stylize 3D voxelized meshes of previously reconstructed neurons by successively generating slices. As a result, we effectively create a synthetic microscope that can acquire realistic FM images of neurons with control over the image content and imaging configuration. We demonstrate the feasibility of our architecture and its superior performance compared with state-of-the-art image translation architectures through a variety of texture-based metrics, unsupervised segmentation accuracy, and an expert opinion test. In this study, we use two synthetic FM datasets and two newly acquired FM datasets of retinal neurons.
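The Gramian-based discriminator builds on the standard Gram-matrix texture descriptor from neural style transfer: channel-by-channel inner products of feature maps capture texture statistics while discarding spatial layout. Below is a minimal sketch of that descriptor on flattened feature channels; it is the generic construction, not the paper's discriminator network.

```python
# Gram (Gramian) matrix of flattened feature channels: G[i][j] is the
# inner product between channels i and j. Matching Gram matrices is the
# classic way to compare textures/styles independent of spatial layout.

def gram_matrix(feature_maps):
    """feature_maps: list of equal-length flattened channels (lists of
    floats). Returns the symmetric n x n Gram matrix as nested lists."""
    n = len(feature_maps)
    return [[sum(a * b for a, b in zip(feature_maps[i], feature_maps[j]))
             for j in range(n)] for i in range(n)]
```

A discriminator comparing Gram matrices of real and generated slices penalizes texture mismatch rather than pixel-wise differences, which suits the thin, stochastic textures the abstract highlights.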
Affiliation(s)
- Mihael Cudic: National Institutes of Health Oxford-Cambridge Scholars Program, USA; National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA; Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
- Jeffrey S Diamond: National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA
- J Alison Noble: Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
7. Han T, Wu J, Luo W, Wang H, Jin Z, Qu L. Review of Generative Adversarial Networks in mono- and cross-modal biomedical image registration. Front Neuroinform 2022; 16:933230. [PMID: 36483313 PMCID: PMC9724825 DOI: 10.3389/fninf.2022.933230]
Abstract
Biomedical image registration refers to aligning corresponding anatomical structures across different images, which is critical to many tasks, such as brain atlas building, tumor growth monitoring, and image fusion-based medical diagnosis. However, high-throughput biomedical image registration remains challenging due to inherent variations in intensity, texture, and anatomy resulting from different imaging modalities, different sample preparation methods, or different developmental stages of the imaged subject. Recently, Generative Adversarial Networks (GANs) have attracted increasing interest in both mono- and cross-modal biomedical image registration owing to their special ability to eliminate modal variance and their adversarial training strategy. This paper provides a comprehensive survey of GAN-based mono- and cross-modal biomedical image registration methods. According to their implementation strategies, we organize these methods into four categories: modality translation, symmetric learning, adversarial strategies, and joint training. The key concepts, main contributions, and advantages and disadvantages of the different strategies are summarized and discussed. Finally, we analyze the statistics of all the cited works from different points of view and reveal future trends for GAN-based biomedical image registration studies.
Affiliation(s)
- Tingting Han: Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Jun Wu: Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Wenting Luo: Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Huiming Wang: Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Zhe Jin: School of Artificial Intelligence, Anhui University, Hefei, China
- Lei Qu: Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China; SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
8. Zhu X, Liu X, Liu S, Shen Y, You L, Wang Y. Robust quasi-uniform surface meshing of neuronal morphology using line skeleton-based progressive convolution approximation. Front Neuroinform 2022; 16:953930. [DOI: 10.3389/fninf.2022.953930]
Abstract
Creating high-quality polygonal meshes that represent the membrane surface of neurons, for both visualization and numerical simulation, is an important yet nontrivial task owing to their irregular and complicated structures. In this paper, we develop a novel approach for constructing a watertight 3D mesh from the abstract point-and-diameter representation of a given neuronal morphology. The membrane shape of the neuron is reconstructed by progressively deforming an initial sphere under the guidance of the neuronal skeleton, which can be regarded as a digital sculpting process. To deform the surface efficiently, a local mapping is adopted to simulate animation skinning, so that only the vertices within the region of influence (ROI) of the current skeletal position need to be updated. The ROI is determined by a finite-support convolution kernel, which is convolved along the line skeleton of the neuron to generate a potential field that further smooths the overall surface at both unidirectional and bifurcating regions. Meanwhile, mesh quality during the entire evolution is guaranteed by a set of quasi-uniform rules, which split excessively long edges, collapse undersized ones, and adjust vertices within the tangent plane to produce regular triangles. Additionally, the local vertex density of the resulting mesh is determined by the radius and curvature of the neurites to achieve adaptiveness.
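The split/collapse half of the quasi-uniform rules can be illustrated on a bare list of edge lengths. The thresholds below are conventional remeshing choices assumed for illustration (not the paper's values), and the tangent-plane vertex adjustment that produces regular triangles is omitted, since it needs full mesh connectivity.

```python
# Toy version of two quasi-uniform remeshing rules on edge lengths only:
# split edges much longer than the target, collapse much shorter ones.
# The 1.5x / 0.5x factors are illustrative assumptions.

def enforce_quasi_uniform(edge_lengths, target=1.0,
                          split_factor=1.5, collapse_factor=0.5):
    """Return a new list of edge lengths after one pass of the rules."""
    result = []
    for length in edge_lengths:
        if length > split_factor * target:
            result.extend([length / 2, length / 2])  # split long edge
        elif length < collapse_factor * target:
            continue                                 # collapse short edge
        else:
            result.append(length)                    # keep as-is
    return result
```

Repeated passes drive all edge lengths into a band around the target, which is what keeps the triangle quality bounded as the sphere deforms along the skeleton.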