1
Zhang L, Huang L, Yuan Z, Hang Y, Zeng Y, Li K, Wang L, Zeng H, Chen X, Zhang H, Xi J, Chen D, Gao Z, Le L, Chen J, Ye W, Liu L, Wang Y, Peng H. Collaborative augmented reconstruction of 3D neuron morphology in mouse and human brains. Nat Methods 2024; 21:1936-1946. [PMID: 39232199; PMCID: PMC11468770; DOI: 10.1038/s41592-024-02401-8]
Abstract
Digital reconstruction of the intricate 3D morphology of individual neurons from microscopic images is a crucial challenge for both individual laboratories and large-scale projects focusing on cell types and brain anatomy. Both conventional manual reconstruction and state-of-the-art artificial intelligence (AI)-based automatic reconstruction algorithms often fail at this task. It is also challenging to organize multiple neuroanatomists to generate and cross-validate biologically relevant, mutually agreed-upon reconstructions in large-scale data production. Based on collaborative group intelligence augmented by AI, we developed a collaborative augmented reconstruction (CAR) platform for neuron reconstruction at scale. The platform allows immersive interaction and efficient collaborative editing of neuron anatomy on a variety of devices, such as desktop workstations, virtual reality headsets and mobile phones, enabling users to contribute anytime and anywhere and to take advantage of several AI-based automation tools. We tested CAR's applicability to challenging mouse and human neurons toward scaled and faithful data production.
Affiliation(s)
- Lingli Zhang
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lei Huang
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Zexin Yuan
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
  - School of Future Technology, Shanghai University, Shanghai, China
- Yuning Hang
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Ying Zeng
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Kaixiang Li
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lijun Wang
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Haoyu Zeng
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Xin Chen
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Hairuo Zhang
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Jiaqi Xi
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- Danni Chen
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- Ziqin Gao
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Longxin Le
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
  - School of Future Technology, Shanghai University, Shanghai, China
- Jie Chen
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Wen Ye
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lijuan Liu
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Yimin Wang
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- Hanchuan Peng
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
2
Dong X, Dong W, Guo X. Diagnosis of acute hyperglycemia based on data-driven prediction models. SLAS Technol 2024; 29:100182. [PMID: 39209117; DOI: 10.1016/j.slast.2024.100182]
Abstract
Acute hyperglycemia is a common endocrine and metabolic disorder that seriously threatens patients' health and lives. Finding effective diagnostic methods and treatment strategies for acute hyperglycemia, to improve treatment quality and patient satisfaction, is currently a focus of medical research. This article introduces a method for diagnosing acute hyperglycemia based on data-driven prediction models. Clinical data from 1,000 patients with acute hyperglycemia were collected, and after data cleaning and feature engineering, 10 features related to acute hyperglycemia were selected, including body mass index (BMI), triacylglycerol (TG) and high-density lipoprotein cholesterol (HDL-C). A support vector machine (SVM) model was trained and tested. The experimental results showed that the SVM model can effectively predict the occurrence of acute hyperglycemia, with an average accuracy of 96%, a recall of 84% and an F1 score of 89%. This data-driven diagnostic method has practical reference value and can serve as a clinical auxiliary diagnostic tool to improve early diagnosis and treatment success rates for patients with acute hyperglycemia.
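The reported metrics are internally consistent: with recall fixed at 84%, an F1 score of 89% implies a precision of roughly 95%. A minimal sketch of that arithmetic (the implied precision is our inference, not a figure stated in the abstract):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def implied_precision(f1_score, recall):
    """Invert F1 = 2PR / (P + R) to solve for precision P."""
    return f1_score * recall / (2 * recall - f1_score)

p = implied_precision(0.89, 0.84)
print(round(p, 3))            # ~0.946: precision implied by the reported recall and F1
print(round(f1(p, 0.84), 2))  # 0.89: recovers the reported F1
```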
Affiliation(s)
- Xinxin Dong
  - Department of Geriatrics, General Hospital of Taiyuan Iron Steel (Group) Co., Ltd, Taiyuan 030003, Shanxi, China
- Wenping Dong
  - Department of Geriatrics, General Hospital of Taiyuan Iron Steel (Group) Co., Ltd, Taiyuan 030003, Shanxi, China
- Xueshan Guo
  - Department of Operations Management, General Hospital of Taiyuan Iron Steel (Group) Co., Ltd, Taiyuan 030003, Shanxi, China
3
Chen W, Liao M, Bao S, An S, Li W, Liu X, Huang G, Gong H, Luo Q, Xiao C, Li A. A hierarchically annotated dataset drives tangled filament recognition in digital neuron reconstruction. Patterns (N Y) 2024; 5:101007. [PMID: 39233689; PMCID: PMC11368685; DOI: 10.1016/j.patter.2024.101007]
Abstract
Reconstructing neuronal morphology is vital for classifying neurons and mapping brain connectivity. However, it remains a significant challenge because of the complex structure and dense distribution of neurites and the low contrast of the images. In particular, AI-assisted methods often yield numerous errors that require extensive manual intervention, so reconstructing even hundreds of neurons is a daunting task for general research projects. A key issue is the lack of specialized training for challenging regions, owing to inadequate data and training methods. This study extracted 2,800 challenging neuronal blocks and categorized them into multiple density levels. Furthermore, we enhanced the images using an axial continuity-based network that improved three-dimensional voxel resolution while reducing the difficulty of neuron recognition. Comparing pre- and post-enhancement results of automatic algorithms on fluorescence micro-optical sectioning tomography (fMOST) data, we observed a significant increase in recall. Our study not only enhances reconstruction throughput but also provides a fundamental dataset for tangled neuron reconstruction.
Affiliation(s)
- Wu Chen
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Mingwei Liao
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Shengda Bao
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Sile An
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Wenwei Li
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Xin Liu
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
- Ganghua Huang
  - Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570228, China
- Hui Gong
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
  - HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou 215123, China
- Qingming Luo
  - Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570228, China
- Chi Xiao
  - Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570228, China
- Anan Li
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan 430074, China
  - HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou 215123, China
  - Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570228, China
4
Liu M, Wu S, Chen R, Lin Z, Wang Y, Meijering E. Brain image segmentation for ultrascale neuron reconstruction via an adaptive dual-task learning network. IEEE Trans Med Imaging 2024; 43:2574-2586. [PMID: 38373129; DOI: 10.1109/tmi.2024.3367384]
Abstract
Accurate morphological reconstruction of neurons in whole-brain images is critical for brain science research. However, owing to the wide field of whole-brain imaging, uneven staining and optical system fluctuations, image properties differ markedly between regions of an ultrascale brain image, with dramatically varying voxel intensities and an inhomogeneous distribution of background noise, posing an enormous challenge to neuron reconstruction. In this paper, we propose an adaptive dual-task learning network (ADTL-Net) to quickly and accurately extract neuronal structures from ultrascale brain images. The framework includes an External Features Classifier (EFC) and a Parameter Adaptive Segmentation Decoder (PASD), which share the same Multi-Scale Feature Encoder (MSFE). The MSFE introduces an attention module, the Channel Space Fusion Module (CSFM), to extract structure and intensity-distribution features of neurons at different scales, addressing the problem of anisotropy in 3D space. The EFC classifies the resulting feature maps by external features, such as foreground intensity distribution and image smoothness, and selects class-specific PASD parameters to decode them into accurate segmentation results. The PASD contains multiple parameter sets, each trained on image blocks with a different representative complex signal-to-noise distribution, to handle diverse images more robustly. Experiments show that, compared with other advanced segmentation methods for neuron reconstruction, the proposed method achieves state-of-the-art results in neuron reconstruction from ultrascale brain images, with improvements of about 49% in speed and 12% in F1 score.
5
Caznok Silveira AC, Antunes ASLM, Athié MCP, da Silva BF, Ribeiro dos Santos JV, Canateli C, Fontoura MA, Pinto A, Pimentel-Silva LR, Avansini SH, de Carvalho M. Between neurons and networks: investigating mesoscale brain connectivity in neurological and psychiatric disorders. Front Neurosci 2024; 18:1340345. [PMID: 38445254; PMCID: PMC10912403; DOI: 10.3389/fnins.2024.1340345]
Abstract
The study of brain connectivity has been a cornerstone in understanding the complexities of neurological and psychiatric disorders. It has provided invaluable insights into the functional architecture of the brain and how it is perturbed in disorders. However, a persistent challenge has been achieving the proper spatial resolution, and developing computational algorithms to address biological questions at the multi-cellular level, a scale often referred to as the mesoscale. Historically, neuroimaging studies of brain connectivity have predominantly focused on the macroscale, providing insights into inter-regional brain connections but often falling short of resolving the intricacies of neural circuitry at the cellular or mesoscale level. This limitation has hindered our ability to fully comprehend the underlying mechanisms of neurological and psychiatric disorders and to develop targeted interventions. In light of this issue, our review manuscript seeks to bridge this critical gap by delving into the domain of mesoscale neuroimaging. We aim to provide a comprehensive overview of conditions affected by aberrant neural connections, image acquisition techniques, feature extraction, and data analysis methods that are specifically tailored to the mesoscale. We further delineate the potential of brain connectivity research to elucidate complex biological questions, with a particular focus on schizophrenia and epilepsy. This review encompasses topics such as dendritic spine quantification, single neuron morphology, and brain region connectivity. We aim to showcase the applicability and significance of mesoscale neuroimaging techniques in the field of neuroscience, highlighting their potential for gaining insights into the complexities of neurological and psychiatric disorders.
Affiliation(s)
- Ana Clara Caznok Silveira
  - National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
  - School of Electrical and Computer Engineering, University of Campinas, Campinas, Brazil
- Maria Carolina Pedro Athié
  - National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- Bárbara Filomena da Silva
  - National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- Camila Canateli
  - National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- Marina Alves Fontoura
  - National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- Allan Pinto
  - Brazilian Synchrotron Light Laboratory, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- Simoni Helena Avansini
  - National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
- Murilo de Carvalho
  - National Laboratory of Biosciences, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
  - Brazilian Synchrotron Light Laboratory, Brazilian Center for Research in Energy and Materials, Campinas, Brazil
6
Wu YC, Chang CY, Huang YT, Chen SY, Chen CH, Kao HK. Artificial intelligence image recognition system for preventing wrong-site upper limb surgery. Diagnostics (Basel) 2023; 13:3667. [PMID: 38132251; PMCID: PMC10743305; DOI: 10.3390/diagnostics13243667]
Abstract
Our image recognition system employs a deep learning model to differentiate between the left and right upper limbs in images, allowing doctors to confirm the correct surgical site. In our experiments, the precision and recall of the proposed intelligent image recognition system for preventing wrong-site upper limb surgery reached 98% and 93%, respectively. These results demonstrate that our Artificial Intelligence Image Recognition System (AIIRS) can assist orthopedic surgeons in preventing wrong-site surgery on the left and right upper limbs. Based on these prototype results, we will apply for institutional review board (IRB) approval and conduct a second phase of human trials. The findings are of practical benefit and research value for upper limb orthopedic surgery.
Affiliation(s)
- Yi-Chao Wu
  - Department of Electronic Engineering, National Yunlin University of Science and Technology, Yunlin 950359, Taiwan
- Chao-Yun Chang
  - Interdisciplinary Program of Green and Information Technology, National Taitung University, Taitung 950359, Taiwan
- Yu-Tse Huang
  - Interdisciplinary Program of Green and Information Technology, National Taitung University, Taitung 950359, Taiwan
- Sung-Yuan Chen
  - Interdisciplinary Program of Green and Information Technology, National Taitung University, Taitung 950359, Taiwan
- Cheng-Hsuan Chen
  - Department of Electrical Engineering, National Central University, Taoyuan 320317, Taiwan
  - Department of Electrical Engineering, Fu Jen Catholic University, New Taipei City 242062, Taiwan
- Hsuan-Kai Kao
  - Department of Orthopedic Surgery, Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
  - Bone and Joint Research Center, Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
  - College of Medicine, Chang Gung University, Taoyuan 333423, Taiwan
7
Tang Y, Hong W, Xu X, Li M, Jin L. Traumatic rib fracture patterns associated with bone mineral density statuses derived from CT images. Front Endocrinol (Lausanne) 2023; 14:1304219. [PMID: 38155951; PMCID: PMC10754511; DOI: 10.3389/fendo.2023.1304219]
Abstract
Background
The impact of decreased bone mineral density (BMD) on traumatic rib fractures remains unknown. We combined computed tomography (CT) and artificial intelligence (AI) to measure BMD and explore its impact on traumatic rib fractures and their patterns.

Methods
The retrospective cohort comprised patients who visited our hospital from 2017 to 2018; the prospective cohort (control group) was consecutively recruited from the same hospital from February to June 2023. All patients had blunt chest trauma and underwent CT. Volumetric BMD of the L1 vertebra was measured using AI software. Analyses used BMD categorized as osteoporosis (<80 mg/cm3), osteopenia (80-120 mg/cm3) or normal (>120 mg/cm3). Pearson's χ2, Fisher's exact or Kruskal-Wallis tests with Bonferroni correction were used for comparisons. Negative binomial and logistic regression analyses were used to assess the associations and impacts of BMD status. Sensitivity analyses were also performed.

Findings
The retrospective cohort included 2,076 eligible patients, of whom 954 (46%) had normal BMD, 806 (38.8%) had osteopenia and 316 (15.2%) had osteoporosis. After sex and age adjustment, osteoporosis was significantly associated with higher rib fracture rates and a higher likelihood of fractures in ribs 4-7. Furthermore, both the osteopenia and osteoporosis groups demonstrated a significantly higher number of fractured ribs and fracture sites on ribs, with a higher likelihood of fractures in ribs 1-3, as well as flail chest. The prospective cohort included 205 eligible patients, of whom 92 (44.9%) had normal BMD, 74 (36.1%) had osteopenia and 39 (19.0%) had osteoporosis. The findings in this cohort concurred with those in the retrospective cohort.

Interpretation
Traumatic rib fractures are associated with decreased BMD. CT-AI can help to identify individuals who have decreased BMD and a greater rib fracture rate, along with their fracture patterns.
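The study's BMD cut-offs translate directly into a categorization rule. A minimal sketch (the function name and the boundary handling at exactly 80 and 120 mg/cm3 are our assumptions, since the abstract gives only the open ranges):

```python
def bmd_status(bmd_mg_cm3):
    """Categorize L1 volumetric BMD per the study's thresholds:
    osteoporosis < 80, osteopenia 80-120, normal > 120 (mg/cm3)."""
    if bmd_mg_cm3 < 80:
        return "osteoporosis"
    elif bmd_mg_cm3 <= 120:
        return "osteopenia"
    return "normal"

for v in (65, 95, 150):
    print(v, bmd_status(v))  # 65 osteoporosis, 95 osteopenia, 150 normal
```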
Affiliation(s)
- Yilin Tang
  - Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai, China
- Wei Hong
  - Department of Geriatrics and Gerontology, Huadong Hospital, Affiliated with Fudan University, Shanghai, China
- Xinxin Xu
  - Clinical Research Center for Geriatric Medicine, Huadong Hospital, Affiliated with Fudan University, Shanghai, China
- Ming Li
  - Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai, China
  - Diagnosis and Treatment Center of Small Lung Nodules, Huadong Hospital, Affiliated with Fudan University, Shanghai, China
- Liang Jin
  - Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai, China
  - Diagnosis and Treatment Center of Small Lung Nodules, Huadong Hospital, Affiliated with Fudan University, Shanghai, China
  - Radiology Department, Huashan Hospital Affiliated with Fudan University, Shanghai, China
8
Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: a review. Comput Biol Med 2023; 167:107617. [PMID: 37918261; DOI: 10.1016/j.compbiomed.2023.107617]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information which can help us understand the working mechanisms of the brain. However, it is a challenging task to process and analyze these data because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in the cells and tissues, and imaging artifacts. Due to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges and they perform superiorly in a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen
  - College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu
  - College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
  - Research Institute of Hunan University in Chongqing, Chongqing, 401135, China
- Weixun Chen
  - College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang
  - College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering
  - School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
9
Zou Y, Wang Y, Kong X, Chen T, Chen J, Li Y. Deep learner system based on focal color retinal fundus images to assist in diagnosis. Diagnostics (Basel) 2023; 13:2985. [PMID: 37761352; PMCID: PMC10529281; DOI: 10.3390/diagnostics13182985]
Abstract
Retinal diseases are serious and widespread ophthalmic conditions that severely affect patients' vision and quality of life. With population aging and lifestyle changes, the incidence of retinal diseases has increased year by year. However, traditional diagnosis often requires experienced doctors to analyze and judge fundus images, which carries the risk of subjectivity and misdiagnosis. This paper analyzes an intelligent medical system for aided diagnosis based on focal retinal images and uses a convolutional neural network (CNN) to recognize, classify and detect hard exudates (HEs) in fundus images (FIs). The results indicate that, under otherwise identical conditions, the accuracy, recall and precision of the system in diagnosing five types of pathological changes in color retinal FIs range from 86.4% to 98.6%; in conventional retinopathy FIs, they range from 70.1% to 85%. These results show that focal color retinal FIs give the intelligent medical system high accuracy and reliability for early detection and diagnosis of diabetic retinopathy, with important clinical applications.
Affiliation(s)
- Yanli Zou
  - School of Basic Medical Sciences, Southern Medical University, Guangzhou 510000, China
  - Department of Ophthalmology, Foshan Hospital Affiliated to Southern Medical University, Foshan 528000, China
- Yujuan Wang
  - Internal Medicine, Brookdale University Hospital Medical Center, New York, NY 11212, USA
- Xiangbin Kong
  - Department of Ophthalmology, Foshan Hospital Affiliated to Southern Medical University, Foshan 528000, China
- Tingting Chen
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510000, China
- Jiangna Chen
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510000, China
- Yiqun Li
  - Department of Orthopedics, Foshan Hospital Affiliated to Southern Medical University, Foshan 528000, China
10
Wei X, Liu Q, Liu M, Wang Y, Meijering E. 3D soma detection in large-scale whole brain images via a two-stage neural network. IEEE Trans Med Imaging 2023; 42:148-157. [PMID: 36103445; DOI: 10.1109/tmi.2022.3206605]
Abstract
3D soma detection in whole-brain images is a critical step in neuron reconstruction. However, existing soma detection methods are not suited to whole mouse brain images, with their large data volumes and complex structure. In this paper, we propose a two-stage deep neural network for fast and accurate soma detection in large-scale, high-resolution whole mouse brain images (more than 1 TB). In the first stage, a lightweight Multi-level Cross Classification Network (MCC-Net) filters out images without somas and generates coarse candidate images by combining the feature extraction abilities of multiple convolutional layers, speeding up detection and reducing computational complexity. In the second stage, to localize somas accurately, a Scale Fusion Segmentation Network (SFS-Net) segments soma regions from the candidate images. The SFS-Net captures multi-scale context information and establishes a complementary relationship between encoder and decoder by combining an encoder-decoder structure with a 3D Scale-Aware Pyramid Fusion (SAPF) module for better segmentation performance. Experiments on three whole mouse brain images verify that the proposed method achieves excellent performance and provides information beneficial to neuron reconstruction. Additionally, we have established a public dataset named WBMSD, comprising 798 high-resolution, representative images (256 × 256 × 256 voxels) from three whole mouse brain images, dedicated to soma detection research and released along with this paper.
11
Liu Y, Zhong Y, Zhao X, Liu L, Ding L, Peng H. Tracing weak neuron fibers. Bioinformatics 2022; 39:6960919. [PMID: 36571479; PMCID: PMC9848051; DOI: 10.1093/bioinformatics/btac816]
Abstract
MOTIVATION
Precise reconstruction of neuronal arbors is important for circuitry mapping. Many auto-tracing algorithms have been developed toward full reconstruction. However, it is still challenging to trace the weak signals of neurite fibers that often correspond to axons.

RESULTS
We propose NeuMiner, a method for tracing weak fibers that combines two strategies: online sample mining and a modified gamma transformation. NeuMiner improved the recall of weak signals (voxel values <20) by a large margin, from 5.1% to 27.8%. The gain was most prominent for axons, which improved 6.4-fold, compared to 2.0-fold for dendrites. Both strategies were beneficial for weak fiber recognition, reducing the average axonal spatial distances to gold standards by 46% and 13%, respectively. The improvement was observed with two prevalent automatic tracing algorithms and can be applied to any other tracers and image types.

AVAILABILITY AND IMPLEMENTATION
Source code for NeuMiner is freely available on GitHub (https://github.com/crazylyf/neuronet/tree/semantic_fnm). Image visualization, preprocessing and tracing are conducted on the Vaa3D platform, which is accessible at the Vaa3D GitHub repository (https://github.com/Vaa3D). All training and testing images are cropped from high-resolution fMOST mouse brains downloaded from the Brain Image Library (https://www.brainimagelibrary.org/), and the corresponding gold standards are available at https://doi.brainimagelibrary.org/doi/10.35077/g.25.

SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
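The abstract does not specify how NeuMiner modifies the gamma transformation, but the standard form already illustrates why it helps: with an exponent below 1, dim voxels (values <20 in an 8-bit volume) are boosted disproportionately. A sketch of the plain transform, for illustration only (the `gamma` and `vmax` parameters are assumptions):

```python
def gamma_transform(voxels, gamma=0.5, vmax=255.0):
    """Standard gamma transform on 8-bit-style intensities.
    gamma < 1 expands the dark end of the range, lifting weak
    neurite signals toward the detectable range."""
    return [((min(max(v, 0.0), vmax) / vmax) ** gamma) * vmax for v in voxels]

weak = [5, 10, 19]  # weak axon-like voxel values
print([round(v, 1) for v in gamma_transform(weak)])  # → [35.7, 50.5, 69.6]
```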
Affiliation(s)
- Yufeng Liu, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Ye Zhong, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Xuan Zhao, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Lijuan Liu, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Liya Ding, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
12
Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022; 38:5329-5339. [PMID: 36303315 PMCID: PMC9750132 DOI: 10.1093/bioinformatics/btac712] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 10/19/2022] [Accepted: 10/26/2022] [Indexed: 12/24/2022] Open
Abstract
MOTIVATION Large-scale neuronal morphologies are essential to neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology. Although several survey papers on neuron tracing from light microscopy data have appeared in the last decade, the rapid development of the field calls for an updated review focusing on new methods and remarkable applications. RESULTS This review outlines neuron tracing in various scenarios with the goal of helping the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances in the increasingly popular deep-learning-enhanced methods. We highlight the semi-automatic methods for single neuron tracing of mammalian whole brains as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we exemplify the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli, Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou, Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lijuan Liu, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
13
Chen W, Liu M, Du H, Radojevic M, Wang Y, Meijering E. Deep-Learning-Based Automated Neuron Reconstruction From 3D Microscopy Images Using Synthetic Training Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1031-1042. [PMID: 34847022 DOI: 10.1109/tmi.2021.3130934] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Digital reconstruction of neuronal structures from 3D microscopy images is critical for the quantitative investigation of brain circuits and functions. It is a challenging task that would greatly benefit from automatic neuron reconstruction methods. In this paper, we propose a novel method called SPE-DNR that combines spherical-patches extraction (SPE) and deep learning for neuron reconstruction (DNR). Based on 2D Convolutional Neural Networks (CNNs) and the intensity distribution features extracted by SPE, it determines the tracing directions and classifies voxels as foreground or background. This way, starting from a set of seed points, it automatically traces the neurite centerlines and determines when to stop tracing. To avoid errors caused by imperfect manual reconstructions, we develop an image synthesizing scheme to generate synthetic training images with exact reconstructions. This scheme simulates 3D microscopy imaging conditions as well as structural defects, such as gaps and abrupt radius changes, to improve the visual realism of the synthetic images. To demonstrate the applicability and generalizability of SPE-DNR, we test it on 67 real 3D neuron microscopy images from three datasets. The experimental results show that the proposed SPE-DNR method is robust and competitive compared with other state-of-the-art neuron reconstruction methods.
14
Wang X, Liu M, Wang Y, Fan J, Meijering E. A 3D Tubular Flux Model for Centerline Extraction in Neuron Volumetric Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1069-1079. [PMID: 34826295 DOI: 10.1109/tmi.2021.3130987] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Digital morphology reconstruction from neuron volumetric images is essential for computational neuroscience. The centerline of the axonal and dendritic tree provides an effective shape representation and serves as a basis for further neuron reconstruction. However, it is still a challenge to directly extract an accurate centerline from a complex neuron structure with poor image quality. In this paper, we propose a neuron centerline extraction method based on a 3D tubular flux model via a two-stage CNN framework. In the first stage, a 3D CNN is used to learn the latent neuron structure features, namely flux features, from neuron images. In the second stage, a lightweight U-Net takes the learned flux features as input to extract the centerline with a spatially weighted average strategy to constrain the multi-voxel-width response. Specifically, the labels of flux features in the first stage are generated by the 3D tubular model, which calculates the geometric representations of the flux between each voxel in the tubular region and the nearest point on the centerline ground truth. Compared with features self-learned by networks, flux features, as a kind of prior knowledge, explicitly take advantage of the contextual distance and direction distribution information around the centerline, which is beneficial for precise centerline extraction. Experiments on two challenging datasets demonstrate that the proposed method outperforms other state-of-the-art methods by up to 18% in F1-measure and up to 35.1% in average distance score, and that the extracted centerline helps improve neuron reconstruction performance.
15
Yang B, Liu M, Wang Y, Zhang K, Meijering E. Structure-Guided Segmentation for 3D Neuron Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:903-914. [PMID: 34748483 DOI: 10.1109/tmi.2021.3125777] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Digital reconstruction of neuronal morphologies in 3D microscopy images is critical in the field of neuroscience. However, most existing automatic tracing algorithms cannot obtain accurate neuron reconstructions when processing 3D neuron images contaminated by strong background noise or containing weak filament signals. In this paper, we present a 3D neuron segmentation network named Structure-Guided Segmentation Network (SGSNet) to enhance weak neuronal structures and remove background noise. The network contains a shared encoding path but utilizes two decoding paths, called the Main Segmentation Branch (MSB) and the Structure-Detection Branch (SDB), respectively. MSB is trained on binary labels to acquire the 3D neuron image segmentation maps. However, the segmentation results in challenging datasets often contain structural errors, such as discontinued segments of weak-signal neuronal structures and missing filaments due to low signal-to-noise ratio (SNR). Therefore, SDB is presented to detect the neuronal structures by regressing neuron distance transform maps. Furthermore, a Structure Attention Module (SAM) is designed to integrate the multi-scale feature maps of the two decoding paths and provide contextual guidance of structural features from SDB to MSB to improve the final segmentation performance. In the experiments, we evaluate our model on two challenging 3D neuron image datasets, the BigNeuron dataset and the Extended Whole Mouse Brain Sub-image (EWMBS) dataset. When tracing methods are run on images segmented by our method rather than by other state-of-the-art segmentation methods, the distance scores improve by 42.48% and 35.83% on the BigNeuron dataset and by 37.75% and 23.13% on the EWMBS dataset.
16
Su YT, Lu Y, Chen M, Liu AA. Deep Reinforcement Learning-Based Progressive Sequence Saliency Discovery Network for Mitosis Detection In Time-Lapse Phase-Contrast Microscopy Images. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2022; 19:854-865. [PMID: 32841120 DOI: 10.1109/tcbb.2020.3019042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Mitosis detection plays an important role in the analysis of cell status and behavior and is therefore widely utilized in many biological research and medical applications. In this article, we propose a deep reinforcement learning-based progressive sequence saliency discovery network (PSSD) for mitosis detection in time-lapse phase-contrast microscopy images. By discovering the salient frames where the cell state changes in the sequence, PSSD can more effectively model the mitosis process for mitosis detection. We formulate the discovery of salient frames as a Markov Decision Process (MDP) that progressively adjusts the selection positions of salient frames in the sequence, and further leverage deep reinforcement learning to learn the policy in the salient frame discovery process. The proposed method consists of two parts: 1) the saliency discovery module, which selects the salient frames from the input cell image sequence by progressively adjusting the selection positions of salient frames; 2) the mitosis identification module, which takes a sequence of salient frames and performs temporal information fusion for mitotic sequence classification. Since the policy network of the saliency discovery module is trained under the guidance of the mitosis identification module, PSSD can comprehensively explore the salient frames that are beneficial for mitosis detection. To our knowledge, this is the first work to apply deep reinforcement learning to the mitosis detection problem. In the experiment, we evaluate the proposed method on the largest mitosis detection dataset, C2C12-16. Experimental results show that, compared with the state of the art, the proposed method achieves significant improvement in both mitosis identification and temporal localization on C2C12-16.
17
Li Y, Ren T, Li J, Wang H, Li X, Li A. VBNet: An end-to-end 3D neural network for vessel bifurcation point detection in mesoscopic brain images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 214:106567. [PMID: 34906786 DOI: 10.1016/j.cmpb.2021.106567] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/20/2021] [Revised: 11/29/2021] [Accepted: 11/29/2021] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Accurate detection of vessel bifurcation points from mesoscopic whole-brain images plays an important role in reconstructing cerebrovascular networks and understanding the pathogenesis of brain diseases. Existing detection methods are either less accurate or inefficient. In this paper, we propose VBNet, an end-to-end, one-stage neural network to detect vessel bifurcation points in 3D images. METHODS First, we designed a 3D convolutional neural network (CNN) that takes a 3D image as input and outputs the coordinates of bifurcation points in that image. The network contains a two-scale architecture to detect large and small bifurcation points, respectively, balancing detection accuracy and efficiency. Then, to address the low accuracy caused by the imbalance between the numbers of large and small bifurcations, we designed a weighted loss function based on the radius distribution of blood vessels. Finally, we extended the method to detect bifurcation points in large-scale volumes. RESULTS The proposed method was tested on two mouse cerebral vascular datasets and a synthetic dataset. On the synthetic dataset, the F1-score of the proposed method reached 96.37%. On the two real datasets, the F1-scores were 92.35% and 86.18%, respectively. The detection performance of the proposed method reaches the state-of-the-art level. CONCLUSIONS We proposed a novel method for detecting vessel bifurcation points in 3D images. It can be used to precisely locate vessel bifurcations in various cerebrovascular images. This method can be further used to reconstruct and analyze vascular networks, and also by researchers designing detection methods for other targets in 3D biomedical images.
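The radius-based weighted loss described above can be illustrated with a simple inverse-radius weighting; the paper derives its actual weights from the measured radius distribution, so the form, function name and parameters below are assumptions for illustration only.

```python
import numpy as np

def radius_weighted_loss(errors, radii, alpha=1.0):
    """Weight per-point detection errors inversely by vessel radius.

    Illustrative stand-in for VBNet's weighted loss: small bifurcations
    (small radii) receive larger weights so they are not dominated by the
    more numerous large bifurcations. The inverse-radius form and the
    smoothing term `alpha` are guesses, not the paper's exact formula.
    """
    radii = np.asarray(radii, dtype=np.float64)
    errors = np.asarray(errors, dtype=np.float64)
    weights = 1.0 / (radii + alpha)   # smaller radius -> larger weight
    weights /= weights.sum()          # normalize to a convex combination
    return float(np.sum(weights * errors))
```

With uniform errors the weighted loss equals the common error value, while an error concentrated on a small-radius point costs more than the same error on a large-radius point.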
Affiliation(s)
- Yuxin Li, Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China
- Tong Ren, Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China
- Junhuai Li, Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China
- Huaijun Wang, Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China
- Xiangning Li, Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan 430074, China; HUST-Suzhou Institute for Brainsmatics, Suzhou 215123, China
- Anan Li, Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan 430074, China; HUST-Suzhou Institute for Brainsmatics, Suzhou 215123, China
18
Zhang Y, Liu M, Yu F, Zeng T, Wang Y. An O-shape Neural Network With Attention Modules to Detect Junctions in Biomedical Images Without Segmentation. IEEE J Biomed Health Inform 2021; 26:774-785. [PMID: 34197332 DOI: 10.1109/jbhi.2021.3094187] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Junctions play an important role in biomedical research such as retinal biometric identification, retinal image registration, eye-related disease diagnosis and neuron reconstruction. However, junction detection in original biomedical images is extremely challenging. For example, retinal images contain many tiny blood vessels with complicated structures and low contrast, which makes junction detection difficult. In this paper, we propose an O-shape Network architecture with Attention modules (Attention O-Net), which includes a Junction Detection Branch (JDB) and a Local Enhancement Branch (LEB) to detect junctions in biomedical images without segmentation. In JDB, a heatmap indicating the probabilities of junctions is estimated, and the positions with the locally highest values are chosen as junctions; however, junctions are hard to detect when the images contain weak filament signals. Therefore, LEB is constructed to enhance the thin-branch foreground and make the network pay more attention to regions with low contrast, which helps alleviate the foreground imbalance between thin and thick branches and detect the junctions of thin branches. Furthermore, attention modules are utilized to introduce the feature maps from LEB to JDB, establishing a complementary relationship and further integrating local features and contextual information between the two branches. The proposed method achieves the highest average F1-scores of 0.82, 0.73 and 0.94 in two retinal datasets and one neuron dataset, respectively. The experimental results confirm that Attention O-Net outperforms other state-of-the-art detection methods and is helpful for retinal biometric identification.
19
Shen L, Liu M, Wang C, Guo C, Meijering E, Wang Y. Efficient 3D Junction Detection in Biomedical Images Based on a Circular Sampling Model and Reverse Mapping. IEEE J Biomed Health Inform 2021; 25:1612-1623. [PMID: 33166258 DOI: 10.1109/jbhi.2020.3036743] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Detection and localization of terminations and junctions are a key step in the morphological reconstruction of tree-like structures in images. Previously, a ray-shooting model was proposed to detect termination points automatically. In this paper, we propose an automatic method for 3D junction point detection in biomedical images, relying on a circular sampling model and a 2D-to-3D reverse mapping approach. First, the existing ray-shooting model is improved to a circular sampling model to extract the pixel intensity distribution feature across the potential branches around the point of interest. The computation cost can be reduced dramatically compared to the existing ray-shooting model. Then, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is employed to detect 2D junction points in maximum intensity projections (MIPs) of sub-volume images in a given 3D image, by determining the number of branches in the candidate junction region. Further, a 2D-to-3D reverse mapping approach is used to map these detected 2D junction points in MIPs to the 3D junction points in the original 3D images. The proposed 3D junction point detection method is implemented as a built-in tool in the Vaa3D platform. Experiments on multiple 2D images and 3D images show average precision and recall rates of 87.11% and 88.33%, respectively. In addition, the proposed algorithm is dozens of times faster than the existing deep-learning-based model. The proposed method has excellent performance in both detection precision and computation efficiency for junction detection, even in large-scale biomedical images.
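The MIP step and the 2D-to-3D reverse mapping described above can be sketched with numpy; lifting a 2D point back to 3D via the brightest voxel along the projection axis is one natural reading of the abstract, and the function name below is an assumption (the paper's pipeline also runs DBSCAN on the 2D candidates first, which is omitted here).

```python
import numpy as np

def mip_and_reverse_map(volume, points_2d):
    """Maximum intensity projection along z, plus a 2D-to-3D reverse map.

    `volume` is a (z, y, x) array; `points_2d` are (y, x) junction
    candidates detected in the MIP. Each candidate is lifted back into
    3D by taking the z index of the brightest voxel on its projection
    ray (illustrative sketch of the reverse-mapping idea).
    """
    mip = volume.max(axis=0)                 # (y, x) projection image
    points_3d = []
    for (y, x) in points_2d:
        z = int(np.argmax(volume[:, y, x]))  # brightest depth on the ray
        points_3d.append((z, y, x))
    return mip, points_3d
```

Because the MIP collapses depth, two branches crossing at different z can look like a junction in 2D; the reverse mapping is what lets the method reject such false candidates in the original volume.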
20
Yang B, Chen W, Luo H, Tan Y, Liu M, Wang Y. Neuron Image Segmentation via Learning Deep Features and Enhancing Weak Neuronal Structures. IEEE J Biomed Health Inform 2021; 25:1634-1645. [PMID: 32809948 DOI: 10.1109/jbhi.2020.3017540] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Neuron morphology reconstruction (tracing) in 3D volumetric images is critical for neuronal research. However, most existing neuron tracing methods are not applicable to challenging datasets where the neuron images are contaminated by noise or contain weak filament signals. In this paper, we present a two-stage 3D neuron segmentation approach that learns deep features and enhances weak neuronal structures, to reduce the impact of image noise in the data and enhance weak-signal neuronal structures. In the first stage, we train a voxel-wise multi-level fully convolutional network (FCN), which specializes in learning deep features, to obtain the 3D neuron image segmentation maps in an end-to-end manner. In the second stage, a ray-shooting model is employed to detect the discontinued segments in the first-stage segmentation results; the local neuron diameter at the break point is estimated and the direction of the filamentary fragment is detected by the rayburst sampling algorithm. Then, a Hessian-repair model is built to repair the broken structures by enhancing weak neuronal structures in a fibrous region determined by the estimated local neuron diameter and the filamentary fragment direction. Experimental results demonstrate that our proposed segmentation approach achieves better segmentation performance than other state-of-the-art methods for 3D neuron segmentation. Compared with the neuron reconstruction results on the segmented images produced by other segmentation methods, the proposed approach gains 47.83% and 34.83% improvement in the average distance scores. The average precision and recall rates of branch point detection with our proposed method are 38.74% and 22.53% higher than the detection results without segmentation.
21
Chen W, Liu M, Zhan Q, Tan Y, Meijering E, Radojevic M, Wang Y. Spherical-Patches Extraction for Deep-Learning-Based Critical Points Detection in 3D Neuron Microscopy Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:527-538. [PMID: 33055023 DOI: 10.1109/tmi.2020.3031289] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Digital reconstruction of neuronal structures is very important to neuroscience research. Many existing reconstruction algorithms require a set of good seed points. 3D neuron critical points, including terminations, branch points and cross-over points, are good candidates for such seed points. However, a method that can simultaneously detect all types of critical points has barely been explored. In this work, we present a method to simultaneously detect all three types of 3D critical points in neuron microscopy images, based on a spherical-patches extraction (SPE) method and a 2D multi-stream convolutional neural network (CNN). SPE uses a set of concentric spherical surfaces centered at a given critical point candidate to extract intensity distribution features around the point. Then, a group of 2D spherical patches is generated by projecting the surfaces into 2D rectangular image patches according to the orders of the azimuth and the polar angles. Finally, a 2D multi-stream CNN, in which each stream receives one spherical patch as input, is designed to learn the intensity distribution features from those spherical patches and classify the given critical point candidate into one of four classes: termination, branch point, cross-over point or non-critical point. Experimental results confirm that the proposed method outperforms other state-of-the-art critical point detection methods. The critical-point-based neuron reconstruction results demonstrate the potential of the detected neuron critical points to be good seed points for neuron reconstruction. Additionally, we have established a public dataset dedicated to neuron critical point detection, which has been released along with this article.
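The spherical-patch idea above, sampling one spherical surface around a candidate point and unrolling it into a 2D patch ordered by azimuth and polar angle, can be sketched as follows; the nearest-voxel sampling and the angular resolutions are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def spherical_patch(volume, center, radius, n_theta=16, n_phi=8):
    """Sample intensities on a spherical surface and unroll to a 2D patch.

    `volume` is a (z, y, x) array and `center` a (z, y, x) voxel. The
    returned patch has shape (n_phi, n_theta): rows index the polar
    angle, columns the azimuth, following the SPE projection order.
    Out-of-bounds samples are left at zero.
    """
    cz, cy, cx = center
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)  # azimuth
    phis = np.linspace(0.0, np.pi, n_phi)                            # polar
    patch = np.zeros((n_phi, n_theta))
    for i, phi in enumerate(phis):
        for j, th in enumerate(thetas):
            # Spherical-to-Cartesian conversion, nearest-voxel lookup.
            z = int(round(cz + radius * np.cos(phi)))
            y = int(round(cy + radius * np.sin(phi) * np.sin(th)))
            x = int(round(cx + radius * np.sin(phi) * np.cos(th)))
            if (0 <= z < volume.shape[0] and 0 <= y < volume.shape[1]
                    and 0 <= x < volume.shape[2]):
                patch[i, j] = volume[z, y, x]
    return patch
```

Stacking patches from several concentric radii gives the multi-surface input that the 2D multi-stream CNN classifies; a neurite crossing the sphere shows up as a bright blob in the patch, and the number of blobs hints at termination versus branch versus cross-over.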
22
Jiang Y, Chen W, Liu M, Wang Y, Meijering E. 3D Neuron Microscopy Image Segmentation via the Ray-Shooting Model and a DC-BLSTM Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:26-37. [PMID: 32881683 DOI: 10.1109/tmi.2020.3021493] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The morphology reconstruction (tracing) of neurons in 3D microscopy images is important to neuroscience research. However, this task remains very challenging because of the low signal-to-noise ratio (SNR) and the discontinued segments of neurite patterns in the images. In this paper, we present a neuronal structure segmentation method based on the ray-shooting model and a Long Short-Term Memory (LSTM)-based network to enhance the weak-signal neuronal structures and remove background noise in 3D neuron microscopy images. Specifically, the ray-shooting model is used to extract the intensity distribution features within a local region of the image. We design a neural network based on the dual-channel bidirectional LSTM (DC-BLSTM) to detect the foreground voxels according to the voxel-intensity features and boundary-response features extracted by multiple ray-shooting models generated over the whole image. This way, we transform the 3D image segmentation task into multiple 1D ray/sequence segmentation tasks, which makes it much easier to label the training samples than in many existing Convolutional Neural Network (CNN)-based 3D neuron image segmentation methods. In the experiments, we evaluate the performance of our method on the challenging 3D neuron images from two datasets, the BigNeuron dataset and the Whole Mouse Brain Sub-image (WMBS) dataset. Compared with the neuron tracing results on the segmented images produced by other state-of-the-art neuron segmentation methods, our method improves the distance scores by about 32% and 27% on the BigNeuron dataset, and by about 38% and 27% on the WMBS dataset.