1.
Liu M, Wu S, Chen R, Lin Z, Wang Y, Meijering E. Brain Image Segmentation for Ultrascale Neuron Reconstruction via an Adaptive Dual-Task Learning Network. IEEE Transactions on Medical Imaging 2024; 43:2574-2586. [PMID: 38373129] [DOI: 10.1109/tmi.2024.3367384]
Abstract
Accurate morphological reconstruction of neurons in whole brain images is critical for brain science research. However, due to the wide range of whole brain imaging, uneven staining, and optical system fluctuations, image properties differ significantly between regions of an ultrascale brain image, such as dramatically varying voxel intensities and an inhomogeneous distribution of background noise, posing an enormous challenge to neuron reconstruction from whole brain images. In this paper, we propose an adaptive dual-task learning network (ADTL-Net) to quickly and accurately extract neuronal structures from ultrascale brain images. Specifically, this framework includes an External Features Classifier (EFC) and a Parameter Adaptive Segmentation Decoder (PASD), which share the same Multi-Scale Feature Encoder (MSFE). MSFE introduces an attention module named the Channel Space Fusion Module (CSFM) to extract structure and intensity distribution features of neurons at different scales, addressing the problem of anisotropy in 3D space. EFC is designed to classify these feature maps based on external features, such as foreground intensity distributions and image smoothness, and to select class-specific PASD parameters to decode them into accurate segmentation results. PASD contains multiple sets of parameters, each trained on representative image blocks with different complex signal-to-noise distributions, to handle diverse images more robustly. Experimental results show that, compared with other advanced segmentation methods for neuron reconstruction, the proposed method achieves state-of-the-art results in neuron reconstruction from ultrascale brain images, with an improvement of about 49% in speed and 12% in F1 score.
2.
Hoffmann C, Cho E, Zalesky A, Di Biase MA. From pixels to connections: exploring in vitro neuron reconstruction software for network graph generation. Commun Biol 2024; 7:571. [PMID: 38750282] [PMCID: PMC11096190] [DOI: 10.1038/s42003-024-06264-9]
Abstract
Digital reconstruction has been instrumental in deciphering how in vitro neuron architecture shapes information flow. Emerging approaches reconstruct neural systems as networks with the aim of understanding their organization through graph theory. Computational tools dedicated to this objective build models of nodes and edges based on key cellular features such as somata, axons, and dendrites. Fully automatic implementations of these tools are readily available, but they may also be purpose-built from specialized algorithms in the form of multi-step pipelines. Here we review software tools informing the construction of network models, spanning from noise reduction and segmentation to full network reconstruction. The scope and core specifications of each tool are explicitly defined to assist bench scientists in selecting the most suitable option for their microscopy dataset. Existing tools provide a foundation for complete network reconstruction; however, more progress is needed in establishing morphological bases for directed/weighted connectivity and in software validation.
Affiliation(s)
- Cassandra Hoffmann
- Systems Neuroscience Lab, Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne, Parkville, Australia
- Ellie Cho
- Biological Optical Microscopy Platform, University of Melbourne, Parkville, Australia
- Andrew Zalesky
- Systems Neuroscience Lab, Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne, Parkville, Australia
- Department of Biomedical Engineering, The University of Melbourne, Parkville, Australia
- Maria A Di Biase
- Systems Neuroscience Lab, Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne, Parkville, Australia
- Stem Cell Disease Modelling Lab, Department of Anatomy and Physiology, The University of Melbourne, Parkville, Australia
- Psychiatry Neuroimaging Laboratory, Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
3.
Chen Q, Peng J, Zhao S, Liu W. Automatic artery/vein classification methods for retinal blood vessel: A review. Comput Med Imaging Graph 2024; 113:102355. [PMID: 38377630] [DOI: 10.1016/j.compmedimag.2024.102355]
Abstract
Automatic retinal arteriovenous classification can assist ophthalmologists in early disease diagnosis. Deep learning-based methods and topological graph-based methods have become the main solutions for retinal arteriovenous classification in recent years. This paper reviews automatic retinal arteriovenous classification methods from 2003 to 2022. Firstly, we compare the different methods and provide summary comparison tables. Secondly, we categorize the public arteriovenous classification datasets and provide annotation development tables for the different datasets. Finally, we sort out the challenges of evaluation methods and provide a comprehensive evaluation system. Quantitative and qualitative analyses reveal the evolution of research hotspots over time, highlighting the significance of exploring the integration of deep learning with topological information in future research.
Affiliation(s)
- Qihan Chen
- School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Jianqing Peng
- School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China; Guangdong Provincial Key Laboratory of Fire Science and Technology, Guangzhou 510006, China
- Shen Zhao
- School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Wanquan Liu
- School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
4.
Gratacos G, Chakrabarti A, Ju T. Tree Recovery by Dynamic Programming. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023; 45:15870-15882. [PMID: 37505999] [DOI: 10.1109/tpami.2023.3299868]
Abstract
Tree-like structures are common, naturally occurring objects that are of interest to many fields of study, such as plant science and biomedicine. Analysis of these structures is typically based on skeletons extracted from captured data, which often contain spurious cycles that need to be removed. We propose a dynamic programming algorithm for solving the NP-hard tree recovery problem formulated by (Estrada et al. 2015), which seeks a least-cost partitioning of the graph nodes that yields a directed tree. Our algorithm finds the optimal solution by iteratively contracting the graph via node-merging until the problem can be trivially solved. By carefully designing the merging sequence, our algorithm can efficiently recover optimal trees for many real-world data where (Estrada et al. 2015) only produces sub-optimal solutions. We also propose an approximate variant of dynamic programming using beam search, which can process graphs containing thousands of cycles with significantly improved optimality and efficiency compared with (Estrada et al. 2015).
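The graph-contraction primitive that such a dynamic program iterates, merging two nodes and rewiring their edges, can be illustrated on a plain adjacency-set graph. This is a toy sketch under assumed undirected adjacency sets, not the paper's algorithm.

```python
def merge_nodes(adj, u, v):
    """Contract edge (u, v): redirect v's edges to u and delete v.
    `adj` maps node -> set of neighbours (undirected); self-loops are dropped.
    Returns a new adjacency dict, leaving the input untouched."""
    merged = {n: set(nbrs) for n, nbrs in adj.items()}
    for n in merged.pop(v):
        merged[n].discard(v)       # forget the removed node
        if n != u:
            merged[n].add(u)       # rewire v's neighbour to u
            merged[u].add(n)
    merged[u].discard(u)           # drop any self-loop created by the merge
    return merged
```

Contracting one edge of a triangle leaves a single edge; repeating such merges is what shrinks the graph until the recovery problem becomes trivially solvable.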
5.
Wang G, Huang Y, Ma K, Duan Z, Luo Z, Xiao P, Yuan J. Automatic vessel crossing and bifurcation detection based on multi-attention network vessel segmentation and directed graph search. Comput Biol Med 2023; 155:106647. [PMID: 36848799] [DOI: 10.1016/j.compbiomed.2023.106647]
Abstract
Analysis of the vascular tree is the basic premise for automatically diagnosing retinal biomarkers associated with ophthalmic and systemic diseases, among which accurate identification of intersection and bifurcation points is quite challenging but important for disentangling the complex vascular network and tracking vessel morphology. In this paper, we present a novel directed graph search-based multi-attentive neural network approach to automatically segment the vascular network and separate intersections and bifurcations from color fundus images. Our approach uses multi-dimensional attention to adaptively integrate local features and their global dependencies while learning to focus on target structures at different scales to generate binary vascular maps. A directed graphical representation of the vascular network is constructed to represent the topology and spatial connectivity of the vascular structures. Using local geometric information including color difference, diameter, and angle, the complex vascular tree is decomposed into multiple sub-trees to finally classify and label vascular feature points. The proposed method has been tested on the DRIVE dataset and the IOSTAR dataset, containing 40 and 30 images respectively, achieving F1-scores of 0.863 and 0.764 for feature point detection and average accuracies of 0.914 and 0.854 for feature point classification. These results demonstrate that the proposed method outperforms state-of-the-art methods in feature point detection and classification.
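The geometric intuition behind separating crossings from bifurcations, counting branches and checking whether opposite branches are roughly collinear, can be sketched as follows. This is a hypothetical rule with an arbitrary tolerance, not the paper's actual criterion, which also uses color difference and diameter.

```python
def classify_feature_point(angles_deg, tol=25.0):
    """Toy geometric rule: 3 branches -> 'bifurcation'; 4 branches forming two
    roughly collinear pairs (opposite directions within `tol` degrees) ->
    'crossing'. `angles_deg` lists the outgoing branch directions in degrees."""
    n = len(angles_deg)
    if n == 3:
        return "bifurcation"
    if n == 4:
        a = sorted(x % 360 for x in angles_deg)
        # when sorted, opposite branches of a crossing pair up as (0,2) and (1,3)
        collinear = (abs((a[2] - a[0]) - 180) <= tol and
                     abs((a[3] - a[1]) - 180) <= tol)
        return "crossing" if collinear else "unknown"
    return "unknown"
```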
Affiliation(s)
- Gengyuan Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China; School of Life Sciences, South China University of Technology, Guangzhou, 510006, Guangdong, China
- Yuancong Huang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
- Ke Ma
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
- Zhengyu Duan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
- Zhongzhou Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
- Peng Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
- Jin Yuan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
6.
Wei X, Liu Q, Liu M, Wang Y, Meijering E. 3D Soma Detection in Large-Scale Whole Brain Images via a Two-Stage Neural Network. IEEE Transactions on Medical Imaging 2023; 42:148-157. [PMID: 36103445] [DOI: 10.1109/tmi.2022.3206605]
Abstract
3D soma detection in whole brain images is a critical step for neuron reconstruction. However, existing soma detection methods are not suitable for whole mouse brain images with large amounts of data and complex structure. In this paper, we propose a two-stage deep neural network to achieve fast and accurate soma detection in large-scale and high-resolution whole mouse brain images (more than 1 TB). In the first stage, a lightweight Multi-level Cross Classification Network (MCC-Net) is proposed to filter out images without somas and generate coarse candidate images by combining the feature extraction abilities of multiple convolution layers. It speeds up soma detection and reduces the computational complexity. In the second stage, to further obtain the accurate locations of somas in the whole mouse brain images, the Scale Fusion Segmentation Network (SFS-Net) is developed to segment soma regions from the candidate images. Specifically, SFS-Net captures multi-scale context information and establishes a complementary relationship between encoder and decoder by combining an encoder-decoder structure with a 3D Scale-Aware Pyramid Fusion (SAPF) module for better segmentation performance. Experimental results on three whole mouse brain images verify that the proposed method achieves excellent performance and provides beneficial information for neuron reconstruction. Additionally, we have established a public dataset named WBMSD, including 798 high-resolution and representative images (256 × 256 × 256 voxels) from three whole mouse brain images, dedicated to soma detection research, which will be released along with this paper.
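The two-stage pattern, a cheap classifier that rejects empty blocks so the expensive segmenter runs only on candidates, reduces to a few lines. `is_candidate` and `segment` below are placeholder callables standing in for trained networks such as MCC-Net and SFS-Net:

```python
def two_stage_detect(blocks, is_candidate, segment):
    """Generic filter-then-refine pipeline: stage 1 cheaply discards blocks
    unlikely to contain a target; stage 2 runs the expensive segmentation only
    on the surviving candidates. Returns {block index: segmentation result}."""
    results = {}
    for idx, block in enumerate(blocks):
        if not is_candidate(block):
            continue                    # stage 1: cheap rejection
        results[idx] = segment(block)   # stage 2: precise localization
    return results
```

The speed-up comes from how rarely stage 2 runs; in a whole-brain volume most blocks contain no soma at all.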
7.
Kuang X, Xu X, Fang L, Kozegar E, Chen H, Sun Y, Huang F, Tan T. Improved fully convolutional neuron networks on small retinal vessel segmentation using local phase as attention. Front Med (Lausanne) 2023; 10:1038534. [PMID: 36936204] [PMCID: PMC10014569] [DOI: 10.3389/fmed.2023.1038534]
Abstract
Retinal images have been proven significant in diagnosing multiple diseases such as diabetes, glaucoma, and hypertension. Retinal vessel segmentation is crucial for the quantitative analysis of retinal images. However, current methods mainly concentrate on the segmentation performance of overall retinal vessel structures, and small vessels do not receive enough attention due to their small percentage in full retinal images. Small retinal vessels are much more sensitive to the blood circulation system and are of great significance in the early diagnosis and warning of various diseases. This paper combines two unsupervised methods, local phase congruency (LPC) and orientation scores (OS), with a U-Net-based deep learning network as attention, and proposes the U-Net using local phase congruency and orientation scores (UN-LPCOS), which shows a remarkable ability to identify and segment small retinal vessels. A new metric called sensitivity on small vessels (Sesv) is also proposed to evaluate performance on small vessel segmentation. Our strategy was validated on both the DRIVE dataset and data from the Maastricht Study and achieved outstanding segmentation performance on both the overall vessel structure and small vessels.
Affiliation(s)
- Xihe Kuang
- The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Xiayu Xu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Leyuan Fang
- College of Electrical and Information Engineering, Hunan University, Changsha, Hunan, China
- Ehsan Kozegar
- Faculty of Technology and Engineering (East of Guilan), University of Guilan, Rudsar-Vajargah, Guilan, Iran
- Huachao Chen
- Faculty of Applied Sciences, Macao Polytechnic University, Macau, Macao SAR, China
- Yue Sun
- Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Fan Huang
- The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Tao Tan
- Faculty of Applied Sciences, Macao Polytechnic University, Macau, Macao SAR, China
- Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- *Correspondence: Tao Tan,
8.
Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022; 38:5329-5339. [PMID: 36303315] [PMCID: PMC9750132] [DOI: 10.1093/bioinformatics/btac712]
Abstract
MOTIVATION Large-scale neuronal morphologies are essential to neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology. Despite previous survey papers about neuron tracing from light microscopy data in the last decade, the rapid development of the field calls for an updated review focusing on new methods and remarkable applications. RESULTS This review outlines neuron tracing in various scenarios with the goal of helping the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances of the increasingly popular deep-learning enhanced methods. We highlight the semi-automatic methods for single neuron tracing of mammalian whole brains as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we exemplify the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli
- Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou
- Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lijuan Liu
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
9.
Zhou H, Cao T, Liu T, Liu S, Chen L, Chen Y, Huang Q, Ye W, Zeng S, Quan T. Super-resolution Segmentation Network for Reconstruction of Packed Neurites. Neuroinformatics 2022; 20:1155-1167. [PMID: 35851944] [DOI: 10.1007/s12021-022-09594-3]
Abstract
Neuron reconstruction can provide the quantitative data required for measuring the neuronal morphology and is crucial in brain research. However, the difficulty in reconstructing dense neurites, wherein massive labor is required for accurate reconstruction in most cases, has not been well resolved. In this work, we provide a new pathway for solving this challenge by proposing the super-resolution segmentation network (SRSNet), which builds the mapping of the neurites in the original neuronal images and their segmentation in a higher-resolution (HR) space. During the segmentation process, the distances between the boundaries of the packed neurites are enlarged, and only the central parts of the neurites are segmented. Owing to this strategy, the super-resolution segmented images are produced for subsequent reconstruction. We carried out experiments on neuronal images with a voxel size of 0.2 μm × 0.2 μm × 1 μm produced by fMOST. SRSNet achieves an average F1 score of 0.88 for automatic packed neurites reconstruction, which takes both the precision and recall values into account, while the average F1 scores of other state-of-the-art automatic tracing methods are less than 0.70.
Affiliation(s)
- Hang Zhou
- School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, China
- Tingting Cao
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Tian Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Shijie Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Lu Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Yijun Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Qing Huang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Wei Ye
- School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
- Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
10.
Chen W, Liu M, Du H, Radojevic M, Wang Y, Meijering E. Deep-Learning-Based Automated Neuron Reconstruction From 3D Microscopy Images Using Synthetic Training Images. IEEE Transactions on Medical Imaging 2022; 41:1031-1042. [PMID: 34847022] [DOI: 10.1109/tmi.2021.3130934]
Abstract
Digital reconstruction of neuronal structures from 3D microscopy images is critical for the quantitative investigation of brain circuits and functions. It is a challenging task that would greatly benefit from automatic neuron reconstruction methods. In this paper, we propose a novel method called SPE-DNR that combines spherical-patches extraction (SPE) and deep-learning for neuron reconstruction (DNR). Based on 2D Convolutional Neural Networks (CNNs) and the intensity distribution features extracted by SPE, it determines the tracing directions and classifies voxels into foreground or background. This way, starting from a set of seed points, it automatically traces the neurite centerlines and determines when to stop tracing. To avoid errors caused by imperfect manual reconstructions, we develop an image synthesizing scheme to generate synthetic training images with exact reconstructions. This scheme simulates 3D microscopy imaging conditions as well as structural defects, such as gaps and abrupt radii changes, to improve the visual realism of the synthetic images. To demonstrate the applicability and generalizability of SPE-DNR, we test it on 67 real 3D neuron microscopy images from three datasets. The experimental results show that the proposed SPE-DNR method is robust and competitive compared with other state-of-the-art neuron reconstruction methods.
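The seed-based tracing loop that such a method automates, predict a step direction, stop when the next voxel is classified as background, has this generic shape. Both callables are placeholders for the trained networks, and the 3D coordinates are illustrative:

```python
def trace_centerline(seed, step_direction, is_foreground, max_steps=100):
    """Skeleton of iterative centerline tracing: from a seed point, repeatedly
    ask a direction predictor for the next voxel and stop when that voxel is
    classified as background (or a step budget is exhausted)."""
    path = [seed]
    for _ in range(max_steps):
        nxt = step_direction(path[-1])   # stand-in for the tracing network
        if not is_foreground(nxt):       # stand-in for the voxel classifier
            break
        path.append(nxt)
    return path
```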
11.
Li Y, Ren T, Li J, Wang H, Li X, Li A. VBNet: An end-to-end 3D neural network for vessel bifurcation point detection in mesoscopic brain images. Comput Methods Programs Biomed 2022; 214:106567. [PMID: 34906786] [DOI: 10.1016/j.cmpb.2021.106567]
Abstract
BACKGROUND AND OBJECTIVE Accurate detection of vessel bifurcation points from mesoscopic whole-brain images plays an important role in reconstructing cerebrovascular networks and understanding the pathogenesis of brain diseases. Existing detection methods are either less accurate or inefficient. In this paper, we propose VBNet, an end-to-end, one-stage neural network to detect vessel bifurcation points in 3D images. METHODS Firstly, we designed a 3D convolutional neural network (CNN) that takes a 3D image as input and outputs the coordinates of the bifurcation points in that image. The network contains a two-scale architecture to detect large and small bifurcation points respectively, balancing detection accuracy and efficiency. Then, to solve the problem of low accuracy caused by the imbalance between the numbers of large and small bifurcations, we designed a weighted loss function based on the radius distribution of blood vessels. Finally, we extended the method to detect bifurcation points in large-scale volumes. RESULTS The proposed method was tested on two mouse cerebral vascular datasets and a synthetic dataset. On the synthetic dataset, the F1-score of the proposed method reached 96.37%; on the two real datasets, the F1-scores were 92.35% and 86.18%, respectively. The detection performance of the proposed method reaches the state-of-the-art level. CONCLUSIONS We proposed a novel method for detecting vessel bifurcation points in 3D images. It can be used to precisely locate vessel bifurcations in various cerebrovascular images. This method can further be used to reconstruct and analyze vascular networks, and can also help researchers design detection methods for other targets in 3D biomedical images.
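The radius-balanced weighting idea can be sketched with the standard inverse-frequency heuristic. The bin edges and formula below are illustrative assumptions, not the loss actually used in VBNet:

```python
from collections import Counter

def radius_weights(radii, bins=(0.0, 2.0, 5.0, float("inf"))):
    """Assign each sample a weight inversely proportional to how common its
    radius bin is, so rare (e.g. large-vessel) bifurcations are not drowned
    out. Bin edges here are arbitrary illustrative values."""
    def bin_of(r):
        for i in range(len(bins) - 1):
            if bins[i] <= r < bins[i + 1]:
                return i
        return len(bins) - 2
    labels = [bin_of(r) for r in radii]
    counts = Counter(labels)
    k = len(counts)  # number of bins actually present
    # the usual "balanced" class-weight heuristic: total / (k * count_in_bin)
    return [len(radii) / (k * counts[b]) for b in labels]

def weighted_loss(losses, weights):
    """Mean of per-sample losses scaled by their radius-based weights."""
    return sum(l * w for l, w in zip(losses, weights)) / len(losses)
```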
Affiliation(s)
- Yuxin Li
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China
- Tong Ren
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China
- Junhuai Li
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China
- Huaijun Wang
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China
- Xiangning Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan 430074, China; HUST-Suzhou Institute for Brainsmatics, Suzhou 215123, China
- Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan 430074, China; HUST-Suzhou Institute for Brainsmatics, Suzhou 215123, China
12.
Liu L, Chen D, Shu M, Li B, Shu H, Paques M, Cohen LD. Trajectory Grouping With Curvature Regularization for Tubular Structure Tracking. IEEE Transactions on Image Processing 2021; 31:405-418. [PMID: 34874858] [DOI: 10.1109/tip.2021.3131940]
Abstract
Tubular structure tracking is a crucial task in the fields of computer vision and medical image analysis. Minimal paths-based approaches have exhibited a strong ability to trace tubular structures, whereby a tubular structure can be naturally modeled as a minimal geodesic path computed with a suitable geodesic metric. However, existing minimal paths-based tracing approaches still suffer from difficulties such as shortcuts and the short branches combination problem, especially when dealing with images involving complicated tubular tree structures or background. In this paper, we introduce a new minimal paths-based model for minimally interactive tubular structure centerline extraction in conjunction with a perceptual grouping scheme. Basically, we take into account the prescribed tubular trajectories and curvature-penalized geodesic paths to seek suitable shortest paths. The proposed approach can benefit from the local smoothness prior on tubular structures and the global optimality of the graph-based path searching scheme. Experimental results on both synthetic and real images show that the proposed model indeed outperforms state-of-the-art minimal paths-based tubular structure tracing algorithms.
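A minimal way to see how a curvature penalty changes a shortest-path search is to put the incoming direction into the Dijkstra state and charge extra for turns. This toy 4-connected grid version is a stand-in for the continuous curvature-penalized geodesic metric in the paper:

```python
import heapq
from itertools import count

def curvature_penalized_path(cost, start, goal, turn_penalty=1.0):
    """Cheapest 4-connected grid path from `start` to `goal`, where each step
    pays the target cell's cost plus `turn_penalty` whenever the direction
    changes. State = (cell, incoming direction), so a turn-heavy but otherwise
    cheap route can lose to a straighter one. Returns the total cost."""
    rows, cols = len(cost), len(cost[0])
    dirs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    tie = count()  # tiebreaker so heap never compares cells/directions
    pq = [(cost[start[0]][start[1]], next(tie), start, None)]
    best = {}
    while pq:
        c, _, cell, d = heapq.heappop(pq)
        if cell == goal:
            return c
        if best.get((cell, d), float("inf")) <= c:
            continue
        best[(cell, d)] = c
        r, x = cell
        for nd in dirs:
            nr, nx = r + nd[0], x + nd[1]
            if 0 <= nr < rows and 0 <= nx < cols:
                step = cost[nr][nx] + (turn_penalty if d is not None and d != nd else 0.0)
                heapq.heappush(pq, (c + step, next(tie), (nr, nx), nd))
    return float("inf")
```

On a uniform grid, raising `turn_penalty` makes the optimal path prefer routes with the fewest direction changes, the discrete analogue of penalizing curvature.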
13.
Zhang Y, Liu M, Yu F, Zeng T, Wang Y. An O-shape Neural Network With Attention Modules to Detect Junctions in Biomedical Images Without Segmentation. IEEE J Biomed Health Inform 2021; 26:774-785. [PMID: 34197332] [DOI: 10.1109/jbhi.2021.3094187]
Abstract
Junctions play an important role in biomedical research such as retinal biometric identification, retinal image registration, eye-related disease diagnosis and neuron reconstruction. However, junction detection in original biomedical images is extremely challenging. For example, retinal images contain many tiny blood vessels with complicated structures and low contrast, which makes it challenging to detect junctions. In this paper, we propose an O-shape Network architecture with Attention modules (Attention O-Net), which includes a Junction Detection Branch (JDB) and a Local Enhancement Branch (LEB) to detect junctions in biomedical images without segmentation. In JDB, a heatmap indicating the probabilities of junctions is estimated, and the positions with the locally highest values are chosen as the junctions; however, this is difficult when the images contain weak filament signals. Therefore, LEB is constructed to enhance the thin-branch foreground and make the network pay more attention to regions with low contrast, which helps alleviate the foreground imbalance between thin and thick branches and detect the junctions of thin branches. Furthermore, attention modules are utilized to introduce the feature maps from LEB to JDB, which establishes a complementary relationship and further integrates local features and contextual information between the two branches. The proposed method achieves the highest average F1-scores of 0.82, 0.73 and 0.94 on two retinal datasets and one neuron dataset, respectively. The experimental results confirm that Attention O-Net outperforms other state-of-the-art detection methods and is helpful for retinal biometric identification.
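The heatmap post-processing step, keeping positions whose predicted junction probability is a local maximum above a threshold, can be sketched as follows. This is an illustrative 8-neighbourhood rule on a 2D list, not the exact Attention O-Net post-processing:

```python
def detect_junctions(heatmap, threshold=0.5):
    """Pick junction candidates as strict local maxima of a 2D probability
    heatmap that also exceed `threshold` (8-neighbourhood comparison)."""
    rows, cols = len(heatmap), len(heatmap[0])
    peaks = []
    for r in range(rows):
        for c in range(cols):
            v = heatmap[r][c]
            if v < threshold:
                continue
            neighbours = [heatmap[rr][cc]
                          for rr in range(max(0, r - 1), min(rows, r + 2))
                          for cc in range(max(0, c - 1), min(cols, c + 2))
                          if (rr, cc) != (r, c)]
            if all(v > n for n in neighbours):
                peaks.append((r, c))
    return peaks
```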
|
14
|
Shen L, Liu M, Wang C, Guo C, Meijering E, Wang Y. Efficient 3D Junction Detection in Biomedical Images Based on a Circular Sampling Model and Reverse Mapping. IEEE J Biomed Health Inform 2021; 25:1612-1623. [PMID: 33166258 DOI: 10.1109/jbhi.2020.3036743] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Indexed: 11/08/2022]
Abstract
Detection and localization of terminations and junctions is a key step in the morphological reconstruction of tree-like structures in images. Previously, a ray-shooting model was proposed to detect termination points automatically. In this paper, we propose an automatic method for 3D junction point detection in biomedical images, relying on a circular sampling model and a 2D-to-3D reverse mapping approach. First, the existing ray-shooting model is improved to a circular sampling model that extracts the pixel intensity distribution across the potential branches around the point of interest, reducing the computation cost dramatically compared to the original ray-shooting model. Then, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is employed to detect 2D junction points in maximum intensity projections (MIPs) of sub-volumes of a given 3D image, by determining the number of branches in each candidate junction region. Finally, a 2D-to-3D reverse mapping approach maps the detected 2D junction points in the MIPs back to 3D junction points in the original 3D image. The proposed 3D junction point detection method is implemented as a built-in tool in the Vaa3D platform. Experiments on multiple 2D and 3D images show average precision and recall rates of 87.11% and 88.33%, respectively. In addition, the proposed algorithm is dozens of times faster than an existing deep-learning-based model. The proposed method thus offers excellent detection precision and computational efficiency for junction detection, even in large-scale biomedical images.
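As a rough illustration of the MIP and 2D-to-3D reverse mapping ideas above: project along z, detect in 2D, then recover the depth at which the projected maximum occurred. The function names and the nearest-maximum rule are our simplifications, not the authors' exact procedure:

```python
import numpy as np

def mip_z(volume):
    """Maximum intensity projection of a (z, y, x) volume along the z axis."""
    return volume.max(axis=0)

def reverse_map(volume, y, x):
    """Map a 2D point detected in the MIP back to 3D by taking the depth at
    which the maximum along the z-column occurred (simplified sketch of the
    2D-to-3D reverse mapping idea)."""
    z = int(volume[:, y, x].argmax())
    return (z, y, x)
```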
|
15
|
Yang B, Chen W, Luo H, Tan Y, Liu M, Wang Y. Neuron Image Segmentation via Learning Deep Features and Enhancing Weak Neuronal Structures. IEEE J Biomed Health Inform 2021; 25:1634-1645. [PMID: 32809948 DOI: 10.1109/jbhi.2020.3017540] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Indexed: 11/07/2022]
Abstract
Neuron morphology reconstruction (tracing) in 3D volumetric images is critical for neuronal research. However, most existing neuron tracing methods are not applicable to challenging datasets in which the neuron images are contaminated by noise or contain weak filament signals. In this paper, we present a two-stage 3D neuron segmentation approach that learns deep features and enhances weak neuronal structures, to reduce the impact of image noise and strengthen weak-signal neuronal structures. In the first stage, we train a voxel-wise multi-level fully convolutional network (FCN), which specializes in learning deep features, to obtain 3D neuron image segmentation maps in an end-to-end manner. In the second stage, a ray-shooting model detects disconnected segments in the first-stage segmentation results; the local neuron diameter at each break point is estimated and the direction of the filamentary fragment is detected by the rayburst sampling algorithm. A Hessian-repair model is then built to repair the broken structures by enhancing weak neuronal structures within a fibrous region determined by the estimated local neuron diameter and fragment direction. Experimental results demonstrate that the proposed approach achieves better performance than other state-of-the-art methods for 3D neuron segmentation. Compared with neuron reconstruction on images segmented by other methods, the proposed approach gains 47.83% and 34.83% improvements in the average distance scores, and the average precision and recall rates of branch point detection with our method are 38.74% and 22.53% higher than the detection results without segmentation.
|
16
|
Chen W, Liu M, Zhan Q, Tan Y, Meijering E, Radojevic M, Wang Y. Spherical-Patches Extraction for Deep-Learning-Based Critical Points Detection in 3D Neuron Microscopy Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:527-538. [PMID: 33055023 DOI: 10.1109/tmi.2020.3031289] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Indexed: 06/11/2023]
Abstract
Digital reconstruction of neuronal structures is very important to neuroscience research. Many existing reconstruction algorithms require a set of good seed points. 3D neuron critical points, including terminations, branch points, and cross-over points, are good candidates for such seed points. However, a method that can simultaneously detect all types of critical points has barely been explored. In this work, we present a method to simultaneously detect all three types of 3D critical points in neuron microscopy images, based on a spherical-patches extraction (SPE) method and a 2D multi-stream convolutional neural network (CNN). SPE uses a set of concentric spherical surfaces centered at a given critical point candidate to extract intensity distribution features around the point. Then, a group of 2D spherical patches is generated by projecting the surfaces into 2D rectangular image patches according to the order of the azimuth and polar angles. Finally, a 2D multi-stream CNN, in which each stream receives one spherical patch as input, learns the intensity distribution features from those patches and classifies the candidate into one of four classes: termination, branch point, cross-over point, or non-critical point. Experimental results confirm that the proposed method outperforms other state-of-the-art critical point detection methods, and the critical-point-based neuron reconstruction results demonstrate the potential of the detected points to serve as good seeds for neuron reconstruction. Additionally, we have established a public dataset dedicated to neuron critical point detection, released along with this article.
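A minimal sketch of how one concentric spherical surface could be unrolled into a 2D (polar x azimuth) patch by nearest-voxel sampling. The grid resolution, names, and nearest-neighbor sampling rule are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def spherical_patch(volume, center, radius, n_theta=8, n_phi=16):
    """Sample voxel intensities on a sphere of `radius` around `center`
    (z, y, x) and unroll them into an (n_theta, n_phi) patch indexed by
    polar and azimuth angles. Sketch of the SPE idea; parameters are ours."""
    cz, cy, cx = center
    thetas = np.linspace(0, np.pi, n_theta)                  # polar angle
    phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)  # azimuth
    patch = np.zeros((n_theta, n_phi), dtype=volume.dtype)
    for i, t in enumerate(thetas):
        for j, p in enumerate(phis):
            # Spherical-to-Cartesian, rounded to the nearest voxel.
            z = int(round(cz + radius * np.cos(t)))
            y = int(round(cy + radius * np.sin(t) * np.sin(p)))
            x = int(round(cx + radius * np.sin(t) * np.cos(p)))
            # Clamp to the volume bounds.
            z = min(max(z, 0), volume.shape[0] - 1)
            y = min(max(y, 0), volume.shape[1] - 1)
            x = min(max(x, 0), volume.shape[2] - 1)
            patch[i, j] = volume[z, y, x]
    return patch
```

In the paper, several such patches (one per radius) would each feed one stream of the multi-stream CNN.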
|
17
|
Zhao J, Chen X, Xiong Z, Liu D, Zeng J, Xie C, Zhang Y, Zha ZJ, Bi G, Wu F. Neuronal Population Reconstruction From Ultra-Scale Optical Microscopy Images via Progressive Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:4034-4046. [PMID: 32746145 DOI: 10.1109/tmi.2020.3009148] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Indexed: 06/11/2023]
Abstract
Reconstruction of neuronal populations from ultra-scale optical microscopy (OM) images is essential for investigating neuronal circuits and brain mechanisms. Noise, low contrast, huge memory requirements, and high computational cost pose significant challenges for neuronal population reconstruction. Recently, many studies have extracted neuron signals using deep neural networks (DNNs); however, training such DNNs usually relies on a huge number of voxel-wise annotations of OM images, which are expensive in both money and labor. In this paper, we propose a novel framework for dense neuronal population reconstruction from ultra-scale images. To avoid the high cost of obtaining manual annotations for training DNNs, we propose a progressive learning scheme for neuronal population reconstruction (PLNPR) that does not require any manual annotations. Our PLNPR scheme consists of a traditional neuron tracing module and a deep segmentation network that mutually complement and progressively promote each other. To reconstruct dense neuronal populations from a terabyte-sized ultra-scale image, we introduce an automatic framework that adaptively traces neurons block by block and fuses fragmented neurites in overlapped regions continuously and smoothly. We build a dataset, "VISoR-40", consisting of 40 large-scale OM image blocks from cortical regions of a mouse. Extensive experimental results on our VISoR-40 dataset and the public BigNeuron dataset demonstrate the effectiveness and superiority of our method for both neuronal population reconstruction and single-neuron reconstruction. Furthermore, we successfully apply our method to reconstruct dense neuronal populations from an ultra-scale mouse brain slice. The proposed adaptive block propagation and fusion strategies greatly improve the completeness of neurites in dense neuronal population reconstruction.
|
18
|
Wang Z, Jiang X, Liu J, Cheng KT, Yang X. Multi-Task Siamese Network for Retinal Artery/Vein Separation via Deep Convolution Along Vessel. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2904-2919. [PMID: 32167888 DOI: 10.1109/tmi.2020.2980117] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Indexed: 06/10/2023]
Abstract
Vascular tree disentanglement and vessel type classification are two crucial steps of the graph-based method for retinal artery-vein (A/V) separation. Existing approaches treat them as two independent tasks and mostly rely on ad hoc rules (e.g., change of vessel directions) and hand-crafted features (e.g., color, thickness) to handle them respectively. However, we argue that the two tasks are highly correlated and should be handled jointly, since knowing the A/V type can unravel highly entangled vascular trees, which in turn helps to infer the types of connected vessels that are hard to classify from appearance alone. Therefore, designing features and models for the two tasks in isolation often leads to a suboptimal solution for A/V separation. In view of this, this paper proposes a multi-task siamese network that learns the two tasks jointly and thus yields more robust deep features for accurate A/V separation. Specifically, we first introduce Convolution Along Vessel (CAV) to extract visual features by convolving a fundus image along vessel segments, and geometric features by tracking the directions of blood flow in vessels. The siamese network is then trained on multiple tasks: i) classifying the A/V types of vessel segments using visual features only, and ii) estimating the similarity of every two connected segments by comparing their visual and geometric features, in order to disentangle the vasculature into individual vessel trees. Finally, the results of the two tasks mutually correct each other to accomplish the final A/V separation. Experimental results demonstrate that our method achieves accuracies of 94.7%, 96.9%, and 94.5% on three major databases (DRIVE, INSPIRE, WIDE), respectively, outperforming recent state-of-the-art methods.
|
19
|
Huang Q, Chen Y, Liu S, Xu C, Cao T, Xu Y, Wang X, Rao G, Li A, Zeng S, Quan T. Weakly Supervised Learning of 3D Deep Network for Neuron Reconstruction. Front Neuroanat 2020; 14:38. [PMID: 32848636 PMCID: PMC7399060 DOI: 10.3389/fnana.2020.00038] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Received: 12/10/2019] [Accepted: 06/05/2020] [Indexed: 11/13/2022] Open
Abstract
Digital reconstruction or tracing of 3D tree-like neuronal structures from optical microscopy images is essential for understanding the functionality of neurons and revealing the connectivity of neuronal networks. Despite the existence of numerous tracing methods, reconstructing a neuron from highly noisy images remains challenging, particularly for neurites with low and inhomogeneous intensities. Performing deep convolutional neural network (CNN)-based segmentation prior to neuron tracing offers an approach to this problem by separating weak neurites from a noisy background. However, deep learning-based methods require large amounts of manual annotation, which is labor-intensive and limits generalization across datasets. In this study, we present a weakly supervised learning method for training a deep CNN for neuron reconstruction without manual annotations. Specifically, we apply a 3D residual CNN as the architecture for discriminative neuronal feature extraction, and construct initial pseudo-labels (without manual segmentation) for the neuronal images on the basis of an existing automatic tracing method. A weakly supervised learning framework is proposed that iteratively trains the CNN model for improved prediction and refines the pseudo-labels to update the training samples: the pseudo-labels are iteratively modified by mining and adding weak neurites from the CNN-predicted probability map on the basis of their tubularity and continuity. The proposed method was evaluated on several challenging images from the public BigNeuron and DIADEM datasets as well as fMOST datasets. Owing to the adoption of 3D deep CNNs and weakly supervised learning, the presented method effectively detects weak neurites in noisy images and achieves results similar to those of a CNN model trained with manual annotations. Tracing performance was significantly improved by the proposed method on both small and large datasets (>100 GB). Moreover, the proposed method proved superior to several recent tracing methods on original images. The results obtained on various large-scale datasets demonstrate the generalization and high precision achieved by the proposed method for neuron reconstruction.
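The iterative train-predict-refine loop described above can be sketched as a generic skeleton. Every callable here is a placeholder we introduce for illustration; the paper's actual refinement mines weak neurites from the probability map using tubularity and continuity:

```python
def progressive_refine(train_step, predict, refine, image, pseudo_label, n_rounds=3):
    """Skeleton of an iterative weakly supervised loop: train on the current
    pseudo-labels, predict a probability map, then refine the pseudo-labels
    from the prediction. `train_step`, `predict`, and `refine` are
    hypothetical placeholders, not the paper's API."""
    model = None
    for _ in range(n_rounds):
        model = train_step(model, image, pseudo_label)   # fit on current labels
        prob_map = predict(model, image)                 # predicted probability map
        pseudo_label = refine(prob_map, pseudo_label)    # mine/add weak structures
    return model, pseudo_label
```

With stub callables, the loop simply threads state through the three steps.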
Affiliation(s)
- Qing Huang, Yijun Chen, Cheng Xu, Tingting Cao, Xiaojun Wang, Gong Rao, Anan Li, Shaoqun Zeng, Tingwei Quan
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Shijie Liu
- School of Mathematics and Physics, China University of Geosciences, Wuhan, China
- Yongchao Xu
- School of Electronics Information and Communications, Huazhong University of Science and Technology, Wuhan, China
|
20
|
Shao W, Huang SJ, Liu M, Zhang D. Querying Representative and Informative Super-Pixels for Filament Segmentation in Bioimages. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2020; 17:1394-1405. [PMID: 30640624 DOI: 10.1109/tcbb.2019.2892741] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Indexed: 06/09/2023]
Abstract
Segmenting filamentary structures in bioimages is a critical step in a wide range of applications, including neuron reconstruction and blood vessel tracing. To achieve acceptable segmentation performance, most existing methods need large numbers of annotated filamentary images at the training stage, so they face the common challenge of high annotation cost. To address this problem, we propose an interactive segmentation method that actively selects a few super-pixels for annotation, alleviating the annotators' burden. Specifically, we first apply the Simple Linear Iterative Clustering (SLIC) algorithm to segment filamentary images into compact and consistent super-pixels, and then propose a novel batch-mode active learning method to select the most representative and informative (BMRI) super-pixels for pixel-level annotation. We then use a bagging strategy to extract several sets of pixels from the annotated super-pixels and build different Laplacian Regularized Gaussian Mixture Models (Lap-GMM) for pixel-level segmentation. Finally, we combine the multiple Lap-GMM models in a classifier ensemble using a majority voting strategy. We evaluate our method on three publicly available filamentary image datasets. Experimental results show that, to achieve performance comparable to existing methods, the proposed algorithm saves 40 percent of the annotation effort for experts.
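The final ensemble step (combining the Lap-GMM segmenters by majority voting) reduces to a per-pixel vote, sketched here under the assumption of stacked binary label maps:

```python
import numpy as np

def majority_vote(predictions):
    """Combine binary pixel-label maps from several models by strict
    majority voting. `predictions` is a list of equally shaped 0/1 arrays;
    a pixel is foreground iff more than half the models say so."""
    stacked = np.stack(predictions)              # shape: (n_models, ...)
    votes = stacked.sum(axis=0)                  # per-pixel foreground count
    return (votes * 2 > stacked.shape[0]).astype(int)
```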
|
21
|
Tan Y, Liu M, Chen W, Wang X, Peng H, Wang Y. DeepBranch: Deep Neural Networks for Branch Point Detection in Biomedical Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:1195-1205. [PMID: 31603774 DOI: 10.1109/tmi.2019.2945980] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Indexed: 06/10/2023]
Abstract
Morphology reconstruction of tree-like structures in volumetric images, such as neurons, retinal blood vessels, and bronchi, is of fundamental interest for biomedical research. 3D branch points play an important role in many reconstruction applications, especially for graph-based or seed-based reconstruction methods, and can help to visualize morphological structures. A few hand-crafted models have been proposed to detect branch points, but they depend heavily on empirical parameter settings for different images. In this paper, we propose a DeepBranch model for branch point detection with two levels of convolutional networks: a candidate region segmenter and a false positive reducer. On the first level, an improved 3D U-Net model with anisotropic convolution kernels detects initial candidates. Compared with the traditional sliding-window strategy, the improved 3D U-Net avoids massive redundant computation and dramatically speeds up detection by employing dense inference with fully convolutional networks (FCN). On the second level, a multi-scale multi-view convolutional neural network (MSMV-Net) reduces false positives by feeding multi-scale views of 3D volumes into multiple streams of 2D convolutional neural networks (CNNs), which takes full advantage of spatial contextual information and accommodates different branch sizes. Experiments on multiple 3D biomedical images of neurons, retinal blood vessels, and bronchi confirm that the proposed 3D branch point detection method outperforms other state-of-the-art detection methods and is helpful for graph-based or seed-based reconstruction methods.
|
22
|
Guo Y, Peng Y. BSCN: bidirectional symmetric cascade network for retinal vessel segmentation. BMC Med Imaging 2020; 20:20. [PMID: 32070306 PMCID: PMC7029442 DOI: 10.1186/s12880-020-0412-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Received: 07/13/2019] [Accepted: 01/14/2020] [Indexed: 11/18/2022] Open
Abstract
Background Retinal blood vessel segmentation provides important guidance for the analysis and diagnosis of cardiovascular diseases such as hypertension and diabetes. However, traditional manual segmentation of retinal blood vessels is not only time-consuming and laborious but also cannot guarantee diagnostic accuracy and efficiency. It is therefore especially significant to create a computer-aided method for automatic and accurate retinal vessel segmentation. Methods To extract the contours of blood vessels of different diameters and realize fine segmentation of retinal vessels, we propose a Bidirectional Symmetric Cascade Network (BSCN) in which each layer is supervised by vessel contour labels of a specific diameter scale, instead of training all layers with one general ground truth. In addition, to enrich the multi-scale feature representation of retinal blood vessels, we propose the Dense Dilated Convolution Module (DDCM), which extracts retinal vessel features of different diameters by adjusting the dilation rate in its dilated convolution branches and generates a vessel contour prediction in each of the two directions. All dense dilated convolution module outputs are fused to obtain the final vessel segmentation results. Results We experimented on four datasets, DRIVE, STARE, HRF, and CHASE_DB1, and the proposed method reaches accuracies of 0.9846/0.9872/0.9856/0.9889 and AUCs of 0.9874/0.9941/0.9882/0.9874, respectively. Conclusions The experimental results show that, compared with state-of-the-art methods, the proposed method is highly robust: it avoids adverse interference from the lesion background and accurately detects tiny blood vessels at intersections.
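A toy illustration of the dilated-convolution idea behind the DDCM branches: spacing the kernel taps by the dilation rate widens the receptive field without adding parameters. This 1D sketch is ours, not the paper's module:

```python
import numpy as np

def dilated_conv1d(signal, kernel, rate):
    """Valid 1D convolution with a dilated kernel: taps are `rate` samples
    apart, so a length-k kernel covers (k - 1) * rate + 1 samples. Toy
    sketch of how a dilation rate widens the receptive field."""
    k = len(kernel)
    span = (k - 1) * rate + 1            # receptive field of one output
    out = []
    for start in range(len(signal) - span + 1):
        taps = signal[start:start + span:rate]   # every `rate`-th sample
        out.append(float(np.dot(taps, kernel)))
    return out
```

With rate 1 this is ordinary correlation; larger rates cover wider spans with the same two taps.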
Affiliation(s)
- Yanfei Guo
- College of Information Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, Shandong, China
- Yanjun Peng
- College of Information Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, Shandong, China; Shandong Province Key Laboratory of Wisdom Mining Information Technology, Shandong University of Science and Technology, Qingdao 266590, Shandong, China
|
23
|
Zhao Y, Xie J, Zhang H, Zheng Y, Zhao Y, Qi H, Zhao Y, Su P, Liu J, Liu Y. Retinal Vascular Network Topology Reconstruction and Artery/Vein Classification via Dominant Set Clustering. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:341-356. [PMID: 31283498 DOI: 10.1109/tmi.2019.2926492] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Indexed: 06/09/2023]
Abstract
The estimation of vascular network topology in complex networks is important in understanding the relationship between vascular changes and a wide spectrum of diseases. Automatic classification of the retinal vascular trees into arteries and veins is of direct assistance to the ophthalmologist in terms of diagnosis and treatment of eye disease. However, it is challenging due to their projective ambiguity and subtle changes in appearance, contrast, and geometry in the imaging process. In this paper, we propose a novel method that is capable of making the artery/vein (A/V) distinction in retinal color fundus images based on vascular network topological properties. To this end, we adapt the concept of dominant set clustering and formalize the retinal blood vessel topology estimation and the A/V classification as a pairwise clustering problem. The graph is constructed through image segmentation, skeletonization, and identification of significant nodes. The edge weight is defined as the inverse Euclidean distance between its two end points in the feature space of intensity, orientation, curvature, diameter, and entropy. The reconstructed vascular network is classified into arteries and veins based on their intensity and morphology. The proposed approach has been applied to five public databases, namely INSPIRE, IOSTAR, VICAVR, DRIVE, and WIDE, and achieved high accuracies of 95.1%, 94.2%, 93.8%, 91.1%, and 91.0%, respectively. Furthermore, we have made manual annotations of the blood vessel topologies for INSPIRE, IOSTAR, VICAVR, and DRIVE datasets, and these annotations are released for public access so as to facilitate researchers in the community.
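The edge-weight definition above (inverse Euclidean distance between the two end points in the feature space of intensity, orientation, curvature, diameter, and entropy) can be written directly; the small `eps` guard is our addition to avoid division by zero:

```python
import numpy as np

def edge_weight(f1, f2, eps=1e-8):
    """Graph edge weight as the inverse Euclidean distance between two
    feature vectors, so that similar end points get strongly weighted
    edges. `eps` (ours) guards the zero-distance case."""
    return 1.0 / (np.linalg.norm(np.asarray(f1, float) - np.asarray(f2, float)) + eps)
```

Nodes with similar features thus attract larger weights, which is what the pairwise clustering exploits.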
|
24
|
Zhao H, Sun Y, Li H. Retinal vascular junction detection and classification via deep neural networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 183:105096. [PMID: 31586789 DOI: 10.1016/j.cmpb.2019.105096] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Received: 06/15/2019] [Revised: 09/09/2019] [Accepted: 09/25/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVES The retinal fundus contains intricate vascular trees, some of which mutually intersect and overlap. The intersections and overlaps of retinal vessels appear as vascular junctions (i.e., bifurcations and crossovers) in 2D retinal images. These junctions are important for analyzing vascular diseases and tracking vessel morphology. In this paper, we propose a two-stage pipeline to detect and classify junction points. METHODS In the detection stage, an RCNN-based Junction Proposal Network searches for potential bifurcation and crossover locations directly on color retinal images, followed by a Junction Refinement Network that eliminates false detections. In the classification stage, the detected junction points are identified as crossovers or bifurcations using the proposed Junction Classification Network, which shares the same model structure as the refinement network. RESULTS Our approach achieves 70% and 60% F1-scores on the DRIVE and IOSTAR datasets respectively, outperforming the state-of-the-art methods by 4.5% and 1.7%, with high and balanced precision and recall values. CONCLUSIONS This paper proposes a new junction detection and classification method that operates directly on color retinal images without any vessel segmentation or skeleton preprocessing. The superior performance demonstrates the effectiveness of our approach.
Affiliation(s)
- He Zhao, Yun Sun, Huiqi Li
- School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
|
25
|
Li S, Quan T, Zhou H, Huang Q, Guan T, Chen Y, Xu C, Kang H, Li A, Fu L, Luo Q, Gong H, Zeng S. Brain-Wide Shape Reconstruction of a Traced Neuron Using the Convex Image Segmentation Method. Neuroinformatics 2019; 18:199-218. [PMID: 31396858 DOI: 10.1007/s12021-019-09434-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Indexed: 12/19/2022]
Abstract
Neuronal shape reconstruction is a helpful technique for establishing neuron identity, inferring neuronal connections, mapping neuronal circuits, and so on. Advances in optical imaging techniques have enabled data collection that covers the shape of a neuron across the whole brain, considerably extending the scope of neuronal anatomy. However, such datasets often include many fuzzy neurites and many crossover regions where neurites are closely attached, which makes neuronal shape reconstruction more challenging. In this study, we propose a convex image segmentation model for neuronal shape reconstruction that segments a neurite into cross sections along its traced skeleton. Both the sparse nature of gradient images and the rule that fuzzy neurites usually have a small radius are utilized to improve neuronal shape reconstruction in regions with fuzzy neurites. Because the model is closely tied to the traced skeleton points, this relationship can be used to identify neurites in crossover regions. We demonstrate the performance of our model on various datasets, including those with fuzzy neurites and those with crossover regions, and verify that our model robustly reconstructs neuron shapes at brain-wide scale.
Affiliation(s)
- Shiwei Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; School of Mathematics and Economics, Hubei University of Education, Wuhan, 430205, Hubei, China
- Hang Zhou
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Qing Huang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Tao Guan
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Yijun Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Cheng Xu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Hongtao Kang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Ling Fu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
26
Liu M, Chen W, Wang C, Peng H. A Multiscale Ray-Shooting Model for Termination Detection of Tree-Like Structures in Biomedical Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1923-1934. [PMID: 30668496 DOI: 10.1109/tmi.2019.2893117] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Digital reconstruction (tracing) of tree-like structures, such as neurons, retinal blood vessels, and bronchi, from volumetric images and 2D images is very important to biomedical research. Many existing reconstruction algorithms rely on a set of good seed points. The 2D or 3D terminations are good candidates for such seed points. In this paper, we propose an automatic method to detect terminations of tree-like structures based on a multiscale ray-shooting model and a termination visual prior. The multiscale ray-shooting model detects 2D terminations by extracting and analyzing the multiscale intensity distribution features around a termination candidate. The range of scale is adaptively determined according to the local neurite diameter estimated by the Rayburst sampling algorithm in combination with the gray-weighted distance transform. The termination visual prior is based on a key observation: when observing a 3D termination from three orthogonal directions without occlusion, we can recognize it in at least two views. Using this prior with the multiscale ray-shooting model, we can detect 3D terminations with high accuracy. Experiments on 3D neuron image stacks, 2D neuron images, 3D bronchus image stacks, and 2D retinal blood vessel images exhibit average precision and recall rates of 87.50% and 90.54%. The experimental results confirm that the proposed method outperforms other state-of-the-art termination detection methods.
27
Zhang J, Bekkers E, Chen D, Berendschot TTJM, Schouten J, Pluim JPW, Shi Y, Dashtbozorg B, Romeny BMTH. Reconnection of Interrupted Curvilinear Structures via Cortically Inspired Completion for Ophthalmologic Images. IEEE Trans Biomed Eng 2019; 65:1151-1165. [PMID: 29683430 DOI: 10.1109/tbme.2017.2787025] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE In this paper, we propose a robust, efficient, and automatic reconnection algorithm for bridging interrupted curvilinear skeletons in ophthalmologic images. METHODS This method employs the contour completion process, i.e., mathematical modeling of the direction process in the roto-translation group, to achieve line propagation/completion. The completion process can be used to reconstruct interrupted curves by considering their local consistency. An explicit scheme with finite-difference approximation is used to construct the three-dimensional (3-D) completion kernel, where we choose the Gamma distribution for time integration. To process 2-D structures, the orientation score framework is exploited to lift the 2-D curvilinear segments into the 3-D space. The propagation and reconnection of interrupted segments are achieved by convolving the completion kernel with orientation scores via iterative group convolutions. To overcome the problem of incorrect skeletonization of 2-D structures at junctions, a 3-D segment-wise thinning technique is proposed to process each segment separately in orientation scores. RESULTS Validations on 4 datasets with different image modalities show that our method achieves a high average success rate in reconnecting gaps across a range of sizes, including challenging junction structures. CONCLUSION The reconnection approach can be a useful and reliable technique for bridging complex curvilinear interruptions. SIGNIFICANCE The presented method is a critical step toward obtaining more complete curvilinear structures in ophthalmologic images. It provides better topological and geometric connectivity for further analysis.
28
Girard F, Kavalec C, Cheriet F. Joint segmentation and classification of retinal arteries/veins from fundus images. Artif Intell Med 2019; 94:96-109. [DOI: 10.1016/j.artmed.2019.02.004] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2017] [Revised: 08/09/2018] [Accepted: 02/17/2019] [Indexed: 11/17/2022]
29
Li S, Quan T, Xu C, Huang Q, Kang H, Chen Y, Li A, Fu L, Luo Q, Gong H, Zeng S. Optimization of Traced Neuron Skeleton Using Lasso-Based Model. Front Neuroanat 2019; 13:18. [PMID: 30846931 PMCID: PMC6393391 DOI: 10.3389/fnana.2019.00018] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2018] [Accepted: 02/01/2019] [Indexed: 11/30/2022] Open
Abstract
Reconstruction of neuronal morphology from images involves mainly the extraction of neuronal skeleton points. It is an indispensable step in the quantitative analysis of neurons. Due to the complex morphology of neurons, many widely used tracing methods have difficulties in accurately acquiring skeleton points near branch points or in structures with tortuosity. Here, we propose two models to solve these problems. One is based on an L1-norm minimization model, which can better identify tortuous structure, namely, a local structure with large curvature skeleton points; the other detects an optimized branch point by considering the combination patterns of all neurites that link to this point. We combined these two models to achieve optimized skeleton detection for a neuron. We validate our models in various datasets including MOST and BigNeuron. In addition, we demonstrate that our method can optimize the traced skeletons from large-scale images. These characteristics of our approach indicate that it can reduce manual editing of traced skeletons and help to accelerate the accurate reconstruction of neuronal morphology.
Affiliation(s)
- Shiwei Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China; School of Mathematics and Economics, Hubei University of Education, Hubei, China
- Cheng Xu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Qing Huang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Hongtao Kang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Yijun Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Ling Fu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
- Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, China
30
Skibbe H, Reisert M, Nakae K, Watakabe A, Hata J, Mizukami H, Okano H, Yamamori T, Ishii S. PAT-Probabilistic Axon Tracking for Densely Labeled Neurons in Large 3-D Micrographs. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:69-78. [PMID: 30010551 DOI: 10.1109/tmi.2018.2855736] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
A major goal of contemporary neuroscience research is to map the structural connectivity of mammalian brain using microscopy imaging data. In this context, the reconstruction of densely labeled axons from two-photon microscopy images is a challenging and important task. The visually overlapping, crossing, and often strongly distorted images of the axons allow many ambiguous interpretations to be made. We address the problem of tracking axons in densely labeled samples of neurons in large image data sets acquired from marmoset brains. Our high-resolution images were acquired using two-photon microscopy and they provided whole brain coverage, occupying terabytes of memory. Both the image distortions and the large data set size frequently make it impractical to apply present-day neuron tracing algorithms to such data due to the optimization of such algorithms to the precise tracing of either single or sparse sets of neurons. Thus, new tracking techniques are needed. We propose a probabilistic axon tracking algorithm (PAT). PAT tackles the tracking of axons in two steps: locally (L-PAT) and globally (G-PAT). L-PAT is a probabilistic tracking algorithm that can tackle distorted, cluttered images of densely labeled axons. L-PAT divides a large micrograph into smaller image stacks. It then processes each image stack independently before mapping the axons in each image to a sparse model of axon trajectories. G-PAT merges the sparse L-PAT models into a single global model of axon trajectories by minimizing a global objective function using a probabilistic optimization method. We demonstrate the superior performance of PAT over standard approaches on synthetic data. Furthermore, we successfully apply PAT to densely labeled axons in large images acquired from marmoset brains.
31
Synthesizing retinal and neuronal images with generative adversarial nets. Med Image Anal 2018; 49:14-26. [DOI: 10.1016/j.media.2018.07.001] [Citation(s) in RCA: 107] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2017] [Revised: 06/08/2018] [Accepted: 07/03/2018] [Indexed: 11/23/2022]
32
Na T, Xie J, Zhao Y, Zhao Y, Liu Y, Wang Y, Liu J. Retinal vascular segmentation using superpixel-based line operator and its application to vascular topology estimation. Med Phys 2018; 45:3132-3146. [PMID: 29744887 DOI: 10.1002/mp.12953] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2018] [Revised: 03/28/2018] [Accepted: 04/22/2018] [Indexed: 02/03/2023] Open
Abstract
PURPOSE Automatic methods for analyzing retinal vascular networks, such as retinal blood vessel detection, vascular network topology estimation, and artery/vein classification, are of great assistance to the ophthalmologist in terms of diagnosis and treatment of a wide spectrum of diseases. METHODS We propose a new framework for precisely segmenting retinal vasculatures, constructing the retinal vascular network topology, and separating the arteries and veins. A nonlocal total variation inspired Retinex model is employed to remove image intensity inhomogeneities and relatively poor contrast. For better generalizability and segmentation performance, a superpixel-based line operator is proposed to distinguish between lines and edges, thus allowing more tolerance in the position of the respective contours. The concept of dominant sets clustering is adopted to estimate retinal vessel topology and classify the vessel network into arteries and veins. RESULTS The proposed segmentation method yields competitive results on three public data sets (STARE, DRIVE, and IOSTAR), and it has superior performance when compared with unsupervised segmentation methods, with accuracies of 0.954, 0.957, and 0.964, respectively. The topology estimation approach has been applied to five public databases (DRIVE, STARE, INSPIRE, IOSTAR, and VICAVR) and achieved high accuracies of 0.830, 0.910, 0.915, 0.928, and 0.889, respectively. The accuracies of artery/vein classification based on the estimated vascular topology on three public databases (INSPIRE, DRIVE, and VICAVR) are 0.909, 0.910, and 0.907, respectively. CONCLUSIONS The experimental results show that the proposed framework effectively addresses the crossover problem, a bottleneck issue in segmentation and vascular topology reconstruction. The vascular topology information significantly improves the accuracy of artery/vein classification.
Affiliation(s)
- Tong Na
- Georgetown Preparatory School, North Bethesda, 20852, USA; Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 10081, China
- Jianyang Xie
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 10081, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 10081, China
- Yifan Zhao
- School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield, MK43 0AL, UK
- Yue Liu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 10081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 10081, China
- Jiang Liu
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China
33
Abbasi-Sureshjani S, Favali M, Citti G, Sarti A, Ter Haar Romeny BM. Curvature Integration in a 5D Kernel for Extracting Vessel Connections in Retinal Images. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:606-621. [PMID: 28991743 DOI: 10.1109/tip.2017.2761543] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Tree-like structures, such as those in retinal images, are widely studied in computer-aided diagnosis systems for large-scale screening programs. Despite the several segmentation and tracking methods proposed in the literature, several limitations remain, specifically when two or more curvilinear structures cross or bifurcate, or in the presence of interrupted lines or highly curved blood vessels. In this paper, we propose a novel approach based on multi-orientation scores augmented with a contextual affinity matrix, both of which are inspired by the geometry of the primary visual cortex (V1) and its contextual connections. The connectivity is described with a 5D kernel obtained as the fundamental solution of the Fokker-Planck equation modeling the cortical connectivity in the lifted space of positions, orientations, curvatures, and intensity. It is further used in a self-tuning spectral clustering step to identify the main perceptual units in the stimuli. The proposed method has been validated on several easy as well as challenging structures in a set of artificial images and actual retinal patches. Supported by quantitative and qualitative results, the method is capable of overcoming the limitations of current state-of-the-art techniques.
34
Bekkers EJ, Chen D, Portegies JM. Nilpotent Approximations of Sub-Riemannian Distances for Fast Perceptual Grouping of Blood Vessels in 2D and 3D. JOURNAL OF MATHEMATICAL IMAGING AND VISION 2018; 60:882-899. [PMID: 30996523 PMCID: PMC6438598 DOI: 10.1007/s10851-018-0787-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/07/2017] [Accepted: 01/05/2018] [Indexed: 06/09/2023]
Abstract
We propose an efficient approach for the grouping of local orientations (points on vessels) via nilpotent approximations of sub-Riemannian distances in the 2D and 3D roto-translation groups SE(2) and SE(3). In our distance approximations we consider homogeneous norms on nilpotent groups that locally approximate SE(n), and which are obtained via the exponential and logarithmic map on SE(n). In a qualitative validation we show that the norms provide accurate approximations of the true sub-Riemannian distances, and we discuss their relations to the fundamental solution of the sub-Laplacian on SE(n). The quantitative experiments further confirm the accuracy of the approximations. Quantitative results are obtained by evaluating perceptual grouping performance of retinal blood vessels in 2D images and curves in challenging 3D synthetic volumes. The results show that (1) sub-Riemannian geometry is essential in achieving top performance and (2) grouping via the fast analytic approximations performs almost equally, or better, than data-adaptive fast marching approaches on R^n and SE(n).
Affiliation(s)
- Erik J. Bekkers
- Centre for Analysis, Scientific computing and Applications (CASA), Eindhoven University of Technology, Eindhoven, The Netherlands
- Da Chen
- CNRS, UMR 7534, CEREMADE, University Paris Dauphine, PSL Research University, 75016 Paris, France
- Jorg M. Portegies
- Centre for Analysis, Scientific computing and Applications (CASA), Eindhoven University of Technology, Eindhoven, The Netherlands
35
Retinal Vessels Segmentation Techniques and Algorithms: A Survey. APPLIED SCIENCES-BASEL 2018. [DOI: 10.3390/app8020155] [Citation(s) in RCA: 56] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
36
Abbasi-Sureshjani S, Zhang J, Duits R, ter Haar Romeny B. Retrieving challenging vessel connections in retinal images by line co-occurrence statistics. BIOLOGICAL CYBERNETICS 2017; 111:237-247. [PMID: 28488018 PMCID: PMC5506202 DOI: 10.1007/s00422-017-0718-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/21/2016] [Accepted: 04/19/2017] [Indexed: 06/07/2023]
Abstract
Natural images often contain curvilinear structures, which might be disconnected or partly occluded. Recovering the missing connections of disconnected structures is an open issue and needs appropriate geometric reasoning. We propose to find line co-occurrence statistics from the centerlines of blood vessels in retinal images and show their remarkable similarity to a well-known probabilistic model for the connectivity pattern in the primary visual cortex. Furthermore, the probabilistic model is trained from the data via statistics and used for automated grouping of interrupted vessels in a spectral clustering based approach. Several challenging image patches are investigated around junction points, where successful results indicate the perfect match of the trained model to the profiles of blood vessels in retinal images. Also, comparisons among several statistical models obtained from different datasets reveal their high similarity, i.e., they are independent of the dataset. On top of that, the best approximation of the statistical model with the symmetrized extension of the probabilistic model on the projective line bundle is found with a least square error smaller than [Formula: see text]. Apparently, the direction process on the projective line bundle is a good continuation model for vessels in retinal images.
Affiliation(s)
- Samaneh Abbasi-Sureshjani
- Department of Biomedical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
- Jiong Zhang
- Department of Biomedical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
- Remco Duits
- Department of Biomedical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
- Department of Mathematics and Computer Science, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
- Bart ter Haar Romeny
- Department of Biomedical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
- Department of Biomedical and Information Engineering, Northeastern University, 500 Zhihui Street, Shenyang, 110167 China
37
Jiang P, Dou Q. Fundus vessel segmentation based on self-adaptive classification strategy. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2017. [DOI: 10.3233/jifs-161432] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Ping Jiang
- School of Computer Science and Technology, Shandong Institute of Business and Technology, Yantai, China
- College of Computer Science and Technology, Jilin University, Changchun, China
- Quansheng Dou
- School of Computer Science and Technology, Shandong Institute of Business and Technology, Yantai, China
38
Radojevic M, Meijering E. Automated neuron tracing using probability hypothesis density filtering. Bioinformatics 2017; 33:1073-1080. [PMID: 28065895 DOI: 10.1093/bioinformatics/btw751] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2016] [Accepted: 11/22/2016] [Indexed: 01/18/2023] Open
Abstract
Motivation The functionality of neurons and their role in neuronal networks is tightly connected to the cell morphology. A fundamental problem in many neurobiological studies aiming to unravel this connection is the digital reconstruction of neuronal cell morphology from microscopic image data. Many methods have been developed for this, but they are far from perfect, and better methods are needed. Results Here we present a new method for tracing neuron centerlines needed for full reconstruction. The method uses a fundamentally different approach than previous methods by considering neuron tracing as a Bayesian multi-object tracking problem. The problem is solved using probability hypothesis density filtering. Results of experiments on 2D and 3D fluorescence microscopy image datasets of real neurons indicate that the proposed method performs comparably to, or even better than, the state of the art. Availability and Implementation Software implementing the proposed neuron tracing method was written in the Java programming language as a plugin for the ImageJ platform. Source code is freely available for non-commercial use at https://bitbucket.org/miroslavradojevic/phd. Contact meijering@imagescience.org. Supplementary information Supplementary data are available at Bioinformatics online.
39
Zhang Z, Xia S, Kanchanawong P. An integrated enhancement and reconstruction strategy for the quantitative extraction of actin stress fibers from fluorescence micrographs. BMC Bioinformatics 2017; 18:268. [PMID: 28532442 PMCID: PMC5440974 DOI: 10.1186/s12859-017-1684-y] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2016] [Accepted: 05/11/2017] [Indexed: 01/23/2023] Open
Abstract
BACKGROUND Stress fibers are prominent organizations of actin filaments that perform important functions in cellular processes such as migration, polarization, and traction force generation, and whose collective organization reflects the physiological and mechanical activities of the cells. Easily visualized by fluorescence microscopy, stress fibers are widely used as qualitative descriptors of cell phenotypes. However, due to the complexity of the stress fibers and the presence of other actin-containing cellular features, images of stress fibers are relatively challenging to quantitatively analyze using previously developed approaches, requiring significant user intervention. This poses a challenge for the automation of their detection, segmentation, and quantitative analysis. RESULTS Here we describe an open-source software package, SFEX (Stress Fiber Extractor), which is geared for efficient enhancement, segmentation, and analysis of actin stress fibers in adherent tissue culture cells. Our method makes use of a carefully chosen image filtering technique to enhance filamentous structures, effectively facilitating the detection and segmentation of stress fibers by binary thresholding. We subdivided the skeletons of stress fiber traces into piecewise-linear fragments, and used a set of geometric criteria to reconstruct the stress fiber networks by pairing appropriate fiber fragments. Our strategy enables the trajectories of a majority of stress fibers within the cells to be comprehensively extracted. We also present a method for quantifying the dimensions of the stress fibers using an image gradient-based approach. We determine the optimal parameter space using sensitivity analysis, and demonstrate the utility of our approach by analyzing actin stress fibers in cells cultured on various micropattern substrates.
CONCLUSION We present an open-source, graphically interfaced computational tool for the extraction and quantification of stress fibers in adherent cells with minimal user input. This facilitates the automated extraction of actin stress fibers from fluorescence images. We highlight its potential uses by analyzing images of cells with shapes constrained by fibronectin micropatterns. The method reported here could serve as the first step in the detection and characterization of the spatial properties of actin stress fibers to enable further detailed morphological analysis.
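The enhance, threshold, and skeletonize stages described in the abstract can be sketched in a few lines. The snippet below is a minimal Python analogue using scikit-image, with the Frangi vesselness filter standing in for the paper's filtering step; the filter choice and the synthetic image are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from skimage.filters import frangi, threshold_otsu
from skimage.morphology import skeletonize

def extract_filaments(img):
    """Enhance curvilinear structures, then segment and skeletonize.

    Mirrors the enhance -> threshold -> skeletonize stages described
    for SFEX; the Frangi filter here is an assumption, not the
    authors' exact filtering technique.
    """
    enhanced = frangi(img, black_ridges=False)   # ridge enhancement
    mask = enhanced > threshold_otsu(enhanced)   # binary segmentation
    return skeletonize(mask)                     # 1-px centerline traces

# Synthetic test image: one bright diagonal "fiber" on mild noise
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.02, (64, 64))
for i in range(8, 56):
    img[i, i] = img[i, i + 1] = 1.0
skel = extract_filaments(img)
```

The remaining SFEX stages, subdividing the skeleton into piecewise-linear fragments and re-pairing them with geometric criteria, would then operate on `skel`.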
Affiliation(s)
- Zhen Zhang
- Mechanobiology Institute, Singapore, 117411, Republic of Singapore
- Shumin Xia
- Mechanobiology Institute, Singapore, 117411, Republic of Singapore
- Pakorn Kanchanawong
- Mechanobiology Institute, Singapore, 117411, Republic of Singapore
- Department of Biomedical Engineering, National University of Singapore, Singapore, 117411, Republic of Singapore
40
Gu L, Zhang X, Zhao H, Li H, Cheng L. Segment 2D and 3D Filaments by Learning Structured and Contextual Features. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:596-606. [PMID: 27831862 DOI: 10.1109/tmi.2016.2623357] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
We focus on the challenging problem of filamentary structure segmentation in both 2D and 3D images, including retinal vessels and neurons, among others. Despite the increasing amount of effort in learning-based methods to tackle this problem, there is still a lack of proper data-driven feature construction mechanisms that sufficiently encode contextual labelling information, which might hinder segmentation performance. This observation prompts us to propose a data-driven approach to learn structured and contextual features in this paper. The structured features aim to integrate local spatial label patterns into the feature space, thus endowing the follow-up tree classifiers with the capability to group training examples with similar structure into the same leaf node when splitting the feature space, and further yielding contextual features that capture more of the global contextual information. Empirical evaluations demonstrate that our approach outperforms the state of the art on well-regarded testbeds over a variety of applications. Our code is also made publicly available in support of open-source research activities.
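As a rough illustration of the contextual-feature idea, auto-context-style stacking feeds a first classifier's probability output back in as an extra feature for a second classifier. The sketch below uses scikit-learn random forests on synthetic data; it is a generic stacking sketch under that assumption, not the paper's structured-label construction, and all data and variable names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic "pixels": 4 local appearance features, binary fg/bg label
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Stage 1: tree ensemble trained on appearance features alone
rf1 = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
ctx = rf1.predict_proba(X)[:, 1:]   # class-probability "context" cue

# Stage 2: appearance plus contextual feature (auto-context stacking)
X2 = np.hstack([X, ctx])
rf2 = RandomForestClassifier(n_estimators=50, random_state=0).fit(X2, y)
acc = rf2.score(X2, y)
```

In the paper's setting the contextual features are gathered from spatial neighbourhoods of the intermediate label/probability maps rather than from each pixel alone; the point here is only the two-stage feature construction.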
41
Ong KH, De J, Cheng L, Ahmed S, Yu W. NeuronCyto II: An automatic and quantitative solution for crossover neural cells in high throughput screening. Cytometry A 2016; 89:747-54. [PMID: 27233092 PMCID: PMC5089663 DOI: 10.1002/cyto.a.22872] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2015] [Revised: 04/04/2016] [Accepted: 04/21/2016] [Indexed: 11/21/2022]
Abstract
Microscopy is a fundamental technology driving new biological discoveries. Today microscopy allows a large number of images to be acquired using, for example, High Throughput Screening (HTS) and 4D imaging. It is essential to be able to interrogate these images and extract quantitative information in an automated fashion. In the context of neurobiology, it is important to automatically quantify the morphology of neurons in terms of neurite number, length, branching, complexity, etc. One major issue in the quantification of neuronal morphology is the “crossover” problem, where neurites cross and it is difficult to assign which neurite belongs to which cell body. In the present study, we provide a solution to the “crossover” problem: the software package NeuronCyto II. NeuronCyto II is an interactive and user-friendly software package for automatic neurite quantification. It has a well-designed graphical user interface (GUI) with only a few free parameters, allowing users to optimize the software themselves and extract relevant quantitative information routinely. Users are able to interact with the images and the numerical features through the Result Inspector. The processing of neurites without crossover was presented in our previous work. Our solution for the “crossover” problem builds on our recently published work using directed graph theory. Both methods are implemented in NeuronCyto II. The results show that our solution is able to significantly improve the reliability and accuracy of the analysis of neurons displaying “crossover.” NeuronCyto II is freely available at the website: https://sites.google.com/site/neuroncyto/, which includes user support and where software upgrades will also be placed in the future. © 2016 The Authors. Cytometry Part A Published by Wiley Periodicals, Inc. on behalf of ISAC.
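The core of any crossover resolution is deciding, at a junction, which outgoing neurite fragment continues which incoming one. The toy sketch below resolves a junction purely by direction continuity (smallest turning angle); the fragment names and vectors are hypothetical, and the published method embeds this kind of decision in a directed-graph formulation rather than a purely local rule.

```python
import numpy as np

def pair_by_continuity(incoming, outgoing):
    """Pair each incoming fragment with the outgoing fragment whose
    unit direction best continues it (largest dot product, i.e.
    smallest turning angle at the crossover junction)."""
    return {name: max(outgoing, key=lambda k: float(np.dot(v, outgoing[k])))
            for name, v in incoming.items()}

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Hypothetical crossover: one neurite travels rightwards, one upwards
incoming = {"from_cell_1": unit([1.0, 0.0]),
            "from_cell_2": unit([0.0, 1.0])}
outgoing = {"frag_b": unit([0.9, 0.1]),
            "frag_d": unit([0.1, 0.9])}
pairs = pair_by_continuity(incoming, outgoing)
```

Each neurite keeps going nearly straight through the junction, so the rightward fragment is assigned to cell 1 and the upward fragment to cell 2.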
Affiliation(s)
- Kok Haur Ong
- Central Imaging Facility, Institute of Molecular and Cell Biology (IMCB), A*STAR, Singapore
- Jaydeep De
- Imaging Informatics Division, Bioinformatics Institute (BII), A*STAR, Singapore
- Li Cheng
- Imaging Informatics Division, Bioinformatics Institute (BII), A*STAR, Singapore
- Sohail Ahmed
- Neural Stem Cell Lab, Institute of Medical Biology (IMB), A*STAR, Singapore
- Weimiao Yu
- Central Imaging Facility, Institute of Molecular and Cell Biology (IMCB), A*STAR, Singapore