101. Fully automatic volume measurement of the adrenal gland on CT using deep learning to classify adrenal hyperplasia. Eur Radiol 2022;33:4292-4302. [PMID: 36571602] [DOI: 10.1007/s00330-022-09347-5]
Abstract
OBJECTIVES To develop a fully automated deep learning model for adrenal segmentation and to evaluate its performance in classifying adrenal hyperplasia. METHODS This retrospective study evaluated automated adrenal segmentation in 308 abdominal CT scans from 48 patients with adrenal hyperplasia and 260 patients with normal glands from 2010 to 2021 (mean age, 42 years; 156 women). The dataset was split into training, validation, and test sets at a ratio of 6:2:2. Contrast-enhanced CT images and manually drawn adrenal gland masks were used to develop a U-Net-based segmentation model. Predicted adrenal volumes were obtained by fivefold splitting of the dataset without overlapping test sets. Adrenal volumes and anthropometric parameters (height, weight, and sex) were then used to develop algorithms for classifying adrenal hyperplasia: a multilayer perceptron, support vector classification, a random forest classifier, and a decision tree classifier. Segmentation performance was measured with the Dice coefficient and the intraclass correlation coefficient (ICC) for adrenal volume; classification performance was measured with the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. RESULTS The segmentation model achieved a Dice coefficient of 0.7009 across the 308 cases and an ICC of 0.91 (95% CI, 0.90-0.93) for adrenal volume. The classification models achieved an AUC of 0.98-0.99, accuracy of 0.948-0.961, sensitivity of 0.750-0.813, and specificity of 0.973-1.000. CONCLUSION The proposed algorithm can accurately segment the adrenal glands on CT scans and may help clinicians identify possible cases of adrenal hyperplasia. KEY POINTS • A deep learning segmentation method can accurately segment the adrenal gland, a small organ, on CT scans. • A machine learning algorithm classifying adrenal hyperplasia from adrenal volume and anthropometric parameters (height, weight, and sex) showed good performance. • The proposed segmentation algorithm may help clinicians identify possible cases of adrenal hyperplasia.
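As a companion to the reported metrics, a minimal NumPy sketch of the Dice coefficient between two binary masks (arrays and values here are synthetic, not from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice overlap between two binary masks (1 = adrenal voxel)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Illustrative example on two toy 3D masks.
rng = np.random.default_rng(0)
pred = rng.random((32, 32, 32)) > 0.5
target = rng.random((32, 32, 32)) > 0.5
print(f"Dice: {dice_coefficient(pred, target):.4f}")
```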
102. Effects of Image Quality on the Accuracy of Human Pose Estimation and Detection of Eye Lid Opening/Closing Using OpenPose and Dlib. J Imaging 2022;8:330. [PMID: 36547495] [PMCID: PMC9783075] [DOI: 10.3390/jimaging8120330]
Abstract
OBJECTIVE The application of computer models in continuous patient activity monitoring using video cameras is complicated by the capture of images of varying quality due to poor lighting conditions and low image resolution. Little published work has assessed the effects of image resolution, color depth, noise level, and low light on the inference of eye opening/closing and body landmarks from digital images. METHOD This study systematically assessed the effects of varying image resolution (from 100 × 100 to 20 × 20 pixels at intervals of 10 pixels), lighting conditions (from 42 to 2 lux at intervals of 2 lux), color depth (from 16.7 M colors down to 8 M, 1 M, 512 K, 216 K, 64 K, 8 K, 1 K, 729, 512, 343, 216, 125, 64, 27, and 8 colors), and noise level on accuracy and model performance in eye dimension estimation and body keypoint localization, using the Dlib library and OpenPose with images from the Closed Eyes in the Wild and COCO datasets, as well as photographs of the face captured at different light intensities. RESULTS Model accuracy and the rate of model failure remained acceptable at an image resolution of 60 × 60 pixels, a color depth of 343 colors, a light intensity of 14 lux, and a Gaussian noise level of 4% (i.e., 4% of pixels replaced by Gaussian noise). CONCLUSIONS The Dlib and OpenPose models failed to detect eye dimensions and body keypoints only at low image resolutions, light levels, and color depths. CLINICAL IMPACT The baseline threshold values established here will be useful for future applications of computer vision in continuous patient monitoring.
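The three degradation factors studied (spatial resolution, color depth, and noise) can be reproduced with a few lines of image processing. The sketch below uses scikit-image (0.19+ API) and parameter values chosen to match the reported thresholds; it is an illustration, not the authors' exact protocol, and the noise model here is additive Gaussian rather than pixel replacement:

```python
import numpy as np
from skimage import data
from skimage.transform import resize
from skimage.util import random_noise

image = data.astronaut() / 255.0          # float RGB in [0, 1]

# 1) Downsample to 60x60 pixels (one of the reported thresholds).
low_res = resize(image, (60, 60), anti_aliasing=True)

# 2) Quantize each channel to 7 levels -> 7**3 = 343 colors.
levels = 7
quantized = np.floor(low_res * (levels - 1) + 0.5) / (levels - 1)

# 3) Add Gaussian noise (variance is illustrative).
noisy = random_noise(quantized, mode="gaussian", var=0.01)
print(noisy.shape, noisy.min(), noisy.max())
```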
103. Huang SY, Hsu WL, Hsu RJ, Liu DW. Fully Convolutional Network for the Semantic Segmentation of Medical Images: A Survey. Diagnostics (Basel) 2022;12:2765. [PMID: 36428824] [PMCID: PMC9689961] [DOI: 10.3390/diagnostics12112765]
Abstract
There have been major developments in deep learning in computer vision since the 2010s. Deep learning has contributed to a wealth of data in medical image processing, and semantic segmentation is a salient technique in this field. This study retrospectively reviews recent studies on the application of deep learning to segmentation tasks in medical imaging and proposes potential directions for future development, including model development, data augmentation, and dataset creation. The strengths and deficiencies of studies on models and data augmentation, as well as their application to medical image segmentation, were analyzed. Fully convolutional network developments have led to the creation of the U-Net and its derivatives; another noteworthy image segmentation model is DeepLab. Regarding data augmentation, because of the low data volume of medical images, most studies focus on means of increasing the wealth of medical image data; generative adversarial networks (GANs) increase data volume via deep learning. Despite the increasing variety of medical image datasets, there is still a deficiency of datasets for specific problems, which should be addressed moving forward. Given the wealth of ongoing research on deep learning for medical image segmentation, the problems of data volume and practical clinical application must be addressed to ensure that the results are properly applied.
Affiliation(s)
- Sheng-Yao Huang
- Institute of Medical Science, Tzu Chi University, Hualien 97071, Taiwan
- Department of Radiation Oncology, Hualien Tzu Chi General Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan
- Wen-Lin Hsu
- Department of Radiation Oncology, Hualien Tzu Chi General Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan
- Cancer Center, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan
- School of Medicine, Tzu Chi University, Hualien 97071, Taiwan
- Ren-Jun Hsu
- Institute of Medical Science, Tzu Chi University, Hualien 97071, Taiwan
- Cancer Center, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan
- School of Medicine, Tzu Chi University, Hualien 97071, Taiwan
- Correspondence: (R.-J.H.); (D.-W.L.); Tel. & Fax: +886-3-8561825 (R.-J.H. & D.-W.L.)
- Dai-Wei Liu
- Institute of Medical Science, Tzu Chi University, Hualien 97071, Taiwan
- Department of Radiation Oncology, Hualien Tzu Chi General Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan
- Cancer Center, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan
- School of Medicine, Tzu Chi University, Hualien 97071, Taiwan
104. Multi-organ segmentation of abdominal structures from non-contrast and contrast enhanced CT images. Sci Rep 2022;12:19093. [PMID: 36351987] [PMCID: PMC9646761] [DOI: 10.1038/s41598-022-21206-3]
Abstract
Manually delineating upper abdominal organs at risk (OARs) is a time-consuming task. To develop a deep-learning-based tool for accurate and robust auto-segmentation of these OARs, forty pancreatic cancer patients with contrast-enhanced breath-hold computed tomographic (CT) images were selected. We trained a three-dimensional (3D) U-Net ensemble that automatically segments all organ contours concurrently with the self-configuring nnU-Net framework. Our tool's performance was quantitatively assessed on a held-out test set of 30 patients. Five radiation oncologists from three different institutions assessed the performance of the tool using a 5-point Likert scale on an additional 75 randomly selected test patients. The mean (± std. dev.) Dice similarity coefficients between the automatic segmentation and the ground truth on contrast-enhanced CT images were 0.80 ± 0.08, 0.89 ± 0.05, 0.90 ± 0.06, 0.92 ± 0.03, 0.96 ± 0.01, 0.97 ± 0.01, 0.96 ± 0.01, and 0.96 ± 0.01 for the duodenum, small bowel, large bowel, stomach, liver, spleen, right kidney, and left kidney, respectively. 89.3% (contrast-enhanced) and 85.3% (non-contrast-enhanced) of duodenum contours were scored as 3 or above, requiring only minor edits. More than 90% of the other organs' contours were scored as 3 or above. Our tool achieved a high level of clinical acceptability with a small training dataset and provides accurate contours for treatment planning.
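For context, self-configuring frameworks such as nnU-Net typically combine the per-fold models at inference by averaging their softmax outputs. A sketch of that ensembling step on random placeholder probabilities (shapes and class count are illustrative, not from the paper):

```python
import numpy as np

# `fold_probs` stands in for per-fold network outputs with layout
# (fold, class, z, y, x); values here are random placeholders.
rng = np.random.default_rng(42)
n_folds, n_classes = 5, 9          # 8 organs + background (illustrative)
fold_probs = rng.dirichlet(np.ones(n_classes), size=(n_folds, 16, 16, 16))
fold_probs = np.moveaxis(fold_probs, -1, 1)   # -> (fold, class, z, y, x)

mean_probs = fold_probs.mean(axis=0)          # average over folds
segmentation = mean_probs.argmax(axis=0)      # per-voxel label map
print(segmentation.shape, np.unique(segmentation))
```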
105. Shi F, Hu W, Wu J, Han M, Wang J, Zhang W, Zhou Q, Zhou J, Wei Y, Shao Y, Chen Y, Yu Y, Cao X, Zhan Y, Zhou XS, Gao Y, Shen D. Deep learning empowered volume delineation of whole-body organs-at-risk for accelerated radiotherapy. Nat Commun 2022;13:6566. [PMID: 36323677] [PMCID: PMC9630370] [DOI: 10.1038/s41467-022-34257-x]
Abstract
In radiotherapy for cancer patients, delineating organs-at-risk (OARs) and tumors is an indispensable process. However, it is the most time-consuming step, as manual delineation by radiation oncologists is always required. Herein, we propose a lightweight deep learning framework for radiotherapy treatment planning (RTP), named RTP-Net, to enable automatic, rapid, and precise initialization of whole-body OARs and tumors. Briefly, the framework implements cascade coarse-to-fine segmentation, with an adaptive module for both small and large organs and attention mechanisms for organs and boundaries. Our experiments show three merits: 1) extensive evaluation on 67 delineation tasks over a large-scale dataset of 28,581 cases; 2) comparable or superior accuracy, with an average Dice of 0.95; 3) near real-time delineation (<2 s) in most tasks. This framework could be utilized to accelerate the contouring process in the All-in-One radiotherapy scheme and thus greatly shorten the turnaround time for patients.
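The cascade coarse-to-fine idea generally amounts to cropping the input to a coarse localization before a fine model runs on the region of interest. A minimal sketch of that cropping step, with a synthetic mask and an assumed 8-voxel margin (the networks themselves are omitted):

```python
import numpy as np

def crop_to_roi(volume: np.ndarray, coarse_mask: np.ndarray, margin: int = 8):
    """Crop `volume` to the bounding box of a coarse mask plus a margin.

    Sketch of the cascade idea: a coarse network localizes the organ,
    and a fine network only sees the cropped ROI.
    """
    coords = np.argwhere(coarse_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    slices = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[slices], slices

volume = np.zeros((64, 64, 64), dtype=np.float32)
coarse = np.zeros_like(volume, dtype=bool)
coarse[20:30, 25:35, 30:40] = True            # pretend coarse detection
roi, slices = crop_to_roi(volume, coarse)
print(roi.shape, slices)
```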
Affiliation(s)
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Miaofei Han
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Wei Zhang
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Qing Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jingjie Zhou
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Shao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yanbo Chen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yue Yu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiaohuan Cao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yiqiang Zhan
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yaozong Gao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China
106. Lee CE, Park H, Shin YG, Chung M. Voxel-wise adversarial semi-supervised learning for medical image segmentation. Comput Biol Med 2022;150:106152. [PMID: 36208595] [DOI: 10.1016/j.compbiomed.2022.106152]
Abstract
BACKGROUND AND OBJECTIVE Semi-supervised learning for medical image segmentation is an important area of research for alleviating the huge cost of constructing reliable large-scale annotations in the medical domain. Recent semi-supervised approaches have demonstrated promising results by employing consistency regularization, pseudo-labeling techniques, and adversarial learning. These methods primarily attempt to learn the distribution of labeled and unlabeled data by enforcing consistency in predictions or embedding context. However, previous approaches have focused only on local discrepancy minimization or context relations within single classes. METHODS In this paper, we introduce a novel adversarial-learning-based semi-supervised segmentation method that effectively embeds both local and global features from multiple hidden layers and learns context relations between multiple classes. Our voxel-wise adversarial learning method utilizes a voxel-wise feature discriminator, which takes multilayer voxel-wise features (involving both local and global features) as input by embedding class-specific voxel-wise feature distributions. Furthermore, our previous representation learning method is improved by overcoming information loss and learning-stability problems, which enables rich representations of labeled data. RESULTS In the experiments, we used the Left Atrial Segmentation Challenge dataset and the Abdominal Multi-Organ dataset to demonstrate the effectiveness of our method in both single-class and multiclass segmentation. The experimental results show that our method outperforms the current best-performing state-of-the-art semi-supervised learning approaches; on the multi-organ dataset, leveraging unlabeled data improved network performance by 2% in Dice similarity coefficient. CONCLUSION We evaluated our approach on a wide range of medical datasets and showed that it can be adapted to embed class-specific features. Furthermore, visual interpretation of the feature space demonstrates that our proposed method yields a well-distributed and separated feature space from both labeled and unlabeled data, which improves overall prediction results.
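A voxel-wise feature discriminator of the kind described can be sketched as a small 3D CNN that emits one real/fake logit per voxel of the feature map. The PyTorch sketch below uses illustrative channel sizes and a standard BCE adversarial target; it is not the authors' architecture:

```python
import torch
import torch.nn as nn

class VoxelDiscriminator(nn.Module):
    """Maps multichannel voxel-wise features to a per-voxel logit."""
    def __init__(self, in_channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 64, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(64, 1, kernel_size=1),   # per-voxel real/fake logit
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)

disc = VoxelDiscriminator()
features = torch.randn(2, 32, 16, 16, 16)      # (batch, channel, D, H, W)
logits = disc(features)                        # (2, 1, 16, 16, 16)
# Adversarial target: treat labeled-data features as "real" (label 1).
loss = nn.BCEWithLogitsLoss()(logits, torch.ones_like(logits))
print(logits.shape, float(loss))
```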
Affiliation(s)
- Hyelim Park
- Department of Computer Science and Engineering, Seoul National University, Republic of Korea
- Yeong-Gil Shin
- Department of Computer Science and Engineering, Seoul National University, Republic of Korea
- Minyoung Chung
- School of Software, Soongsil University, 369 Sangdo-Ro, Dongjak-Gu, Seoul, 06978, Republic of Korea
107. Fan D, Gajawelli N, Paulli A, Perry E, Tanedo J, Deoni S, Wang Y, Linguraru MG, Lepore N. NEC-NET: Segmentation and Feature Extraction Network for the Neurocranium in Early Childhood. Proc SPIE Int Soc Opt Eng 2022;12567:125671K. [PMID: 39540004] [PMCID: PMC11557371] [DOI: 10.1117/12.2670281]
Abstract
In early life, the neurocranium undergoes rapid changes to accommodate the expanding brain. Neurocranial maturation can be disrupted by developmental abnormalities and by environmental factors such as sleep position. To establish a baseline for the early detection of anomalies, it is important to understand how this structure typically grows in healthy children. Here, we designed NEC-NET, a deep neural network pipeline comprising segmentation and classification, to analyze the normative development of the neurocranium in T1 MR images from healthy children aged 12 to 60 months. The pipeline optimizes the segmentation of the neurocranium and shows preliminary results on age-based regional differences among infants.
Affiliation(s)
- Di Fan
- CIBORG Lab, Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, CA, USA
- Niharika Gajawelli
- CIBORG Lab, Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, CA, USA
- Athelia Paulli
- CIBORG Lab, Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, CA, USA
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC, USA
- Eryn Perry
- CIBORG Lab, Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, CA, USA
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
- Jeff Tanedo
- CIBORG Lab, Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, CA, USA
- Sean Deoni
- Bill and Melinda Gates Foundation, Seattle, WA, USA
- Yalin Wang
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
- Marius George Linguraru
- Departments of Radiology and Pediatrics, George Washington University School of Medicine and Health Sciences, Washington, DC, USA
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC, USA
- Natasha Lepore
- CIBORG Lab, Department of Radiology, Children’s Hospital Los Angeles, Los Angeles, CA, USA
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
- Department of Pediatrics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
108. Guhan B, Almutairi L, Sowmiya S, Snekhalatha U, Rajalakshmi T, Aslam SM. Automated system for classification of COVID-19 infection from lung CT images based on machine learning and deep learning techniques. Sci Rep 2022;12:17417. [PMID: 36257964] [PMCID: PMC9579174] [DOI: 10.1038/s41598-022-20804-5]
Abstract
The objectives of our proposed study were as follows: first, to segment the CT images using a k-means clustering algorithm to extract the region of interest and to extract textural features using the gray-level co-occurrence matrix (GLCM); second, to implement machine learning classifiers such as Naïve Bayes, bagging, and REPTree to classify the images into two classes, COVID and non-COVID, and to compare the performance of three pre-trained CNN models (AlexNet, ResNet50, and SqueezeNet) with that of the proposed machine learning classifiers. Our dataset consists of 100 COVID and non-COVID images, which were pre-processed and segmented with our proposed algorithm. Following feature extraction, the three machine learning classifiers (Naïve Bayes, bagging, and REPTree) were used to classify normal and COVID-19 patients. We implemented the three pre-trained CNN models (AlexNet, ResNet50, and SqueezeNet) to compare their performance with the machine learning classifiers. Among the machine learning classifiers, Naïve Bayes achieved the highest accuracy of 97%, whereas the ResNet50 CNN model attained the highest accuracy of 99%. Hence, the deep learning networks outperformed the machine learning techniques in the classification of COVID-19 images.
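The GLCM-plus-classifier stage can be sketched with scikit-image and scikit-learn (in scikit-image 0.19+ the functions are named graycomatrix/graycoprops). The images and labels below are synthetic stand-ins, and scikit-learn's GaussianNB stands in for the Naïve Bayes implementation used in the study:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.naive_bayes import GaussianNB

def glcm_features(image_u8: np.ndarray) -> np.ndarray:
    """Four common GLCM texture descriptors from one grayscale ROI."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)           # 0 = non-COVID, 1 = COVID
X = np.stack([glcm_features(im) for im in images])
clf = GaussianNB().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```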
Affiliation(s)
- Bhargavee Guhan
- Department of Biomedical Engineering, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu 603203, India
- Laila Almutairi
- Department of Computer Engineering, College of Computer and Information Sciences, Majmaah University, Al Majmaah, 11952, Saudi Arabia
- S. Sowmiya
- Department of Biomedical Engineering, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu 603203, India
- U. Snekhalatha
- Department of Biomedical Engineering, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu 603203, India
- T. Rajalakshmi
- Department of Electronics and Communication Engineering, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, India
- Shabnam Mohamed Aslam
- Department of Information Technology, College of Computer and Information Sciences, Majmaah University, Al Majmaah, 11952, Saudi Arabia
109. Xing H, Zhang X, Nie Y, Wang S, Wang T, Jing H, Li F. A deep learning-based post-processing method for automated pulmonary lobe and airway trees segmentation using chest CT images in PET/CT. Quant Imaging Med Surg 2022;12:4747-4757. [PMID: 36185049] [PMCID: PMC9511416] [DOI: 10.21037/qims-21-1116]
Abstract
Background Accurate segmentation of pulmonary anatomy can support the localization of lung disease. This study aimed to develop and validate an automated deep learning model, combined with a post-processing algorithm, to segment six pulmonary anatomical regions in chest computed tomography (CT) images acquired during positron emission tomography/computed tomography (PET/CT) scans. The six regions comprise the five pulmonary lobes and the airway tree. Methods Patients who underwent PET/CT imaging with an extra chest CT scan were retrospectively enrolled. Segmentation of the six regions in CT was performed via a convolutional neural network (CNN) of DenseVNet architecture with post-processing algorithms. Three evaluation metrics were used to assess the performance of the combined deep learning and post-processing method, and agreement between the combined model and ground-truth segmentations was analyzed in the test set. Results A total of 640 cases were enrolled. The combined model, involving deep learning and post-processing, performed better than the deep learning model alone. In the test set, the all-lobes overall Dice coefficient, Hausdorff distance, and Jaccard coefficient were 0.972, 12.025 mm, and 0.948, respectively. The airway-tree Dice coefficient, Hausdorff distance, and Jaccard coefficient were 0.849, 32.076 mm, and 0.815, respectively. Good agreement was observed between the automated and ground-truth segmentations. Conclusions The proposed combined model can automatically segment the five pulmonary lobes and the airway tree on chest CT images from PET/CT. Its performance in each region of the test set was higher than that of the deep learning model alone.
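A typical post-processing step for CNN-predicted lobe masks is to keep only the largest 3D connected component, removing spurious islands. A SciPy sketch of that idea (illustrative; the abstract does not detail the authors' exact algorithms):

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest 3D connected component of a binary mask."""
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    return labeled == (np.argmax(sizes) + 1)

mask = np.zeros((32, 32, 32), dtype=bool)
mask[5:15, 5:15, 5:15] = True        # main "lobe"
mask[25:27, 25:27, 25:27] = True     # spurious island to be removed
print(keep_largest_component(mask).sum(), mask.sum())
```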
Affiliation(s)
- Haiqun Xing
- Department of Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Beijing, China
- Tong Wang
- Department of Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Beijing, China
- Hongli Jing
- Department of Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Beijing, China
- Fang Li
- Department of Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Beijing, China
110. Ma J, Zhang Y, Gu S, Zhu C, Ge C, Zhang Y, An X, Wang C, Wang Q, Liu X, Cao S, Zhang Q, Liu S, Wang Y, Li Y, He J, Yang X. AbdomenCT-1K: Is Abdominal Organ Segmentation a Solved Problem? IEEE Trans Pattern Anal Mach Intell 2022;44:6695-6714. [PMID: 34314356] [DOI: 10.1109/tpami.2021.3100536]
Abstract
With the unprecedented developments in deep learning, automatic segmentation of the main abdominal organs seems to be a solved problem, as state-of-the-art (SOTA) methods have achieved results comparable to inter-rater variability on many benchmark datasets. However, most existing abdominal datasets contain only single-center, single-phase, single-vendor, or single-disease cases, and it is unclear whether this excellent performance generalizes to diverse datasets. This paper presents a large and diverse abdominal CT organ segmentation dataset, termed AbdomenCT-1K, with more than 1000 (1K) CT scans from 12 medical centers, including multi-phase, multi-vendor, and multi-disease cases. Furthermore, we conduct a large-scale study of liver, kidney, spleen, and pancreas segmentation and reveal the unsolved segmentation problems of SOTA methods, such as limited generalization to distinct medical centers, phases, and unseen diseases. To advance these unsolved problems, we further build four organ segmentation benchmarks for fully supervised, semi-supervised, weakly supervised, and continual learning, which are currently challenging and active research topics. Accordingly, we develop a simple and effective method for each benchmark, which can be used as an out-of-the-box method and strong baseline. We believe the AbdomenCT-1K dataset will promote future in-depth research towards clinically applicable abdominal organ segmentation methods.
111. Frueh M, Kuestner T, Nachbar M, Thorwarth D, Schilling A, Gatidis S. Self-supervised learning for automated anatomical tracking in medical image data with minimal human labeling effort. Comput Methods Programs Biomed 2022;225:107085. [PMID: 36044801] [DOI: 10.1016/j.cmpb.2022.107085]
Abstract
BACKGROUND AND OBJECTIVE Tracking of anatomical structures in time-resolved medical image data plays an important role in various tasks such as volume change estimation or treatment planning. State-of-the-art deep learning techniques for automated tracking, while providing accurate results, require large amounts of human-labeled training data, making their widespread use time- and resource-intensive. Our contribution in this work is the implementation and adaptation of a self-supervised learning (SSL) framework that addresses this bottleneck in training data generation. METHODS To this end, we adapted and implemented an SSL framework that allows automated anatomical tracking without the need for human-labeled training data. We evaluated this method by comparison to conventional and deep learning optical flow (OF)-based tracking methods. We applied all methods to three different time-resolved medical image datasets (abdominal MRI, cardiac MRI, and echocardiography) and assessed their accuracy in tracking pre-defined anatomical structures within and across individuals. RESULTS We found that SSL-based tracking as well as OF-based methods provide accurate results for simple, rigid, and smooth motion patterns. However, for more complex motion, e.g., non-rigid or discontinuous motion patterns in the cardiac region, and for cross-subject anatomical matching, SSL-based tracking showed markedly superior performance. CONCLUSION We conclude that automated tracking of anatomical structures on time-resolved medical image data with minimal human labeling effort is feasible using SSL and can provide superior results compared to conventional and deep learning OF-based methods.
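The OF baseline can be illustrated with OpenCV's Farneback method: estimate a dense flow field between consecutive frames, then displace a landmark along it. The frames below are synthetic stand-ins for consecutive time points:

```python
import numpy as np
import cv2

prev = np.zeros((128, 128), dtype=np.uint8)
nxt = np.zeros_like(prev)
cv2.circle(prev, (60, 60), 10, 255, -1)   # structure at time t
cv2.circle(nxt, (63, 62), 10, 255, -1)    # same structure, shifted, at t+1

# Dense optical flow between the two frames (parameters are OpenCV defaults
# commonly used in examples, not tuned for medical data).
flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
landmark = (60, 60)                        # (x, y) at time t
dx, dy = flow[landmark[1], landmark[0]]    # flow array is indexed (y, x)
print("tracked to:", (landmark[0] + dx, landmark[1] + dy))
```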
Affiliation(s)
- Marcel Frueh
- University Hospital Tuebingen, Department of Radiology, University of Tuebingen, Hoppe-Seyler-Straße 3, Tuebingen 72076, Germany
- University of Tuebingen, Institute for Visual Computing, Department of Computer Science, Sand 14, Tuebingen 72076, Germany
- Thomas Kuestner
- University Hospital Tuebingen, Department of Radiology, University of Tuebingen, Hoppe-Seyler-Straße 3, Tuebingen 72076, Germany
- Marcel Nachbar
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tuebingen, Hoppe-Seyler-Straße 3, Tuebingen 72076, Germany
- Daniela Thorwarth
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tuebingen, Hoppe-Seyler-Straße 3, Tuebingen 72076, Germany
- Andreas Schilling
- University of Tuebingen, Institute for Visual Computing, Department of Computer Science, Sand 14, Tuebingen 72076, Germany
- Sergios Gatidis
- University Hospital Tuebingen, Department of Radiology, University of Tuebingen, Hoppe-Seyler-Straße 3, Tuebingen 72076, Germany
- Max Planck Institute for Intelligent Systems, Empirical Inference Department, Max-Planck-Ring 4, Tuebingen 72076, Germany
112. Senthilvelan J, Jamshidi N. A pipeline for automated deep learning liver segmentation (PADLLS) from contrast enhanced CT exams. Sci Rep 2022;12:15794. [PMID: 36138084] [PMCID: PMC9500060] [DOI: 10.1038/s41598-022-20108-8]
Abstract
Multiple studies have created state-of-the-art liver segmentation models using deep convolutional neural networks (DCNNs) such as the V-net and H-DenseUnet. Oversegmentation, however, continues to be a problem. We set out to address these limitations by developing an automated workflow that leverages the strengths of different DCNN architectures, resulting in a pipeline that enables fully automated liver segmentation. A Pipeline for Automated Deep Learning Liver Segmentation (PADLLS) was developed and implemented by cascading multiple DCNNs trained on more than 200 CT scans. First, a V-net is used to create rough liver, spleen, and stomach masks. After stomach and spleen pixels are removed using their respective masks and ascites is removed using a morphological algorithm, the scan is passed to an H-DenseUnet to yield the final segmentation. The segmentation accuracy of the pipeline was compared to that of the H-DenseUnet and the V-net using the SLIVER07 and 3DIRCADb datasets as benchmarks. The PADLLS Dice score for the SLIVER07 dataset was 0.957 ± 0.033, significantly better than the H-DenseUnet's score of 0.927 ± 0.044 (p = 0.0219) and the V-net's score of 0.872 ± 0.121 (p = 0.0067). The PADLLS Dice score for the 3DIRCADb dataset was 0.965 ± 0.016, significantly better than the H-DenseUnet's score of 0.930 ± 0.041 (p = 0.0014) and the V-net's score of 0.874 ± 0.060 (p < 0.001). In conclusion, our pipeline (PADLLS) outperforms existing liver segmentation models, serves as a valuable tool for image-based analysis, and is freely available for download and use.
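The mask-subtraction and morphological clean-up steps of such a cascade can be sketched as boolean operations followed by a binary opening. The masks below are synthetic, and the 3x3x3 structuring element is an assumption for illustration:

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-ins for the V-net's rough organ masks.
rough_liver = np.zeros((48, 48, 48), dtype=bool)
rough_liver[10:35, 10:35, 10:35] = True
spleen = np.zeros_like(rough_liver); spleen[10:15, 10:15, 10:15] = True
stomach = np.zeros_like(rough_liver); stomach[30:35, 30:35, 30:35] = True

# Remove spleen/stomach voxels, then suppress thin residue
# (e.g., ascites-like structures) with a morphological opening.
cleaned = rough_liver & ~spleen & ~stomach
cleaned = ndimage.binary_opening(cleaned, structure=np.ones((3, 3, 3)))
print(rough_liver.sum(), cleaned.sum())
```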
Affiliation(s)
- Jayasuriya Senthilvelan
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, 757 Westwood Ave, Suite 2125, Los Angeles, CA, 90095, USA
- Neema Jamshidi
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, 757 Westwood Ave, Suite 2125, Los Angeles, CA, 90095, USA
113. Rickmann AM, Senapati J, Kovalenko O, Peters A, Bamberg F, Wachinger C. AbdomenNet: deep neural network for abdominal organ segmentation in epidemiologic imaging studies. BMC Med Imaging 2022;22:168. [PMID: 36115938] [PMCID: PMC9482195] [DOI: 10.1186/s12880-022-00893-4]
Abstract
BACKGROUND Whole-body imaging has recently been added to large-scale epidemiological studies, providing novel opportunities for investigating abdominal organs. However, the organs must first be segmented, which is time-consuming, particularly on such a large scale. METHODS We introduce AbdomenNet, a deep neural network for the automated segmentation of abdominal organs on two-point Dixon MRI scans. A pre-processing pipeline enables processing of MRI scans from different imaging studies, namely the German National Cohort, UK Biobank, and Kohorte im Raum Augsburg. We chose a total of 61 MRI scans across the three studies to train an ensemble of segmentation networks, which segment eight abdominal organs. Our network presents a novel combination of octave convolutions and squeeze-and-excitation layers, as well as training with stochastic weight averaging. RESULTS Our experiments demonstrate that training deep neural networks on data combined from different imaging studies is beneficial compared with training separate networks. Combining the water and opposed-phase contrasts of the Dixon sequence as input channels yields the highest segmentation accuracy compared with single-contrast inputs. The mean Dice similarity coefficient is above 0.9 for the larger organs (liver, spleen, and kidneys), and 0.71 and 0.74 for the gallbladder and pancreas, respectively. CONCLUSIONS Our fully automated pipeline provides high-quality segmentations of abdominal organs across population studies. In contrast, a network trained on only a single dataset does not generalize well to other datasets.
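Of the two building blocks named in the abstract, the squeeze-and-excitation (SE) layer is compact enough to sketch. A 3D PyTorch version with illustrative channel sizes follows (the octave-convolution half is omitted for brevity):

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Squeeze-and-excitation: re-weight channels by global context."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)        # squeeze
        self.fc = nn.Sequential(                   # excitation
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                               # channel re-weighting

x = torch.randn(2, 32, 8, 16, 16)                  # (batch, C, D, H, W)
print(SEBlock3D(32)(x).shape)
```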
Affiliation(s)
- Anne-Marie Rickmann
- Lab for Artificial Intelligence in Medical Imaging, Department of Medicine, Ludwig Maximilians University Munich, Munich, Germany
- Lab for Artificial Intelligence in Medical Imaging, Institute of Diagnostic and Interventional Radiology, Technical University of Munich, Munich, Germany
- Jyotirmay Senapati
- Lab for Artificial Intelligence in Medical Imaging, Department of Medicine, Ludwig Maximilians University Munich, Munich, Germany
- Oksana Kovalenko
- Lab for Artificial Intelligence in Medical Imaging, Institute of Diagnostic and Interventional Radiology, Technical University of Munich, Munich, Germany
- Annette Peters
- Institute of Epidemiology, Helmholtz Zentrum Munich, Munich, Germany
- Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, Medical Center-University of Freiburg, Faculty of Medicine, University Freiburg, Freiburg, Germany
- Christian Wachinger
- Lab for Artificial Intelligence in Medical Imaging, Department of Medicine, Ludwig Maximilians University Munich, Munich, Germany
- Lab for Artificial Intelligence in Medical Imaging, Institute of Diagnostic and Interventional Radiology, Technical University of Munich, Munich, Germany
114. Liu Y, Gargesha M, Scott B, Tchilibou Wane AO, Wilson DL. Deep learning multi-organ segmentation for whole mouse cryo-images including a comparison of 2D and 3D deep networks. Sci Rep 2022;12:15161. [PMID: 36071089] [PMCID: PMC9452525] [DOI: 10.1038/s41598-022-19037-3]
Abstract
Cryo-imaging provides 3D whole-mouse microscopic color anatomy and fluorescence images that enable biotechnology applications (e.g., stem cells and metastatic cancer). In this report, we compared three methods of organ segmentation: 2D U-Net with 2D slices and 3D U-Net with either whole-mouse 3D volumes or 3D patches. We evaluated the brain, thymus, lung, heart, liver, stomach, spleen, left and right kidneys, and bladder. With 63 training mice, the 2D-slice approach performed best, with median Dice scores of >0.9 and median Hausdorff distances of <1.2 mm in eightfold cross-validation for all organs except the bladder, a problem organ due to variable filling and poor contrast. Results were comparable to those of a second analyst on the same data. Regression analyses were performed to fit learning curves, which showed that the 2D-slice approach can succeed with fewer samples. Review and editing of 2D-slice segmentation results reduced human operator time from ~2 h to ~25 min, with reduced inter-observer variability. As demonstrations, we used organ segmentation to evaluate size changes in liver disease and to quantify the distribution of therapeutic mesenchymal stem cells in organs. With a 48-GB GPU, we determined that extra GPU RAM improved the performance of 3D deep learning because it allowed training at higher resolution.
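The learning-curve regression can be reproduced by fitting a saturating power law to Dice versus training-set size. The sample counts and Dice values below are invented for illustration; only the fitting procedure is the point:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    """Saturating performance curve: approaches `a` as n grows."""
    return a - b * n ** (-c)

n_train = np.array([5, 10, 20, 30, 40, 50, 63], dtype=float)
dice = np.array([0.70, 0.78, 0.84, 0.87, 0.89, 0.90, 0.91])

params, _ = curve_fit(power_law, n_train, dice, p0=(0.95, 0.5, 0.5),
                      maxfev=10000)
print("asymptotic Dice estimate:", params[0])
print("predicted Dice at n=100:", power_law(100.0, *params))
```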
Affiliation(s)
- Yiqiao Liu
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Bryan Scott
- BioInVision Inc, Suite E, 781 Beta Drive, Cleveland, OH, 44143, USA
- Arthure Olivia Tchilibou Wane
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- David L Wilson
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- BioInVision Inc, Suite E, 781 Beta Drive, Cleveland, OH, 44143, USA
- Department of Radiology, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
115. Cai B, Xiong C, Sun Z, Liang P, Wang K, Guo Y, Niu C, Song B, Cheng E, Luo X. Accurate preoperative path planning with coarse-to-refine segmentation for image guided deep brain stimulation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103867]
116. WORD: A large scale dataset, benchmark and clinical applicable study for abdominal organ segmentation from CT image. Med Image Anal 2022;82:102642. [DOI: 10.1016/j.media.2022.102642]
117. Song J, Chen X, Zhu Q, Shi F, Xiang D, Chen Z, Fan Y, Pan L, Zhu W. Global and Local Feature Reconstruction for Medical Image Segmentation. IEEE Trans Med Imaging 2022;41:2273-2284. [PMID: 35324437] [DOI: 10.1109/tmi.2022.3162111]
Abstract
Capturing long-range dependencies and restoring the spatial information of down-sampled feature maps are the foundations of encoder-decoder networks in medical image segmentation. U-Net-based methods use feature fusion to alleviate these two problems, but the global feature extraction and spatial information recovery abilities of U-Net are still insufficient. In this paper, we propose a Global Feature Reconstruction (GFR) module to efficiently capture global context features and a Local Feature Reconstruction (LFR) module to dynamically up-sample features. For the GFR module, we first extract global features with category representation from the feature map, then use these different-level global features to reconstruct features at each location. The GFR module establishes a connection for each pair of feature elements in the entire space from a global perspective and transfers semantic information from deep layers to shallow layers. For the LFR module, we use low-level feature maps to guide the up-sampling of high-level feature maps; specifically, we reconstruct features from local neighborhoods to transfer spatial information. Based on the encoder-decoder architecture, we propose a Global and Local Feature Reconstruction Network (GLFRNet), in which GFR modules serve as skip connections and LFR modules constitute the decoder path. The proposed GLFRNet was applied to four different medical image segmentation tasks and achieved state-of-the-art performance.
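A simplified reading of the GFR idea, pooling one descriptor per category via softmax class maps and then redistributing the descriptors to every location, can be sketched in PyTorch as follows. This is an interpretation for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalFeaturePool(nn.Module):
    """Pool class-wise global descriptors, then redistribute them."""
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Conv2d(channels, num_classes, 1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        # Attention over locations, one map per class: (b, k, h*w).
        attn = F.softmax(self.classifier(feat).flatten(2), dim=-1)
        class_desc = attn @ feat.flatten(2).transpose(1, 2)        # (b, k, c)
        # Each location receives a mixture of the class descriptors.
        mix = F.softmax(self.classifier(feat), dim=1).flatten(2)   # (b, k, h*w)
        recon = (mix.transpose(1, 2) @ class_desc).transpose(1, 2)
        return recon.view(b, c, h, w)

feat = torch.randn(2, 64, 32, 32)
print(GlobalFeaturePool(64, num_classes=4)(feat).shape)
```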
118. Burrows L, Chen K, Guo W, Hossack M, McWilliams RG, Torella F. Evaluation of a hybrid pipeline for automated segmentation of solid lesions based on mathematical algorithms and deep learning. Sci Rep 2022;12:14216. [PMID: 35987824] [PMCID: PMC9392778] [DOI: 10.1038/s41598-022-18173-0]
Abstract
We evaluate the accuracy of an original hybrid segmentation pipeline, combining variational and deep learning methods, in the segmentation of CT scans of stented aortic aneurysms, abdominal organs, and brain lesions. The hybrid pipeline is trained on 50 aortic CT scans and tested on 10. Additionally, we trained and tested the hybrid pipeline on publicly available datasets of CT scans of abdominal organs and MR scans of brain tumours. We tested the accuracy of the hybrid pipeline against a gold standard (manual segmentation) and compared its performance to that of a standard automated segmentation method using commonly used metrics: the Dice, Jaccard, and volumetric similarity (VS) coefficients, and the Hausdorff distance (HD). Results: the hybrid pipeline produced very accurate segmentations of the aorta, with mean Dice, Jaccard, and VS coefficients of 0.909, 0.837, and 0.972 for thrombus segmentation and 0.937, 0.884, and 0.970 for stent and lumen segmentation. It consistently outperformed the standard automated method. Similar results were observed when the hybrid pipeline was trained and tested on publicly available datasets, with mean Dice scores of 0.832 for brain tumour segmentation and 0.894/0.841/0.853/0.847/0.941 for left kidney/right kidney/spleen/aorta/liver segmentation.
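All four reported metrics can be computed directly from binary masks. A NumPy/SciPy sketch on synthetic volumes (the Hausdorff distance here is taken over voxel coordinates rather than mesh surfaces):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def volumetric_similarity(a, b):
    return 1 - abs(a.sum() - b.sum()) / (a.sum() + b.sum())

def hausdorff(a, b):
    # Symmetric HD: max of the two directed distances over voxel coords.
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

rng = np.random.default_rng(7)
gt = rng.random((24, 24, 24)) > 0.6
pred = rng.random((24, 24, 24)) > 0.6
print(dice(gt, pred), jaccard(gt, pred),
      volumetric_similarity(gt, pred), hausdorff(gt, pred))
```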
Affiliation(s)
- Liam Burrows
- Centre for Mathematical Imaging Techniques and Department of Mathematical Sciences, University of Liverpool, Liverpool, L69 7ZL, UK
- Ke Chen
- Centre for Mathematical Imaging Techniques and Department of Mathematical Sciences, University of Liverpool, Liverpool, L69 7ZL, UK
- Weihong Guo
- Department of Mathematics, Applied Mathematics and Statistics, Case Western Reserve University, Cleveland, OH, 44106, USA
- Martin Hossack
- Liverpool Vascular and Endovascular Service, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
- Francesco Torella
- Liverpool Vascular and Endovascular Service, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
119. Dourthe B, Shaikh N, Pai S A, Fels S, Brown SHM, Wilson DR, Street J, Oxland TR. Automated Segmentation of Spinal Muscles From Upright Open MRI Using a Multiscale Pyramid 2D Convolutional Neural Network. Spine (Phila Pa 1976) 2022;47:1179-1186. [PMID: 34919072] [DOI: 10.1097/brs.0000000000004308]
Abstract
STUDY DESIGN Randomized trial. OBJECTIVE To implement an algorithm enabling the automated segmentation of spinal muscles from open magnetic resonance images in healthy volunteers and patients with adult spinal deformity (ASD). SUMMARY OF BACKGROUND DATA Understanding spinal muscle anatomy is critical to diagnosing and treating spinal deformity. Muscle boundaries can be extrapolated from medical images using segmentation, which is usually done manually by clinical experts and remains complicated and time-consuming. METHODS Three groups were examined: two healthy volunteer groups (N = 6 for each group) and one ASD group (N = 8 patients). Participants were imaged at the lumbar and thoracic regions of the spine in an upright open magnetic resonance imaging scanner while maintaining different postures (various seated, standing, and supine). For each group and region, a selection of regions of interest (ROIs) was manually segmented. A multiscale pyramid two-dimensional convolutional neural network was implemented to automatically segment all defined ROIs. A five-fold cross-validation method was applied; distinct models were trained for each resulting set and group and evaluated using Dice coefficients calculated between the model output and the manually segmented target. RESULTS Good to excellent results were found across all ROIs for both the ASD (Dice coefficient > 0.76) and healthy (Dice coefficient > 0.86) groups. CONCLUSION This study represents a fundamental step toward the development of an automated spinal muscle property extraction pipeline, which will ultimately give clinicians easier access to patient-specific simulations, diagnosis, and treatment.
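The five-fold protocol can be sketched with scikit-learn's KFold; `train_and_evaluate` below is a placeholder for the actual training and Dice evaluation of the segmentation network:

```python
import numpy as np
from sklearn.model_selection import KFold

def train_and_evaluate(train_idx, test_idx, rng):
    """Placeholder for training a model on train_idx and returning
    the mean Dice on test_idx; here it just emits a stand-in value."""
    return 0.85 + 0.05 * rng.random()

subjects = np.arange(20)                  # e.g., 20 imaged participants
rng = np.random.default_rng(3)
scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                 random_state=0).split(subjects):
    scores.append(train_and_evaluate(train_idx, test_idx, rng))
print("mean Dice over folds:", np.mean(scores))
```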
Affiliation(s)
- Benjamin Dourthe
- ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Noor Shaikh
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
- Anoosha Pai S
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Sidney Fels
- Electrical and Computer Engineering Department, University of British Columbia, Vancouver, BC, Canada
- Stephen H M Brown
- Department of Human Health and Nutritional Sciences, University of Guelph, Guelph, ON, Canada
- David R Wilson
- ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Centre for Hip Health and Mobility, University of British Columbia, Vancouver, BC, Canada
- John Street
- ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Thomas R Oxland
- ICORD, Blusson Spinal Cord Centre, University of British Columbia, Vancouver, BC, Canada
- Department of Orthopaedics, University of British Columbia, Vancouver, BC, Canada
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
120. Liu J, Cui Z, Desrosiers C, Lu S, Zhou Y. Grayscale self-adjusting network with weak feature enhancement for 3D lumbar anatomy segmentation. Med Image Anal 2022;81:102567. [PMID: 35994969] [DOI: 10.1016/j.media.2022.102567]
Abstract
The automatic segmentation of lumbar anatomy is a fundamental problem in the diagnosis and treatment of lumbar disease. Recent developments in deep learning have led to remarkable progress in this task, including the possibility of segmenting nerve roots, intervertebral discs, and the dural sac in a single step. Despite these advances, lumbar anatomy segmentation remains challenging due to the weak contrast and noise of input images, as well as the variability of intensities and sizes of lumbar structures across subjects. To overcome these challenges, we propose a coarse-to-fine deep neural network framework for lumbar anatomy segmentation that improves accuracy through two strategies. First, a progressive refinement process is employed to correct low-confidence regions by enhancing the feature representation in these regions. Second, a grayscale self-adjusting network (GSA-Net) is proposed to dynamically optimize the distribution of intensities. Experiments on datasets comprising 3D computed tomography (CT) and magnetic resonance (MR) images show the advantage of our method over current segmentation approaches and its potential for the diagnosis and treatment of lumbar disease.
Affiliation(s)
- Jinhua Liu
- School of Software, Shandong University, Jinan, China
- Zhiming Cui
- Department of Computer Science, The University of Hong Kong, Hong Kong Special Administrative Region of China
- Christian Desrosiers
- Software and IT Engineering Department, École de technologie supérieure, Montreal, Canada
- Shuyi Lu
- School of Software, Shandong University, Jinan, China
- Yuanfeng Zhou
- School of Software, Shandong University, Jinan, China
121. Laino ME, Ammirabile A, Lofino L, Mannelli L, Fiz F, Francone M, Chiti A, Saba L, Orlandi MA, Savevski V. Artificial Intelligence Applied to Pancreatic Imaging: A Narrative Review. Healthcare (Basel) 2022;10:1511. [PMID: 36011168] [PMCID: PMC9408381] [DOI: 10.3390/healthcare10081511]
Abstract
The diagnosis, evaluation, and treatment planning of pancreatic pathologies usually require the combined use of different imaging modalities, mainly, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). Artificial intelligence (AI) has the potential to transform the clinical practice of medical imaging and has been applied to various radiological techniques for different purposes, such as segmentation, lesion detection, characterization, risk stratification, or prediction of response to treatments. The aim of the present narrative review is to assess the available literature on the role of AI applied to pancreatic imaging. Up to now, the use of computer-aided diagnosis (CAD) and radiomics in pancreatic imaging has proven to be useful for both non-oncological and oncological purposes and represents a promising tool for personalized approaches to patients. Although great developments have occurred in recent years, it is important to address the obstacles that still need to be overcome before these technologies can be implemented into our clinical routine, mainly considering the heterogeneity among studies.
Affiliation(s)
- Maria Elena Laino
- Artificial Intelligence Center, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Angela Ammirabile
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Ludovica Lofino
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Francesco Fiz
- Nuclear Medicine Unit, Department of Diagnostic Imaging, E.O. Ospedali Galliera, 56321 Genoa, Italy
- Department of Nuclear Medicine and Clinical Molecular Imaging, University Hospital, 72074 Tübingen, Germany
- Marco Francone
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Arturo Chiti
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Nuclear Medicine, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Luca Saba
- Department of Radiology, University of Cagliari, 09124 Cagliari, Italy
- Victor Savevski
- Artificial Intelligence Center, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
122. Gou S, Xu Y, Yang H, Tong N, Zhang X, Wei L, Zhao L, Zheng M, Liu W. Automated cervical tumor segmentation on MR images using multi-view feature attention network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103832]
123. Park S, Chung M. Cardiac segmentation on CT Images through shape-aware contour attentions. Comput Biol Med 2022;147:105782. [DOI: 10.1016/j.compbiomed.2022.105782]
|
124
|
Im JH, Lee IJ, Choi Y, Sung J, Ha JS, Lee H. Impact of Denoising on Deep-Learning-Based Automatic Segmentation Framework for Breast Cancer Radiotherapy Planning. Cancers (Basel) 2022; 14:3581. [PMID: 35892839 PMCID: PMC9332287 DOI: 10.3390/cancers14153581] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 07/08/2022] [Accepted: 07/20/2022] [Indexed: 02/04/2023] Open
Abstract
Objective: This study aimed to investigate the segmentation accuracy of organs at risk (OARs) when denoised computed tomography (CT) images are used as input data for a deep-learning-based auto-segmentation framework. Methods: We used non-contrast enhanced planning CT scans from 40 patients with breast cancer. The heart, lungs, esophagus, spinal cord, and liver were manually delineated by two experienced radiation oncologists in a double-blind manner. The denoised CT images were used as input data for the AccuContour™ segmentation software to increase the signal difference between structures of interest and unwanted noise in non-contrast CT. The accuracy of the segmentation was assessed using the Dice similarity coefficient (DSC), and the results were compared with those of conventional deep-learning-based auto-segmentation without denoising. Results: The average DSC outcomes were higher than 0.80 for all OARs except for the esophagus. AccuContour™-based and denoising-based auto-segmentation demonstrated comparable performance for the lungs and spinal cord but showed limited performance for the esophagus. For the liver, the improvement from denoising-based auto-segmentation was small but statistically significant, with a better DSC than AccuContour™-based auto-segmentation (p < 0.05). Conclusions: Denoising-based auto-segmentation demonstrated satisfactory performance in automatic liver segmentation from non-contrast enhanced CT scans. Further external validation studies with larger cohorts are needed to verify the usefulness of denoising-based auto-segmentation.
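Since nearly every entry in this list reports a Dice similarity coefficient (DSC), a minimal NumPy sketch of the metric may help readers interpret the scores; the function and the toy masks below are illustrative and are not taken from any cited paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 is none."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)

# Toy example: two partially overlapping square masks.
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
print(f"DSC = {dice_coefficient(a, b):.3f}")  # ≈ 0.56 for this toy pair
```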
Affiliation(s)
- Jung Ho Im: CHA Bundang Medical Center, Department of Radiation Oncology, CHA University School of Medicine, Seongnam 13496, Korea
- Ik Jae Lee: Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Yeonho Choi: Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea
- Jiwon Sung: Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Jin Sook Ha: Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea
- Ho Lee: Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
|
125
|
Computational Methods for Neuron Segmentation in Two-Photon Calcium Imaging Data: A Survey. Appl Sci (Basel) 2022. [DOI: 10.3390/app12146876] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Calcium imaging has rapidly become a methodology of choice for real-time in vivo neuron analysis. Its application to large sets of data requires automated tools to annotate and segment cells, allowing scalable image segmentation under reproducible criteria. In this paper, we review and summarize the most recent methods for computational segmentation of calcium imaging. The contributions of the paper are threefold: we provide an overview of the main algorithms, taxonomized into three categories (signal processing, matrix factorization, and machine learning-based approaches); we highlight the main advantages and disadvantages of each category; and we summarize the performance of the methods that have been tested on public benchmarks (with links to the public code when available).
|
126
|
Kan H, Shi J, Zhao M, Wang Z, Han W, An H, Wang Z, Wang S. ITUnet: Integration Of Transformers And Unet For Organs-At-Risk Segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:2123-2127. [PMID: 36085940 DOI: 10.1109/embc48229.2022.9871945] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Recently, convolutional neural networks (CNNs) have achieved great success in medical image segmentation. However, because of the limited receptive field of convolutions, it is difficult for purely convolutional networks to improve their performance further. Given the outstanding ability of transformers to extract long-range dependencies, some works have successfully applied them to computer vision and achieved better results than CNNs on some tasks. Since transformers can remedy this shortcoming of CNNs, in this paper we propose ITUnet, a segmentation network using CNNs and transformers as feature extractors. The combination of CNNs and transformers enables the network to learn both short- and long-range dependencies of features, which is beneficial for segmentation tasks. We evaluate our method on a head-and-neck CT dataset with 18 kinds of organs to be segmented. The experimental results demonstrate that our proposed method shows better accuracy and robustness than existing methods, achieving a Dice score of 77.72 and a 95% Hausdorff distance of 2.31.
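As a rough PyTorch illustration of the CNN-plus-transformer idea described above (a generic sketch, not the authors' ITUnet; all module and parameter names are assumptions):

```python
import torch
import torch.nn as nn

class HybridEncoder(nn.Module):
    """Toy feature extractor: a convolutional stem captures short-range
    patterns, then a transformer over the flattened feature map models
    long-range dependencies."""
    def __init__(self, in_ch=1, dim=64, heads=4, layers=2):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, x):
        f = self.stem(x)                       # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        tokens = self.transformer(tokens)      # long-range mixing
        return tokens.transpose(1, 2).reshape(b, c, h, w)

x = torch.randn(1, 1, 64, 64)    # e.g. one CT slice crop
print(HybridEncoder()(x).shape)  # torch.Size([1, 64, 16, 16])
```

A decoder (for example a U-Net-style upsampling path) would then turn these features into per-pixel organ labels.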
|
127
|
Chen X, Wang X, Zhang K, Fung KM, Thai TC, Moore K, Mannel RS, Liu H, Zheng B, Qiu Y. Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal 2022; 79:102444. [PMID: 35472844 PMCID: PMC9156578 DOI: 10.1016/j.media.2022.102444] [Citation(s) in RCA: 275] [Impact Index Per Article: 91.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 03/09/2022] [Accepted: 04/01/2022] [Indexed: 02/07/2023]
Abstract
Deep learning has received extensive research interest in developing new medical image processing algorithms, and deep learning based models have been remarkably successful in a variety of medical imaging tasks supporting disease detection and diagnosis. Despite this success, further improvement of deep learning models in medical image analysis is bottlenecked mainly by the lack of large, well-annotated datasets. In the past five years, many studies have focused on addressing this challenge. In this paper, we review and summarize these recent studies to provide a comprehensive overview of applying deep learning methods to various medical image analysis tasks. In particular, we emphasize the latest progress and contributions of state-of-the-art unsupervised and semi-supervised deep learning in medical image analysis, summarized by application scenario, including classification, segmentation, detection, and image registration. We also discuss major technical challenges and suggest possible solutions for future research.
Affiliation(s)
- Xuxin Chen: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Ximin Wang: School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Ke Zhang: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Kar-Ming Fung: Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Theresa C Thai: Department of Radiology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Kathleen Moore: Department of Obstetrics and Gynecology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Robert S Mannel: Department of Obstetrics and Gynecology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Hong Liu: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Bin Zheng: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yuchen Qiu: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
|
128
|
Baxter JSH, Jannin P. Combining simple interactivity and machine learning: a separable deep learning approach to subthalamic nucleus localization and segmentation in MRI for deep brain stimulation surgical planning. J Med Imaging (Bellingham) 2022; 9:045001. [PMID: 35836671 DOI: 10.1117/1.jmi.9.4.045001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 06/16/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: Deep brain stimulation (DBS) is an interventional treatment for some neurological and neurodegenerative diseases. For example, in Parkinson's disease, DBS electrodes are positioned at particular locations within the basal ganglia to alleviate the patient's motor symptoms. These interventions depend greatly on a preoperative planning stage in which potential targets and electrode trajectories are identified in a preoperative MRI. Due to the small size and low contrast of targets such as the subthalamic nucleus (STN), their segmentation is a difficult task. Machine learning provides a potential avenue for development, but it has difficulty in segmenting such small structures in volumetric images due to additional problems such as segmentation class imbalance. Approach: We present a two-stage separable learning workflow for STN segmentation consisting of a localization step that detects the STN and crops the image to a small region and a segmentation step that delineates the structure within that region. The goal of this decoupling is to improve accuracy and efficiency and to provide an intermediate representation that can be easily corrected by a clinical user. This correction capability was then studied through a human-computer interaction experiment with seven novice participants and one expert neurosurgeon. Results: Our two-step segmentation significantly outperforms the comparative registration-based method currently used in clinic and approaches the fundamental limit on variability due to the image resolution. In addition, the human-computer interaction experiment shows that the additional interaction mechanism allowed by separating STN segmentation into two steps significantly improves the users' ability to correct errors and further improves performance. Conclusions: Our method shows that separable learning not only is feasible for fully automatic STN segmentation but also leads to improved interactivity that can ease its translation into clinical use.
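The hand-off between the two stages, cropping a small region around the localization output before running the fine segmentation, can be sketched as follows. This is a simplified illustration under assumed conventions (peak of a heat-map as the detection rule, a fixed cube size), not the authors' pipeline.

```python
import numpy as np

def crop_around_peak(volume, heatmap, size=32):
    """Stage-1 to stage-2 hand-off: crop a fixed-size cube around the peak
    of a localization heat-map, clipped to stay inside the volume."""
    peak = np.array(np.unravel_index(np.argmax(heatmap), heatmap.shape))
    lo = np.clip(peak - size // 2, 0, np.array(volume.shape) - size)
    sl = tuple(slice(int(l), int(l) + size) for l in lo)
    return volume[sl], sl  # the slices let us paste the fine mask back later

vol = np.random.rand(128, 128, 96)
heat = np.zeros_like(vol)
heat[60, 70, 40] = 1.0               # stand-in for a detection network output
patch, sl = crop_around_peak(vol, heat)
print(patch.shape)                   # (32, 32, 32): a small, class-balanced stage-2 input
```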
Affiliation(s)
- John S H Baxter: Université de Rennes 1, Laboratoire Traitement du Signal et de l'Image (INSERM UMR 1099), Rennes, France
- Pierre Jannin: Université de Rennes 1, Laboratoire Traitement du Signal et de l'Image (INSERM UMR 1099), Rennes, France
|
129
|
Berzoini R, Colombo AA, Bardini S, Conelli A, D'Arnese E, Santambrogio MD. An Optimized U-Net for Unbalanced Multi-Organ Segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:3764-3767. [PMID: 36085901 DOI: 10.1109/embc48229.2022.9871288] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Medical practice is shifting towards the automation and standardization of the most repetitive procedures to speed up the time-to-diagnosis. Semantic segmentation represents a critical stage in identifying a broad spectrum of regions of interest within medical images. Indeed, it identifies relevant objects by attributing to each image pixel a value representing pre-determined classes. Despite the relative ease of visually locating organs in the human body, automated multi-organ segmentation is hindered by the variety of shapes and dimensions of organs and by computational resources. Within this context, we propose BIONET, a U-Net-based Fully Convolutional Network for efficient semantic segmentation of abdominal organs. BIONET deals with the unbalanced data distribution related to the physiological conformation of the considered organs, reaching good accuracy across variable organ dimensions with low variance, a Weighted Global Dice Score of 93.74 ± 1.1%, and an inference performance of 138 frames per second. Clinical Relevance - This work establishes a starting point for developing an automatic tool for semantic segmentation of variable-sized organs within the abdomen, reaching considerable accuracy on small and large organs with low variability.
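One common recipe for handling such unbalanced class distributions is the generalized (inverse-volume-weighted) soft Dice loss, sketched below in PyTorch; this is a generic formulation, not necessarily BIONET's exact loss, and the tensor shapes are assumptions.

```python
import torch

def generalized_dice_loss(probs, target_onehot, eps=1e-6):
    """Soft Dice loss with inverse-volume class weights, so that small
    organs are not drowned out by large ones.
    probs, target_onehot: (B, C, H, W); probs sums to 1 over C."""
    dims = (0, 2, 3)
    weights = 1.0 / (target_onehot.sum(dims) ** 2 + eps)   # per-class weight
    inter = (probs * target_onehot).sum(dims)
    card = (probs + target_onehot).sum(dims)
    dice = 2.0 * (weights * inter).sum() / ((weights * card).sum() + eps)
    return 1.0 - dice

probs = torch.softmax(torch.randn(2, 4, 64, 64), dim=1)
labels = torch.randint(0, 4, (2, 64, 64))
target = torch.nn.functional.one_hot(labels, 4).permute(0, 3, 1, 2).float()
print(generalized_dice_loss(probs, target))
```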
|
130
|
Zhuang M, Chen Z, Wang H, Tang H, He J, Qin B, Yang Y, Jin X, Yu M, Jin B, Li T, Kettunen L. AnatomySketch: An Extensible Open-Source Software Platform for Medical Image Analysis Algorithm Development. J Digit Imaging 2022; 35:1623-1633. [PMID: 35768752 DOI: 10.1007/s10278-022-00660-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2021] [Revised: 05/07/2022] [Accepted: 05/18/2022] [Indexed: 11/25/2022] Open
Abstract
The development of a medical image analysis algorithm is a complex process including the multiple sub-steps of model training, data visualization, human-computer interaction, and graphical user interface (GUI) construction. To accelerate the development process, algorithm developers need a software tool to assist with all the sub-steps so that they can focus on implementing the core functionality. In particular, for the development of deep learning (DL) algorithms, a software tool supporting training data annotation and GUI construction is highly desirable. In this work, we constructed AnatomySketch, an extensible open-source software platform with a friendly GUI and a flexible plugin interface for integrating user-developed algorithm modules. Through the plugin interface, algorithm developers can quickly create a GUI-based software prototype for clinical validation. AnatomySketch supports image annotation using the stylus and multi-touch screen. It also provides efficient tools to facilitate collaboration between human experts and artificial intelligence (AI) algorithms. We demonstrate four exemplar applications: customized MRI image diagnosis, interactive lung lobe segmentation, human-AI collaborated spine disc segmentation, and Annotation-by-iterative-Deep-Learning (AID) for DL model training. Using AnatomySketch, the gap between laboratory prototyping and clinical testing is bridged and the development of medical image analysis algorithms is accelerated. The software is openly available at https://github.com/DlutMedimgGroup/AnatomySketch-Software.
Affiliation(s)
- Mingrui Zhuang: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Zhonghua Chen: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China; Faculty of Information Technology, University of Jyväskylä, 40100, Jyväskylä, Finland
- Hongkai Wang: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China; Liaoning Key Laboratory of Integrated Circuit and Biomedical Electronic System, Dalian, 116024, China
- Hong Tang: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Jiang He: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Bobo Qin: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Yuxin Yang: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Xiaoxian Jin: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Mengzhu Yu: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Baitao Jin: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Taijing Li: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Lauri Kettunen: Faculty of Information Technology, University of Jyväskylä, 40100, Jyväskylä, Finland
|
131
|
A Survey on Deep Learning for Precision Oncology. Diagnostics (Basel) 2022; 12:diagnostics12061489. [PMID: 35741298 PMCID: PMC9222056 DOI: 10.3390/diagnostics12061489] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Revised: 06/14/2022] [Accepted: 06/14/2022] [Indexed: 12/27/2022] Open
Abstract
Precision oncology, which ensures optimized cancer treatment tailored to the unique biology of a patient’s disease, has rapidly developed and is of great clinical importance. Deep learning has become the main method for precision oncology. This paper summarizes the recent deep-learning approaches relevant to precision oncology and reviews over 150 articles within the last six years. First, we survey the deep-learning approaches categorized by various precision oncology tasks, including the estimation of dose distribution for treatment planning, survival analysis and risk estimation after treatment, prediction of treatment response, and patient selection for treatment planning. Secondly, we provide an overview of the studies per anatomical area, including the brain, bladder, breast, bone, cervix, esophagus, gastric, head and neck, kidneys, liver, lung, pancreas, pelvis, prostate, and rectum. Finally, we highlight the challenges and discuss potential solutions for future research directions.
|
132
|
Chai C, Wu M, Wang H, Cheng Y, Zhang S, Zhang K, Shen W, Liu Z, Xia S. CAU-Net: A Deep Learning Method for Deep Gray Matter Nuclei Segmentation. Front Neurosci 2022; 16:918623. [PMID: 35720705 PMCID: PMC9204516 DOI: 10.3389/fnins.2022.918623] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Accepted: 05/03/2022] [Indexed: 12/04/2022] Open
Abstract
Abnormal iron deposition in the deep gray matter nuclei is related to many neurological diseases. With the quantitative susceptibility mapping (QSM) technique, it is possible to quantitatively measure brain iron content in vivo. To assess the magnetic susceptibility of the deep gray matter nuclei in QSM, the nuclei of interest must first be segmented, and many automatic methods have been proposed in the literature. This study proposed a contrast attention U-Net for nuclei segmentation and evaluated its performance on two datasets acquired using different sequences, with different parameters, from different MRI devices. Experimental results revealed that our proposed method outperformed other commonly adopted network structures on both datasets. The impacts of training and inference strategies were also discussed; adopting test-time augmentation during the inference stage yielded a clear improvement. At the training stage, our results indicated that sufficient data augmentation, deep supervision, and nonuniform patch sampling contributed significantly to improving segmentation accuracy, indicating that appropriate choices of training and inference strategies are at least as important as designing more advanced network structures.
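Test-time augmentation of the kind the authors found beneficial is easy to sketch: run the model on several flipped copies of the input and average the un-flipped predictions. The snippet below is a generic NumPy illustration with a stand-in model, not the paper's implementation.

```python
import numpy as np

def tta_predict(predict_fn, volume):
    """Average predictions over single-axis flips (plus the identity)."""
    augmentations = [(), (0,), (1,), (2,)]
    acc = np.zeros(volume.shape, dtype=float)
    for ax in augmentations:
        flipped = np.flip(volume, ax) if ax else volume
        pred = predict_fn(flipped)
        acc += np.flip(pred, ax) if ax else pred  # undo the flip on the output
    return acc / len(augmentations)

fake_model = lambda v: (v > 0.5).astype(float)  # stand-in for a trained CNN
vol = np.random.rand(32, 32, 32)
print(tta_predict(fake_model, vol).shape)       # (32, 32, 32)
```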
Affiliation(s)
- Chao Chai: Department of Radiology, Tianjin Institute of Imaging Medicine, Tianjin First Central Hospital, School of Medicine, Nankai University, Tianjin, China
- Mengran Wu: College of Electronic Information and Optical Engineering, Nankai University, Tianjin, China
- Huiying Wang: School of Medicine, Nankai University, Tianjin, China
- Yue Cheng: Department of Radiology, Tianjin Institute of Imaging Medicine, Tianjin First Central Hospital, School of Medicine, Nankai University, Tianjin, China
- Kun Zhang: Department of Radiology, Tianjin Institute of Imaging Medicine, Tianjin First Central Hospital, School of Medicine, Nankai University, Tianjin, China
- Wen Shen: Department of Radiology, Tianjin Institute of Imaging Medicine, Tianjin First Central Hospital, School of Medicine, Nankai University, Tianjin, China
- Zhiyang Liu (corresponding author): College of Electronic Information and Optical Engineering, Nankai University, Tianjin, China; Tianjin Key Laboratory of Optoelectronic Sensor and Sensing Network Technology, Tianjin, China
- Shuang Xia (corresponding author): Department of Radiology, Tianjin Institute of Imaging Medicine, Tianjin First Central Hospital, School of Medicine, Nankai University, Tianjin, China
|
133
|
Cao J, Lai H, Zhang J, Zhang J, Xie T, Wang H, Bu J, Feng Q, Huang M. 2D-3D cascade network for glioma segmentation in multisequence MRI images using multiscale information. Comput Methods Programs Biomed 2022; 221:106894. [PMID: 35613498 DOI: 10.1016/j.cmpb.2022.106894] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 04/21/2022] [Accepted: 05/14/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Glioma segmentation is an important procedure for treatment planning and follow-up evaluation of patients with glioma. UNet-based networks are widely used in medical image segmentation tasks and have achieved state-of-the-art performance. However, context information along the third dimension is ignored in 2D convolutions, whereas the difference between z-axis and in-plane resolutions is large in 3D convolutions. Moreover, an original UNet structure cannot capture fine details because of the reduced resolution of feature maps near the bottleneck layers. METHODS To address these issues, a novel 2D-3D cascade network with a multiscale information module is proposed for the multiclass segmentation of gliomas in multisequence MRI images. First, a 2D network is applied to fully exploit potential intra-slice features. A variational autoencoder module is incorporated into 2D DenseUNet to regularize a shared encoder, extract useful information, and represent glioma heterogeneity. Second, a 3D DenseUNet is integrated with the 2D network in cascade mode to extract useful inter-slice features and alleviate the influence of the large difference between z-axis and in-plane resolutions. Moreover, a multiscale information module is used in the 2D and 3D networks to further capture the fine details of gliomas. Finally, the whole 2D-3D cascade network is trained in an end-to-end manner, where the intra-slice and inter-slice features are fused and optimized jointly to take full advantage of 3D image information. RESULTS Our method is evaluated on publicly available and clinical datasets and achieves competitive performance on both. CONCLUSIONS These results indicate that the proposed method may be a useful tool for glioma segmentation.
Affiliation(s)
- Jianyun Cao: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Zhujiang Hospital, Southern Medical University, Guangzhou 510282, China
- Haoran Lai: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Jiawei Zhang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Junde Zhang: Zhujiang Hospital, Southern Medical University, Guangzhou 510282, China
- Tao Xie: Zhujiang Hospital, Southern Medical University, Guangzhou 510282, China
- Heqing Wang: Zhujiang Hospital, Southern Medical University, Guangzhou 510282, China
- Junguo Bu: Zhujiang Hospital, Southern Medical University, Guangzhou 510282, China
- Qianjin Feng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Meiyan Huang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
|
134
|
Altini N, Prencipe B, Cascarano GD, Brunetti A, Brunetti G, Triggiani V, Carnimeo L, Marino F, Guerriero A, Villani L, Scardapane A, Bevilacqua V. Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.157] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
135
|
Chen A, Chen F, Li X, Zhang Y, Chen L, Chen L, Zhu J. A Feasibility Study of Deep Learning-Based Auto-Segmentation Directly Used in VMAT Planning Design and Optimization for Cervical Cancer. Front Oncol 2022; 12:908903. [PMID: 35719942 PMCID: PMC9198405 DOI: 10.3389/fonc.2022.908903] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Accepted: 05/06/2022] [Indexed: 12/02/2022] Open
Abstract
Purpose To investigate the dosimetric impact on target volumes and organs at risk (OARs) when unmodified auto-segmented OAR contours are directly used in the design of treatment plans. Materials and Methods Data from a total of 127 patients with cervical cancer were collected for retrospective analysis, including 105 patients in the training set and 22 patients in the testing set. The 3D U-Net architecture was used for model training and auto-segmentation of nine types of organs at risk. The auto-segmented and manually segmented organ contours were used for treatment plan optimization to obtain the AS-VMAT (automatic segmentations VMAT) plan and the MS-VMAT (manual segmentations VMAT) plan, respectively. Geometric accuracy between the manual and predicted contours was evaluated using the Dice similarity coefficient (DSC), mean distance-to-agreement (MDA), and Hausdorff distance (HD). The dose volume histogram (DVH) and the gamma passing rate were used to identify the dose differences between the AS-VMAT plan and the MS-VMAT plan. Results Average DSC, MDA and HD95 across all OARs were 0.82–0.96, 0.45–3.21 mm, and 2.30–17.31 mm on the testing set, respectively. The D99% in the rectum and the Dmean in the spinal cord were 6.04 Gy (P = 0.037) and 0.54 Gy (P = 0.026) higher, respectively, in the AS-VMAT plans than in the MS-VMAT plans. The V20, V30, and V40 in the rectum increased by 1.35% (P = 0.027), 1.73% (P = 0.021), and 1.96% (P = 0.008), respectively, whereas the V10 in the spinal cord increased by 1.93% (P = 0.011). The differences in other dosimetry parameters were not statistically significant. The gamma passing rates in the clinical target volume (CTV) were 92.72% and 98.77%, respectively, using the 2%/2 mm and 3%/3 mm criteria, which satisfied the clinical requirements. Conclusions The dose distributions of target volumes were unaffected when auto-segmented organ contours were used in the design of treatment plans, whereas the impact of automated segmentation on the doses to OARs was complicated. We suggest that the auto-segmented contours of tissues in close proximity to the target volume need to be carefully checked and corrected when necessary.
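The dosimetry parameters quoted above (Dmean, V10, V20, and so on) all derive from the cumulative dose-volume histogram, which can be computed directly from a dose grid and a structure mask. The snippet below is a toy illustration on synthetic data, not the planning system's algorithm.

```python
import numpy as np

def cumulative_dvh(dose, mask, bins=100):
    """Cumulative DVH for one structure: for each dose level, the fraction
    of the structure receiving at least that dose."""
    d = dose[mask]
    levels = np.linspace(0.0, d.max(), bins)
    frac = np.array([(d >= lv).mean() for lv in levels])
    return levels, frac

rng = np.random.default_rng(0)
dose = rng.gamma(shape=8.0, scale=4.0, size=(64, 64, 64))  # toy dose grid (Gy)
structure = np.zeros(dose.shape, dtype=bool)
structure[20:30, 20:40, 10:30] = True                      # box-shaped OAR mask

levels, frac = cumulative_dvh(dose, structure)
idx = min(np.searchsorted(levels, 20.0), len(levels) - 1)
print(f"V20 ≈ {100 * frac[idx]:.1f}% of the structure volume")
```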
Affiliation(s)
- Along Chen: Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Fei Chen: School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou, China
- Xiaofang Li: Department of Radiation Oncology, The Second Affiliated Hospital of Zunyi Medical University, Zunyi, China
- Yazhi Zhang: Department of Oncology and Hematology, The Six People’s Hospital of Huizhou City, Huiyang Hospital Affiliated to Southern Medical University, Huizhou, China
- Li Chen: Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Lixin Chen (corresponding author): Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Jinhan Zhu (corresponding author): Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
|
136
|
Li C, Mao Y, Guo Y, Li J, Wang Y. Multi-Dimensional Cascaded Net with Uncertain Probability Reduction for Abdominal Multi-Organ Segmentation in CT Sequences. Comput Methods Programs Biomed 2022; 221:106887. [PMID: 35597204 DOI: 10.1016/j.cmpb.2022.106887] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Revised: 05/03/2022] [Accepted: 05/11/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Deep learning abdominal multi-organ segmentation provides preoperative guidance for abdominal surgery. However, due to the large volume of 3D CT sequences, existing methods cannot balance complete semantic features against high-resolution detail information, which leads to uncertain, rough, and inaccurate segmentation, especially of small and irregular organs. In this paper, we propose a two-stage algorithm named multi-dimensional cascaded net (MDCNet) to solve these problems and segment multiple organs in CT images, including the spleen, kidney, gallbladder, esophagus, liver, stomach, pancreas, and duodenum. METHODS MDCNet combines the powerful semantic encoding ability of a 3D net with the rich high-resolution information of a 2.5D net. In stage 1, a prior-guided shallow-layer-enhanced 3D location net extracts entire semantic features from a downsampled CT volume to perform a rough segmentation. Additionally, we use circular inference and a parametric Dice loss to alleviate boundary uncertainty. The inputs of stage 2 are high-resolution slices obtained from the original image and the coarse segmentation of stage 1. Stage 2 recovers the details lost during downsampling, resulting in smooth and accurate refined contours. The 2.5D net operating on axial, coronal, and sagittal views also compensates for the missing spatial information of a single view. RESULTS The experiments on two datasets both obtained the best performance, particularly higher Dice scores on small gallbladders and irregular duodenums, which reached 0.85±0.12 and 0.77±0.07 respectively, increasing by 0.02 and 0.03 compared to the state-of-the-art method. CONCLUSION Our method can extract full semantic and high-resolution detail information from a large-volume CT image. It reduces boundary uncertainty while yielding smoother segmentation edges, indicating good prospects for clinical application.
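The 2.5D component of stage 2, running a 2D predictor along the axial, coronal, and sagittal axes and fusing the three resulting volumes, can be illustrated generically as below; the simple averaging shown is an assumption, not MDCNet's exact fusion rule.

```python
import numpy as np

def fuse_three_views(predict_slice, volume):
    """Apply a 2D predictor slice-by-slice along each of the three axes
    and average the resulting probability volumes."""
    fused = np.zeros(volume.shape, dtype=float)
    for axis in range(3):                     # axial, coronal, sagittal
        moved = np.moveaxis(volume, axis, 0)  # slice along the leading axis
        pred = np.stack([predict_slice(s) for s in moved])
        fused += np.moveaxis(pred, 0, axis)   # restore the original layout
    return fused / 3.0

toy_net = lambda s: (s > s.mean()).astype(float)  # stand-in for a 2D CNN
vol = np.random.rand(24, 32, 40)
print(fuse_three_views(toy_net, vol).shape)       # (24, 32, 40)
```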
Affiliation(s)
- Chengkang Li: School of Information Science and Technology of Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
- Yishen Mao: Department of Pancreatic Surgery, Pancreatic Disease Institute, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Yi Guo: School of Information Science and Technology of Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
- Ji Li: Department of Pancreatic Surgery, Pancreatic Disease Institute, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Yuanyuan Wang: School of Information Science and Technology of Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
|
137
|
Dirks I, Keyaerts M, Neyns B, Vandemeulebroucke J. Computer-aided detection and segmentation of malignant melanoma lesions on whole-body 18F-FDG PET/CT using an interpretable deep learning approach. Comput Methods Programs Biomed 2022; 221:106902. [PMID: 35636357 DOI: 10.1016/j.cmpb.2022.106902] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/10/2021] [Revised: 04/27/2022] [Accepted: 05/21/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE In oncology, 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET) / computed tomography (CT) is widely used to identify and analyse metabolically-active tumours. The combination of the high sensitivity and specificity from 18F-FDG PET and the high resolution from CT makes accurate assessment of disease status and treatment response possible. Since cancer is a systemic disease, whole-body imaging is of high interest. Moreover, whole-body metabolic tumour burden is emerging as a promising new biomarker predicting outcome for innovative immunotherapy in different tumour types. However, this comes with certain challenges, such as the large amount of data for manual reading, the varying appearance of lesions across the body, and cumbersome reporting, hampering its use in clinical routine. Automation of the reading can facilitate the process, maximise the information retrieved from the images, and support clinicians in making treatment decisions. METHODS This work proposes a fully automated system for lesion detection and segmentation on whole-body 18F-FDG PET/CT. The novelty of the method stems from adopting the same two-step approach used when manually reading the images: an intensity-based thresholding on PET, followed by a classification that specifies which regions represent normal physiological uptake and which represent malignant tissue. The dataset contained 69 patients treated for malignant melanoma. Baseline and follow-up scans together offered 267 images for training and testing. RESULTS On an unseen dataset of 53 PET/CT images, a median F1-score of 0.7500 was achieved with, on average, 1.566 false positive lesions per scan. Metabolically-active tumours were segmented with a median Dice score of 0.8493 and an absolute volume difference of 0.2986 ml. CONCLUSIONS The proposed fully automated method for the segmentation and detection of metabolically-active lesions on whole-body 18F-FDG PET/CT achieved competitive results. Moreover, it was compared to a direct segmentation approach, which it outperformed on all metrics.
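The first of the two steps, thresholding the PET image and keeping connected components as lesion candidates, can be sketched with SciPy's labeling tools; the threshold value and minimum component size below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy import ndimage

def detect_candidates(pet, threshold=2.5, min_voxels=5):
    """Threshold the image, label connected components, and return the
    centroids of components large enough to be lesion candidates."""
    labels, n = ndimage.label(pet >= threshold)
    return [ndimage.center_of_mass(labels == i)
            for i in range(1, n + 1)
            if (labels == i).sum() >= min_voxels]   # drop speckle noise

pet = np.random.rand(32, 32, 32) * 2.0
pet[10:13, 10:13, 10:13] = 6.0          # synthetic hot spot
print(detect_candidates(pet))           # ≈ [(11.0, 11.0, 11.0)]
```

A second stage, as described above, would then classify each candidate as physiological or malignant uptake.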
Affiliation(s)
- Ine Dirks: Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), Brussels, Belgium; imec, Leuven, Belgium
- Marleen Keyaerts: Vrije Universiteit Brussel (VUB), Universitair Ziekenhuis Brussel (UZ Brussel), Department of Nuclear Medicine, Brussels, Belgium
- Bart Neyns: Vrije Universiteit Brussel (VUB), Universitair Ziekenhuis Brussel (UZ Brussel), Department of Medical Oncology, Brussels, Belgium
- Jef Vandemeulebroucke: Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), Brussels, Belgium; imec, Leuven, Belgium; Vrije Universiteit Brussel (VUB), Universitair Ziekenhuis Brussel (UZ Brussel), Department of Radiology, Brussels, Belgium
|
138
|
Wang T, Xing H, Li Y, Wang S, Liu L, Li F, Jing H. Deep learning-based automated segmentation of eight brain anatomical regions using head CT images in PET/CT. BMC Med Imaging 2022; 22:99. [PMID: 35614382 PMCID: PMC9134669 DOI: 10.1186/s12880-022-00807-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2021] [Accepted: 04/18/2022] [Indexed: 11/28/2022] Open
Abstract
Objective We aim to propose a deep learning-based method for automated segmentation of eight brain anatomical regions in head computed tomography (CT) images obtained during positron emission tomography/computed tomography (PET/CT) scans. The brain regions include the basal ganglia, cerebellum, hemisphere, and hippocampus, each split into left and right. Materials and methods We enrolled patients who underwent both PET/CT imaging (with an extra head CT scan) and magnetic resonance imaging (MRI). The segmentation of the eight brain regions in CT was achieved using convolutional neural networks (CNNs): DenseVNet and 3D U-Net. The same segmentation task in MRI was performed using BrainSuite13, a public atlas-based labeling method. Mean Dice scores were used to assess the performance of the CNNs. Then, the agreement and correlation of the volumes of the eight segmented brain regions between the CT and MRI methods were analyzed. Results Eighteen patients were enrolled. Four of the eight brain regions obtained high mean Dice scores (> 0.90): left (0.978) and right (0.912) basal ganglia and left (0.945) and right (0.960) hemisphere. Regarding the agreement and correlation of the brain region volumes between the two methods, moderate agreement was observed for the left (ICC: 0.618, 95% CI 0.242, 0.835) and right (ICC: 0.654, 95% CI 0.298, 0.853) hemisphere. Poor agreement was observed for the other regions. A moderate correlation was observed for the right hemisphere (Spearman’s rho 0.68, p = 0.0019). Lower correlations were observed for the other regions. Conclusions The proposed deep learning-based method performed automated segmentation of eight brain anatomical regions on head CT imaging in PET/CT. Some regions obtained high mean Dice scores, and the agreement and correlation of the segmented region volumes between the two methods were moderate to poor.
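Volume-agreement statistics such as the reported Spearman's rho are straightforward to reproduce with SciPy given per-patient volumes from the two methods; the numbers below are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient volumes (ml) of one region from the two methods.
vol_ct = np.array([512.3, 498.7, 530.1, 505.9, 520.4, 489.2])
vol_mri = np.array([507.8, 501.2, 526.5, 498.3, 531.0, 480.6])

rho, p = stats.spearmanr(vol_ct, vol_mri)
print(f"Spearman's rho = {rho:.2f}, p = {p:.4f}")
```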
Affiliation(s)
- Tong Wang: Department of Nuclear Medicine, Peking Union Medical College Hospital, Beijing, China
- Haiqun Xing: Department of Nuclear Medicine, Peking Union Medical College Hospital, Beijing, China
- Yige Li: GE Healthcare China, Shanghai, China
- Ling Liu: GE Healthcare China, Shanghai, China
- Fang Li: Department of Nuclear Medicine, Peking Union Medical College Hospital, Beijing, China
- Hongli Jing: Department of Nuclear Medicine, Peking Union Medical College Hospital, Beijing, China
|
139
|
Chouhan MD, Taylor SA, Bhagwanani A, Munday C, Pinnock MA, Parry T, Hu Y, Barratt D, Yu D, Mookerjee RP, Halligan S, Mallett S. Imaging features for the prediction of clinical endpoints in chronic liver disease: a scoping review protocol. BMJ Open 2022; 12:e053204. [PMID: 35501093 PMCID: PMC9062789 DOI: 10.1136/bmjopen-2021-053204] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Accepted: 04/08/2022] [Indexed: 11/04/2022] Open
Abstract
INTRODUCTION Chronic liver disease is a growing cause of morbidity and mortality in the UK. Acute presentation with advanced disease is common, and prioritisation of resources to those at highest risk at earlier disease stages is essential to improving patient outcomes. Existing prognostic tools are of limited accuracy, and to date no imaging-based tools are used in clinical practice, despite multiple anatomical imaging features that worsen with disease severity. In this paper, we outline our scoping review protocol that aims to provide an overview of existing prognostic factors and models linking anatomical imaging features with clinical endpoints in chronic liver disease. This will provide a summary of the number, type and methods used by existing imaging feature-based prognostic studies and indicate if there are sufficient studies to justify future systematic reviews. METHODS AND ANALYSIS The protocol was developed in accordance with existing scoping review guidelines. Searches of MEDLINE and Embase will be conducted on OvidSP using titles, abstracts and Medical Subject Headings, restricted to publications after 1980 to ensure the relevance of imaging methods. Initial screening will be undertaken by two independent reviewers. Full-text data extraction will be undertaken by three pretrained reviewers who have participated in a group data extraction session to ensure reviewer consensus and reduce inter-rater variability. Where needed, data extraction queries will be resolved by reviewer team discussion. Reporting of results will be based on grouping of related factors and their cumulative frequencies. Prognostic anatomical imaging features and clinical endpoints will be reported using descriptive statistics to summarise the number of studies, study characteristics and the statistical methods used. ETHICS AND DISSEMINATION Ethical approval is not required as this study is based on previously published work. Findings will be disseminated by peer-reviewed publication and/or conference presentations.
Affiliation(s)
- Anisha Bhagwanani: Imaging Department, University College London Hospitals NHS Foundation Trust, London, UK
- Charlotte Munday: Department of Imaging, Royal Free London NHS Foundation Trust, London, UK
- Tom Parry: UCL Centre for Medical Imaging, UCL, London, UK
- Yipeng Hu: UCL Centre for Medical Image Computing, UCL, London, UK
- Dean Barratt: UCL Centre for Medical Image Computing, UCL, London, UK
- Dominic Yu: Department of Imaging, Royal Free London NHS Foundation Trust, London, UK
- Sue Mallett: UCL Centre for Medical Imaging, UCL, London, UK
|
140
|
Dai W, Li X, Chiu WHK, Kuo MD, Cheng KT. Adaptive Contrast for Image Regression in Computer-Aided Disease Assessment. IEEE Trans Med Imaging 2022; 41:1255-1268. [PMID: 34941504 DOI: 10.1109/tmi.2021.3137854] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Image regression tasks for medical applications, such as bone mineral density (BMD) estimation and left-ventricular ejection fraction (LVEF) prediction, play an important role in computer-aided disease assessment. Most deep regression methods train the neural network with a single regression loss function like MSE or L1 loss. In this paper, we propose the first contrastive learning framework for deep image regression, namely AdaCon, which consists of a feature learning branch via a novel adaptive-margin contrastive loss and a regression prediction branch. Our method incorporates label distance relationships as part of the learned feature representations, which allows for better performance in downstream regression tasks. Moreover, it can be used as a plug-and-play module to improve performance of existing regression methods. We demonstrate the effectiveness of AdaCon on two medical image regression tasks, i.e., bone mineral density estimation from X-ray images and left-ventricular ejection fraction prediction from echocardiogram videos. AdaCon leads to relative improvements of 3.3% and 5.9% in MAE over state-of-the-art BMD estimation and LVEF prediction methods, respectively.
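To make the adaptive-margin idea concrete, here is a toy contrastive loss in which the margin separating a pair of samples in feature space grows with their label distance; this is a schematic reading of the concept, not the paper's AdaCon formulation, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def adaptive_margin_contrastive(features, labels, scale=1.0):
    """Push pairs of embeddings apart by a margin proportional to their
    label distance, so the feature space inherits the label ordering."""
    feats = F.normalize(features, dim=1)
    fdist = torch.cdist(feats, feats)                      # feature distances
    ldist = torch.cdist(labels[:, None], labels[:, None])  # label distances
    margin = scale * ldist / (ldist.max() + 1e-8)          # adaptive margin
    loss = F.relu(margin - fdist)      # penalty when pairs sit too close
    n = len(labels)
    return loss.sum() / (n * (n - 1))

feats = torch.randn(8, 16, requires_grad=True)
labels = torch.rand(8)                 # e.g. bone mineral density values
print(adaptive_margin_contrastive(feats, labels))
```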
|
141
|
Automated MSCT Analysis for Planning Left Atrial Appendage Occlusion Using Artificial Intelligence. J Interv Cardiol 2022; 2022:5797431. [PMID: 35571991 PMCID: PMC9068333 DOI: 10.1155/2022/5797431] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Accepted: 03/29/2022] [Indexed: 11/20/2022] Open
Abstract
Background The number of multislice computed tomography (MSCT) analyses performed for planning structural heart interventions is rapidly increasing. Further automation is required to save time, increase standardization, and reduce the learning curve. Objective The purpose of this study was to investigate the feasibility of a fully automated artificial intelligence (AI)-based MSCT analysis for planning structural heart interventions, focusing on left atrial appendage occlusion (LAAO) as the selected use case. Methods Different deep learning models were trained, validated, and tested using a cohort of 583 patients for which manually annotated data were available. These models were used independently or in combination to detect the anatomical ostium, the landing zone, the mitral valve annulus, and the fossa ovalis and to segment the left atrium (LA) and left atrial appendage (LAA). The accuracy of the models was evaluated through comparison with the manually annotated data. Results The automated analysis was performed on 25 randomly selected patients of the test cohort. The results were compared to the manually identified landmarks. The predicted segmentation of the LA(A) was similar to the manual segmentation (dice score of 0.94 ± 0.02). The difference between the automatically predicted and manually measured perimeter-based diameter was −0.8 ± 1.3 mm (anatomical ostium), −1.0 ± 1.5 mm (Amulet landing zone), and −0.1 ± 1.3 mm (Watchman FLX landing zone), which is similar to the operator variability on these measurements. Finally, the detected mitral valve annulus and fossa ovalis were close to the manual detection of these landmarks, as shown by the Hausdorff distance (3.9 ± 1.2 mm and 4.8 ± 1.8 mm, respectively). The average runtime of the complete workflow, including data pre- and postprocessing, was 57.5 ± 34.5 seconds. Conclusions A fast and accurate AI-based workflow is proposed to automatically analyze MSCT images for planning LAAO. The approach, which can be easily extended toward other structural heart interventions, may help to handle the rapidly increasing volumes of patients.
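The Hausdorff distance used above to compare detected and manual landmarks measures the worst-case disagreement between two point sets; SciPy provides the directed variant, from which the symmetric distance follows. The point clouds below are synthetic.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Synthetic 3D point clouds sampled from two delineations of a landmark (mm).
auto = np.random.rand(200, 3) * 40.0
manual = auto + np.random.normal(scale=1.5, size=auto.shape)

# Symmetric Hausdorff distance: the larger of the two directed distances.
hd = max(directed_hausdorff(auto, manual)[0],
         directed_hausdorff(manual, auto)[0])
print(f"Hausdorff distance ≈ {hd:.1f} mm")
```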
|
142
|
Ranjbar S, Singleton KW, Curtin L, Rickertsen CR, Paulson LE, Hu LS, Mitchell JR, Swanson KR. Weakly Supervised Skull Stripping of Magnetic Resonance Imaging of Brain Tumor Patients. Front Neuroimaging 2022; 1:832512. [PMID: 37555156 PMCID: PMC10406204 DOI: 10.3389/fnimg.2022.832512] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Accepted: 02/21/2022] [Indexed: 08/10/2023]
Abstract
Automatic brain tumor segmentation is particularly challenging on magnetic resonance imaging (MRI) with marked pathologies, such as brain tumors, which usually cause large displacement, abnormal appearance, and deformation of brain tissue. Despite an abundance of previous literature on learning-based methodologies for MRI segmentation, few works have focused on tackling MRI skull stripping of brain tumor patient data. This gap in the literature can be associated with the lack of publicly available data (due to concerns about patient identification) and the labor-intensive nature of generating ground truth labels for model training. In this retrospective study, we assessed the performance of Dense-Vnet in skull stripping brain tumor patient MRI, trained on our large multi-institutional brain tumor patient dataset. Our data included pretreatment MRI of 668 patients from our in-house institutional review board-approved multi-institutional brain tumor repository. In the absence of ground truth, we used imperfect training labels generated automatically with SPM12 software. We trained the network using common MRI sequences in oncology: T1-weighted with gadolinium contrast, T2-weighted fluid-attenuated inversion recovery, or both. We measured model performance against 30 independent brain tumor test cases with available manual brain masks. All images were harmonized for voxel spacing and volumetric dimensions before model training. Model training was performed using the modularly structured deep learning platform NiftyNet, which is tailored toward simplifying medical image analysis. Our results demonstrate the success of a weakly supervised deep learning approach for MRI brain extraction even in the presence of pathology. Our best model achieved an average Dice score, sensitivity, and specificity of, respectively, 94.5, 96.4, and 98.5% on the multi-institutional independent brain tumor test set. To further contextualize our results within the existing literature on healthy brain segmentation, we tested the model against healthy subjects from the benchmark LBPA40 dataset. For this dataset, the model achieved an average Dice score, sensitivity, and specificity of 96.2, 96.6, and 99.2%, which, although comparable to other publications, are slightly lower than the performance of models trained on healthy patients. We associate this drop in performance with the use of brain tumor data for model training and its influence on brain appearance.
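The reported Dice score, sensitivity, and specificity are all voxel-wise confusion-matrix quantities; a minimal sketch for the latter two on binary brain masks is shown below with synthetic data.

```python
import numpy as np

def sens_spec(pred, ref):
    """Voxel-wise sensitivity and specificity of a binary mask."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    tn = np.logical_and(~pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    return tp / (tp + fn), tn / (tn + fp)

ref = np.zeros((64, 64, 64), dtype=bool); ref[16:48, 16:48, 16:48] = True
pred = np.roll(ref, 2, axis=0)          # a slightly shifted prediction
sens, spec = sens_spec(pred, ref)
print(f"sensitivity = {sens:.3f}, specificity = {spec:.3f}")
```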
Affiliation(s)
- Sara Ranjbar: Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Kyle W. Singleton: Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Lee Curtin: Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Cassandra R. Rickertsen: Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Lisa E. Paulson: Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Leland S. Hu: Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States; Department of Diagnostic Imaging and Interventional Radiology, Mayo Clinic, Phoenix, AZ, United States
- Joseph Ross Mitchell: Department of Medicine, Faculty of Medicine & Dentistry and the Alberta Machine Intelligence Institute, University of Alberta, Edmonton, AB, Canada; Provincial Clinical Excellence Portfolio, Alberta Health Services, Edmonton, AB, Canada
- Kristin R. Swanson: Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
|
143
|
Artificial intelligence in gastrointestinal and hepatic imaging: past, present and future scopes. Clin Imaging 2022; 87:43-53. [DOI: 10.1016/j.clinimag.2022.04.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2021] [Revised: 03/09/2022] [Accepted: 04/11/2022] [Indexed: 11/19/2022]
|
144
|
Sharobeem S, Le Breton H, Lalys F, Lederlin M, Lagorce C, Bedossa M, Boulmier D, Leurent G, Haigron P, Auffret V. Validation of a Whole Heart Segmentation from Computed Tomography Imaging Using a Deep-Learning Approach. J Cardiovasc Transl Res 2022; 15:427-437. [PMID: 34448116 DOI: 10.1007/s12265-021-10166-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/19/2021] [Accepted: 08/09/2021] [Indexed: 11/28/2022]
Abstract
The aim of this study is to develop an automated deep-learning-based whole heart segmentation (WHS) of ECG-gated computed tomography data. After 21 exclusions, CT scans acquired before transcatheter aortic valve implantation in 71 patients were reviewed and randomly split into a training (n = 55 patients), validation (n = 8 patients), and test set (n = 8 patients). A fully automatic deep-learning method combining two convolutional neural networks performed segmentation of 10 cardiovascular structures, which was compared with the manually segmented reference by the Dice index. Correlations and agreement between myocardial volumes and mass were assessed. The algorithm demonstrated high accuracy (Dice score = 0.920; interquartile range: 0.906-0.925) and a low computing time (13.4 s, range 11.9-14.9). Correlations and agreement of volumes and mass were satisfactory for most structures. Six of ten structures were well segmented. The deep-learning-based method allowed automated WHS from ECG-gated CT data with high accuracy. Challenges remain in improving the segmentation of right-sided structures and achieving daily clinical application.
Affiliation(s)
- Sam Sharobeem: LTSI - UMR 1099, Inserm, CHU Rennes, Univ Rennes, 35000, Rennes, France; Service de Cardiologie, CHU Rennes, 35000, Rennes, France
- Hervé Le Breton: LTSI - UMR 1099, Inserm, CHU Rennes, Univ Rennes, 35000, Rennes, France; Service de Cardiologie, CHU Rennes, 35000, Rennes, France
- Mathieu Lederlin: LTSI - UMR 1099, Inserm, CHU Rennes, Univ Rennes, 35000, Rennes, France; Service de Radiologie, CHU Rennes, 35000, Rennes, France
- Marc Bedossa: Service de Cardiologie, CHU Rennes, 35000, Rennes, France
- Dominique Boulmier: LTSI - UMR 1099, Inserm, CHU Rennes, Univ Rennes, 35000, Rennes, France; Service de Cardiologie, CHU Rennes, 35000, Rennes, France
- Pascal Haigron: LTSI - UMR 1099, Inserm, CHU Rennes, Univ Rennes, 35000, Rennes, France
- Vincent Auffret: LTSI - UMR 1099, Inserm, CHU Rennes, Univ Rennes, 35000, Rennes, France; Service de Cardiologie, CHU Rennes, 35000, Rennes, France; Service de Cardiologie, CHU Pontchaillou, 2 rue Henri Le Guilloux, 35000, Rennes, France
|
145
|
Li A, Zhang X, Singla J, White K, Loconte V, Hu C, Zhang C, Li S, Li W, Francis JP, Wang C, Sali A, Sun L, He X, Stevens RC. Auto-segmentation and time-dependent systematic analysis of mesoscale cellular structure in β-cells during insulin secretion. PLoS One 2022; 17:e0265567. [PMID: 35324950 PMCID: PMC8947144 DOI: 10.1371/journal.pone.0265567] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Accepted: 03/03/2022] [Indexed: 02/07/2023] Open
Abstract
The mesoscale description of subcellular organization informs on cellular mechanisms in disease states. However, applications of soft X-ray tomography (SXT), an important approach for characterizing organelle organization, are limited by labor-intensive manual segmentation. Here we report a pipeline for automated segmentation and systematic analysis of SXT tomograms. Our approach combines semantic segmentation with instance segmentation (applied here for the first time) to produce separate organelle masks with high Dice and recall indexes, followed by analysis of organelle localization based on the radial distribution function. We demonstrated this technique by investigating the organization of INS-1E pancreatic β-cells under different treatments at multiple time points. Consistent with a previous analysis of a similar dataset, our results revealed the impact of glucose stimulation on the localization and molecular density of insulin vesicles and mitochondria. This pipeline can be extended to SXT tomograms of any cell type to shed light on subcellular rearrangements under different drug treatments.
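A radial distribution profile of the kind used here to describe organelle localization can be approximated by binning organelle-to-reference distances and normalising by shell volume. The snippet below is a generic sketch with toy coordinates, not the authors' pipeline.

```python
import numpy as np

def radial_distribution(points, center, r_max=10.0, bins=20):
    """Density of points in concentric shells around a reference point."""
    r = np.linalg.norm(points - center, axis=1)
    edges = np.linspace(0.0, r_max, bins + 1)
    counts, _ = np.histogram(r, bins=edges)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return 0.5 * (edges[:-1] + edges[1:]), counts / shell_vol

pts = np.random.normal(scale=3.0, size=(500, 3))  # toy vesicle centroids (um)
radii, density = radial_distribution(pts, center=np.zeros(3))
print(f"density peaks at r ≈ {radii[np.argmax(density)]:.1f} um")
```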
Collapse
Affiliation(s)
- Angdi Li
  - iHuman Institute, ShanghaiTech University, Shanghai, China
  - School of Life Science and Technology, ShanghaiTech University, Shanghai, China
  - University of Chinese Academy of Sciences, Beijing, China
- Xiangyi Zhang
  - School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Jitin Singla
  - Department of Biological Sciences, Bridge Institute, University of Southern California, Los Angeles, CA, United States of America
- Kate White
  - Department of Biological Sciences, Bridge Institute, University of Southern California, Los Angeles, CA, United States of America
- Valentina Loconte
  - iHuman Institute, ShanghaiTech University, Shanghai, China
  - School of Life Science and Technology, ShanghaiTech University, Shanghai, China
- Chuanyang Hu
  - School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Chuyu Zhang
  - School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Shuailin Li
  - School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Weimin Li
  - iHuman Institute, ShanghaiTech University, Shanghai, China
  - School of Life Science and Technology, ShanghaiTech University, Shanghai, China
  - University of Chinese Academy of Sciences, Beijing, China
- John Paul Francis
  - Department of Computer Science, Bridge Institute, USC Michelson Center for Convergent Bioscience, University of Southern California, Los Angeles, CA, United States of America
- Chenxi Wang
  - iHuman Institute, ShanghaiTech University, Shanghai, China
  - School of Life Science and Technology, ShanghaiTech University, Shanghai, China
  - University of Chinese Academy of Sciences, Beijing, China
- Andrej Sali
  - California Institute for Quantitative Biosciences, Department of Bioengineering and Therapeutic Sciences, Department of Pharmaceutical Chemistry, University of California, San Francisco, San Francisco, CA, United States of America
- Liping Sun
  - iHuman Institute, ShanghaiTech University, Shanghai, China
  - School of Life Science and Technology, ShanghaiTech University, Shanghai, China
- Xuming He
  - School of Information Science and Technology, ShanghaiTech University, Shanghai, China
  - Shanghai Engineering Research Center of Intelligent Vision and Imaging, Shanghai, China
- Raymond C. Stevens
  - iHuman Institute, ShanghaiTech University, Shanghai, China
  - School of Life Science and Technology, ShanghaiTech University, Shanghai, China
  - Department of Biological Sciences, Bridge Institute, University of Southern California, Los Angeles, CA, United States of America
Collapse
|
146
|
Amjad A, Xu J, Thill D, Lawton C, Hall W, Awan MJ, Shukla M, Erickson BA, Li XA. General and custom deep learning autosegmentation models for organs in head and neck, abdomen, and male pelvis. Med Phys 2022; 49:1686-1700. [PMID: 35094390 PMCID: PMC8917093 DOI: 10.1002/mp.15507] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Revised: 01/19/2022] [Accepted: 01/21/2022] [Indexed: 11/11/2022] Open
Abstract
PURPOSE To reduce workload and inconsistencies in organ segmentation for radiation treatment planning, we developed and evaluated general and custom autosegmentation models on computed tomography (CT) for three major tumor sites using a well-established deep convolutional neural network (DCNN). METHODS Five CT-based autosegmentation models for 42 organs at risk (OARs) in the head and neck (HN), abdomen (ABD), and male pelvis (MP) were developed using a fully three-dimensional (3D) DCNN architecture. Two types of deep learning (DL) models were trained separately, using either general diversified multi-institutional datasets or custom well-controlled single-institution datasets. To improve segmentation accuracy, an adaptive spatial resolution approach for small and/or narrow OARs and a pseudo scan-extension approach for cases in which the CT scan length is too short to cover entire organs were implemented. The performance of the obtained models was evaluated on the accuracy and clinical applicability of the autosegmented contours using qualitative visual inspection and quantitative calculation of the dice similarity coefficient (DSC), mean distance to agreement (MDA), and time efficiency. RESULTS The five DL autosegmentation models developed for the three anatomical sites showed high accuracy (DSC ranging from 0.8 to 0.98) for 74% of the OARs and marginally acceptable accuracy for the remaining 26%. The custom models performed slightly better than the general models, even though smaller custom datasets were used for custom model training. The organ-based approaches improved autosegmentation accuracy for small or complex organs (e.g., eye lens, optic nerves, inner ears, and bowels). Compared with traditional manual contouring times, autosegmentation times, including subsequent manual editing when necessary, were substantially reduced: by 88% for the MP, 80% for the HN, and 65% for the ABD models. CONCLUSIONS The obtained autosegmentation models, incorporating organ-based approaches, were found to be effective and accurate for most OARs in the male pelvis, head and neck, and abdomen. We have demonstrated that our multianatomical DL autosegmentation models are clinically useful for radiation treatment planning.
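The two quantitative metrics cited, DSC and mean distance to agreement (MDA), can both be computed from binary masks. The sketch below implements MDA as a symmetric mean surface distance using Euclidean distance transforms; this is one common definition and may differ in detail from the authors' evaluation code (the names `surface_voxels` and `mean_distance_to_agreement` are invented for illustration):

```python
import numpy as np
from scipy import ndimage

def surface_voxels(mask):
    """Boolean map of voxels on the boundary of a binary mask."""
    m = mask.astype(bool)
    return m & ~ndimage.binary_erosion(m)

def mean_distance_to_agreement(auto, manual, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface distance (in mm, given voxel spacing in mm)."""
    sa, sm = surface_voxels(auto), surface_voxels(manual)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    d_to_manual = ndimage.distance_transform_edt(~sm, sampling=spacing)
    d_to_auto = ndimage.distance_transform_edt(~sa, sampling=spacing)
    return 0.5 * (d_to_manual[sa].mean() + d_to_auto[sm].mean())
```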
Collapse
Affiliation(s)
- Asma Amjad
  - Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
- Colleen Lawton
  - Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
- William Hall
  - Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
- Musaddiq J. Awan
  - Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
- Monica Shukla
  - Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
- Beth A. Erickson
  - Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
- X. Allen Li
  - Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
Collapse
|
147
|
Cao Y, Vassantachart A, Ragab O, Bian S, Mitra P, Xu Z, Gallogly AZ, Cui J, Shen ZL, Balik S, Gribble M, Chang EL, Fan Z, Yang W. Automatic segmentation of high-risk clinical target volume for tandem-and-ovoids brachytherapy patients using an asymmetric dual-path convolutional neural network. Med Phys 2022; 49:1712-1722. [PMID: 35080018 PMCID: PMC9170543 DOI: 10.1002/mp.15490] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Revised: 01/06/2022] [Accepted: 01/09/2022] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Preimplant diagnostic magnetic resonance imaging is the gold standard for image-guided tandem-and-ovoids (T&O) brachytherapy for cervical cancer. However, high-dose-rate brachytherapy planning is typically done on the postimplant CT-based high-risk clinical target volume (HR-CTV_CT) because transferring the preimplant magnetic resonance (MR)-based HR-CTV (HR-CTV_MR) to the postimplant planning CT is difficult due to anatomical changes caused by applicator insertion, vaginal packing, and the filling status of the bladder and rectum. This study aims to train a dual-path convolutional neural network (CNN) for automatic segmentation of HR-CTV_CT on postimplant planning CT with guidance from preimplant diagnostic MR. METHODS Preimplant T2-weighted MR and postimplant CT images for 65 patients (48 for training, eight for validation, and nine for testing) were retrospectively solicited from our institutional database. MR was aligned to the corresponding CT using rigid registration. HR-CTV_CT and HR-CTV_MR were manually contoured on CT and MR by an experienced radiation oncologist. All images were then resampled to a spatial resolution of 0.5 × 0.5 × 1.25 mm. A dual-path 3D asymmetric CNN architecture with two encoding paths was built to extract CT and MR image features. The MR was masked by the HR-CTV_MR contour, while the entire CT volume was included. The network applied an asymmetric weighting of 18:6 for CT:MR. The voxel-based dice similarity coefficient (DSC_V), sensitivity, precision, and 95% Hausdorff distance (95-HD) were used to evaluate model performance. Cross-validation was performed to assess model stability. The study cohort was divided into a small tumor group (<20 cc), a medium tumor group (20-40 cc), and a large tumor group (>40 cc) based on HR-CTV_CT for model evaluation. Single-path CNN models were trained with the same parameters as those in the dual-path models. RESULTS For this patient cohort, the dual-path CNN model improved each of our objective findings, including DSC_V, sensitivity, and precision, with average improvements of 8%, 7%, and 12%, respectively. The 95-HD was improved by an average of 1.65 mm compared with the single-path model using only CT images as input. In addition, the areas under the curve for the different networks were 0.86 (dual-path with CT and MR) and 0.80 (single-path with CT), respectively. The dual-path CNN model with asymmetric weighting achieved the best performance, with DSC_V of 0.65 ± 0.03 (0.61-0.70), 0.79 ± 0.02 (0.74-0.85), and 0.75 ± 0.04 (0.68-0.79) for the small, medium, and large tumor groups. The 95-HD values were 7.34 (5.35-10.45) mm, 5.48 (3.21-8.43) mm, and 6.21 (5.34-9.32) mm for the three size groups, respectively. CONCLUSIONS An asymmetric CNN model with two encoding paths from preimplant MR (masked by HR-CTV_MR) and postimplant CT images was successfully developed for automatic segmentation of HR-CTV_CT for T&O brachytherapy patients.
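A toy illustration of the dual-encoder idea with an 18:6 CT:MR channel budget is sketched below in PyTorch. The class `DualPathEncoder`, the layer choices, depths, and fusion point are placeholders invented for illustration, not the published architecture:

```python
import torch
import torch.nn as nn

class DualPathEncoder(nn.Module):
    """Toy dual-encoder block: separate CT and MR paths whose feature maps are
    concatenated with an asymmetric 18:6 channel budget, as in the abstract."""
    def __init__(self, ct_channels=18, mr_channels=6):
        super().__init__()
        self.ct_path = nn.Sequential(
            nn.Conv3d(1, ct_channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(ct_channels), nn.ReLU(inplace=True))
        self.mr_path = nn.Sequential(
            nn.Conv3d(1, mr_channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(mr_channels), nn.ReLU(inplace=True))

    def forward(self, ct, mr_masked):
        # mr_masked is the MR volume already zeroed outside the HR-CTV_MR contour.
        return torch.cat([self.ct_path(ct), self.mr_path(mr_masked)], dim=1)

# x_ct, x_mr: (batch, 1, D, H, W) volumes resampled to a common grid.
x_ct = torch.randn(1, 1, 32, 64, 64)
x_mr = torch.randn(1, 1, 32, 64, 64)
features = DualPathEncoder()(x_ct, x_mr)   # shape (1, 24, 32, 64, 64)
```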
Collapse
Affiliation(s)
- Yufeng Cao
  - Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- April Vassantachart
  - Department of Radiation Oncology, LAC+USC Medical Center, Los Angeles, California, USA
- Omar Ragab
  - Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Shelly Bian
  - Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Priya Mitra
  - Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Zhengzheng Xu
  - Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Audrey Zhuang Gallogly
  - Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Jing Cui
  - Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Zhilei Liu Shen
  - Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Salim Balik
  - Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Michael Gribble
  - Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Eric L. Chang
  - Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Zhaoyang Fan
  - Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
  - Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Wensha Yang
  - Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
Collapse
|
148
|
Pan W, Liu Z, Song W, Zhen X, Yuan K, Xu F, Lin GN. An Integrative Segmentation Framework for Cell Nucleus of Fluorescence Microscopy. Genes (Basel) 2022; 13:genes13030431. [PMID: 35327985 PMCID: PMC8950038 DOI: 10.3390/genes13030431] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Revised: 02/22/2022] [Accepted: 02/23/2022] [Indexed: 01/27/2023] Open
Abstract
Nucleus segmentation in fluorescence microscopy is a critical step in quantitative measurements for cell biology. Automatic and accurate nucleus segmentation has powerful applications in analyzing the intrinsic characteristics of nucleus morphology. However, existing methods have limited capacity to perform accurate segmentation on challenging samples, such as noisy images and clumped nuclei. In this paper, inspired by the idea of the cascaded U-Net (or W-Net) and its remarkable performance improvements in medical image segmentation, we proposed a novel framework called the Attention-enhanced Simplified W-Net (ASW-Net), which uses a cascade-like structure with between-net connections. Results showed that this lightweight model reached remarkable segmentation performance on the BBBC039 test set (aggregated Jaccard index, 0.90) and outperformed state-of-the-art methods. Moreover, we further explored the effectiveness of the designed network by visualizing its deep features. Notably, our proposed framework is open source.
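The cascade-with-between-net-connections idea can be outlined as two stacked sub-networks in which the second consumes the first's intermediate features as well as its probability map. The PyTorch sketch below is schematic: single-resolution convolutions stand in for full U-Net encoders/decoders, and the class name `CascadedWNet` and all sizes are invented, not taken from the ASW-Net code:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class CascadedWNet(nn.Module):
    """Minimal W-Net-style cascade: net 2 sees the raw image, net 1's probability
    map, and net 1's intermediate features (a 'between-net' connection)."""
    def __init__(self, feat=16):
        super().__init__()
        self.enc1, self.dec1 = conv_block(1, feat), conv_block(feat, feat)
        self.head1 = nn.Conv2d(feat, 1, 1)
        # Second net consumes image + probability map + forwarded features.
        self.enc2, self.dec2 = conv_block(2 + feat, feat), conv_block(feat, feat)
        self.head2 = nn.Conv2d(feat, 1, 1)

    def forward(self, x):
        f1 = self.dec1(self.enc1(x))
        p1 = torch.sigmoid(self.head1(f1))
        f2 = self.dec2(self.enc2(torch.cat([x, p1, f1], dim=1)))
        return torch.sigmoid(self.head2(f2))

out = CascadedWNet()(torch.randn(1, 1, 128, 128))  # refined nucleus probability map
```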
Collapse
Affiliation(s)
- Weihao Pan
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Zhe Liu
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Weichen Song
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Xuyang Zhen
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Kai Yuan
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Fei Xu
  - State Key Laboratory of Functional Materials for Informatics, Shanghai Institute of Microsystem and Information Technology (SIMIT), Chinese Academy of Sciences, Shanghai 200050, China
  - College of Science, Donghua University, Shanghai 201620, China
  - Correspondence: (F.X.); (G.N.L.)
- Guan Ning Lin
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
  - Correspondence: (F.X.); (G.N.L.)
Collapse
|
149
|
Mahmoudi T, Kouzahkanan ZM, Radmard AR, Kafieh R, Salehnia A, Davarpanah AH, Arabalibeik H, Ahmadian A. Segmentation of pancreatic ductal adenocarcinoma (PDAC) and surrounding vessels in CT images using deep convolutional neural networks and texture descriptors. Sci Rep 2022; 12:3092. [PMID: 35197542 PMCID: PMC8866432 DOI: 10.1038/s41598-022-07111-9] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Accepted: 02/14/2022] [Indexed: 12/13/2022] Open
Abstract
Fully automated and volumetric segmentation of critical tumors may play a crucial role in diagnosis and surgical planning. One of the most challenging tumor segmentation tasks is localization of pancreatic ductal adenocarcinoma (PDAC), and exclusive application of conventional methods does not appear promising. Deep learning approaches have achieved great success in computer-aided diagnosis, especially in biomedical image segmentation. This paper introduces a convolutional neural network (CNN)-based framework for segmentation of the PDAC mass and surrounding vessels in CT images that also incorporates powerful classic features. First, a 3D-CNN architecture is used to localize the pancreas region from the whole CT volume using the 3D Local Binary Pattern (LBP) map of the original image. Segmentation of the PDAC mass is subsequently performed using a 2D attention U-Net and a Texture Attention U-Net (TAU-Net). TAU-Net is introduced by fusing dense Scale-Invariant Feature Transform (SIFT) and LBP descriptors into the attention U-Net. An ensemble model is then used to combine the advantages of both networks using a 3D-CNN. In addition, to reduce the effects of imbalanced data, a multi-objective loss function is proposed as a weighted combination of three classic losses: Generalized Dice Loss (GDL), Weighted Pixel-Wise Cross-Entropy loss (WPCE), and boundary loss. Because the sample size was insufficient for vessel segmentation, we used the above-mentioned pretrained networks and fine-tuned them. Experimental results show that the proposed method improves the Dice score for PDAC mass segmentation in the portal-venous phase by 7.52% compared with state-of-the-art methods. In addition, three-dimensional visualization of the tumor and surrounding vessels can facilitate the evaluation of PDAC treatment response.
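The multi-objective loss is described as a weighted combination of GDL, WPCE, and a boundary loss. One plausible reading is sketched below in PyTorch; the function names, the loss weights `w_gdl`/`w_wpce`/`w_bd`, and the use of a precomputed signed-distance map for the boundary term are assumptions, not the authors' published settings:

```python
import torch
import torch.nn.functional as F

def generalized_dice_loss(prob, target, eps=1e-6):
    """Binary generalized Dice loss; prob and target are (B, 1, H, W) in [0, 1]."""
    w = 1.0 / (target.sum(dim=(2, 3)) ** 2 + eps)   # inverse-volume class weighting
    inter = w * (prob * target).sum(dim=(2, 3))
    union = w * (prob + target).sum(dim=(2, 3))
    return 1.0 - 2.0 * inter.sum() / (union.sum() + eps)

def boundary_loss(prob, signed_dist):
    """Boundary loss: foreground probability weighted by a precomputed
    signed distance map of the reference contour."""
    return (prob * signed_dist).mean()

def multi_objective_loss(prob, target, signed_dist, pos_weight=5.0,
                         w_gdl=1.0, w_wpce=1.0, w_bd=0.01):
    # Weighted pixel-wise cross-entropy: up-weight the (rare) foreground pixels.
    wpce = F.binary_cross_entropy(prob, target,
                                  weight=target * pos_weight + (1 - target))
    return (w_gdl * generalized_dice_loss(prob, target)
            + w_wpce * wpce + w_bd * boundary_loss(prob, signed_dist))
```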
Collapse
Affiliation(s)
- Tahereh Mahmoudi
  - Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
  - Department of Medical Physics and Biomedical Engineering, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Amir Reza Radmard
  - Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Raheleh Kafieh
  - Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
  - Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK
- Aneseh Salehnia
  - Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Amir H Davarpanah
  - Department of Radiology and Imaging Sciences, Emory University, School of Medicine, Atlanta, GA, USA
- Hossein Arabalibeik
  - Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
  - Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
- Alireza Ahmadian
  - Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
  - Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
Collapse
|
150
|
Li M, Lian F, Guo S. Multi-scale Selection and Multi-channel Fusion Model for Pancreas Segmentation Using Adversarial Deep Convolutional Nets. J Digit Imaging 2022; 35:47-55. [PMID: 34921356 PMCID: PMC8854512 DOI: 10.1007/s10278-021-00563-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Revised: 11/13/2021] [Accepted: 11/16/2021] [Indexed: 02/03/2023] Open
Abstract
Organ segmentation from medical imaging is vital to medical image analysis and disease diagnosis. However, the boundary shapes and sizes of target regions tend to be diverse and variable, and the frequent pooling operations in traditional segmentors cause a loss of spatial information that is valuable for segmentation. These issues pose challenges for accurate organ segmentation from medical imaging, particularly for organs with small volumes and variable shapes such as the pancreas. To offset this information loss, we propose a deep convolutional neural network (DCNN) named the multi-scale selection and multi-channel fusion segmentation model (MSC-DUnet) for pancreas segmentation. The proposed model contains three stages that collect detailed cues for accurate segmentation: (1) increasing the consistency between the distributions of the output probability maps from the segmentor and the original samples by involving an adversarial mechanism that can capture spatial distributions, (2) gathering global spatial features from several receptive fields via multi-scale field selection (MSFS), and (3) integrating multi-level features located at varying network positions through the multi-channel fusion module (MCFM). Experimental results on the NIH Pancreas-CT dataset show that the proposed MSC-DUnet outperforms the baseline network by a 5.1% improvement in the dice similarity coefficient (DSC), which indicates that MSC-DUnet has great potential for pancreas segmentation.
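The adversarial mechanism in stage (1) resembles standard adversarial segmentation training: a discriminator judges (image, mask) pairs, and the segmentor is additionally rewarded for fooling it. A hedged PyTorch sketch of one such training step is given below; `seg_net` (outputting probabilities), `disc` (outputting logits), and the loss weight `lam` are all hypothetical stand-ins, not the MSC-DUnet implementation:

```python
import torch
import torch.nn as nn

def adversarial_step(seg_net, disc, opt_seg, opt_disc, image, gt_mask,
                     bce=nn.BCEWithLogitsLoss(), lam=0.1):
    # --- Discriminator: real (manual) pairs vs. fake (predicted) pairs ---
    with torch.no_grad():
        pred = seg_net(image)
    d_real = disc(torch.cat([image, gt_mask], dim=1))
    d_fake = disc(torch.cat([image, pred], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # --- Segmentor: pixel-wise term plus a term that fools the discriminator ---
    pred = seg_net(image)
    seg_loss = nn.functional.binary_cross_entropy(pred, gt_mask)
    d_fake = disc(torch.cat([image, pred], dim=1))
    s_loss = seg_loss + lam * bce(d_fake, torch.ones_like(d_fake))
    opt_seg.zero_grad(); s_loss.backward(); opt_seg.step()
    return d_loss.item(), s_loss.item()
```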
Collapse
Affiliation(s)
- Meiyu Li
  - College of Electronic Science and Engineering, Jilin University, Changchun, 130012, China
- Fenghui Lian
  - School of Aviation Operations and Services, Air Force Aviation University, Changchun, 130000, China
- Shuxu Guo
  - College of Electronic Science and Engineering, Jilin University, Changchun, 130012, China
Collapse
|