151
Abstract
Artificial intelligence (AI) has illuminated a clear path towards an evolving health-care system replete with enhanced precision and computing capabilities. Medical imaging analysis can be strengthened by machine learning as the multidimensional data generated by imaging naturally lends itself to hierarchical classification. In this Review, we describe the role of machine intelligence in image-based endocrine cancer diagnostics. We first provide a brief overview of AI and consider its intuitive incorporation into the clinical workflow. We then discuss how AI can be applied for the characterization of adrenal, pancreatic, pituitary and thyroid masses in order to support clinicians in their diagnostic interpretations. This Review also puts forth a number of key evaluation criteria for machine learning in medicine that physicians can use in their appraisals of these algorithms. We identify mitigation strategies to address ongoing challenges around data availability and model interpretability in the context of endocrine cancer diagnosis. Finally, we delve into frontiers in systems integration for AI, discussing automated pipelines and evolving computing platforms that leverage distributed, decentralized and quantum techniques.
Affiliation(s)
- Ihab R Kamel
- Department of Imaging & Imaging Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Harrison X Bai
- Department of Imaging & Imaging Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
152
Jordan P, Adamson PM, Bhattbhatt V, Beriwal S, Shen S, Radermecker O, Bose S, Strain LS, Offe M, Fraley D, Principi S, Ye DH, Wang AS, Van Heteren J, Vo NJ, Schmidt TG. Pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert organ contours. Med Phys 2022;49:3523-3528. PMID: 35067940; PMCID: PMC9090951; DOI: 10.1002/mp.15485.
Abstract
PURPOSE Organ autosegmentation efforts to date have largely focused on adult populations, due to limited availability of pediatric training data. Pediatric patients may present additional challenges for organ segmentation. This paper describes a dataset of 359 pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert contours of up to 29 anatomical organ structures to aid in the evaluation and development of autosegmentation algorithms for pediatric CT imaging. ACQUISITION AND VALIDATION METHODS The dataset collection consists of axial CT images in DICOM format of 180 male and 179 female pediatric chest-abdomen-pelvis or abdomen-pelvis exams acquired from one of three CT scanners at Children's Wisconsin. The datasets represent random pediatric cases based upon routine clinical indications. Subjects ranged in age from 5 days to 16 years, with a mean age of seven years. The CT acquisition, contrast, and reconstruction protocols varied across the scanner models and patients, with specifications available in the DICOM headers. Expert contours were manually labeled for up to 29 organ structures per subject. Not all contours are available for all subjects, because of limited field of view or unreliable contouring caused by high noise. DATA FORMAT AND USAGE NOTES The data are available on TCIA (https://www.cancerimagingarchive.net/) under the collection Pediatric-CT-SEG. The axial CT image slices for each subject are available in DICOM format. The expert contours are stored in a single DICOM RTSTRUCT file for each subject. The contours are named as listed in Table 2 of the publication. POTENTIAL APPLICATIONS This dataset will enable the evaluation and development of organ autosegmentation algorithms for pediatric populations, which exhibit variations in organ shape and size across age. Automated organ segmentation from CT images has numerous applications including radiation therapy, diagnostic tasks, surgical planning, and patient-specific organ dose estimation.
Affiliation(s)
- Michael Offe
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI
- David Fraley
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI
- Sara Principi
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI
- Dong Hye Ye
- Department of Electrical Engineering, Marquette University, Milwaukee, WI
- Adam S Wang
- Department of Radiology, Stanford University, Stanford, CA
- Nghia-Jack Vo
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI
- Taly Gilat Schmidt
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI
153
Epistemic and aleatoric uncertainties reduction with rotation variation for medical image segmentation with ConvNets. SN Appl Sci 2022. DOI: 10.1007/s42452-022-04936-x.
Abstract
The deep convolutional neural network (ConvNet) achieves significant segmentation performance on medical images of various modalities. However, isolated errors on a large testing set with various tumor conditions are not acceptable in clinical practice. These errors are usually caused by inadequate training and by noise inherent in data collection, which are recognized as epistemic and aleatoric uncertainties in deep learning-based approaches. In this paper, we analyze the two types of uncertainties in medical image segmentation tasks and propose a reduction method that trains models with data augmentation. Occluded (sheltered) zones in images are reduced by 2D imaging of 3D organ surfaces from different angles. Rotation transformations and noise are estimated by Monte Carlo simulation with prior parameter distributions, and the aleatoric uncertainty is quantified in this process. Experiments on segmentation of computed tomography images demonstrate that overconfident incorrect predictions are reduced through uncertainty reduction and that our method outperforms prediction baselines based on epistemic and aleatoric estimation.
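As a rough illustration of the Monte Carlo estimation described above, the sketch below perturbs an input with random rotations and Gaussian noise, runs a model several times, and takes the per-voxel variance of the outputs as an uncertainty map. `predict_fn`, the sample count, and the noise level are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def mc_uncertainty(predict_fn, image, n_samples=20, sigma=0.01, seed=0):
    """Monte Carlo sketch: perturb the input with random 90-degree rotations
    and Gaussian noise, predict, undo the rotation, and aggregate. The mean
    is the prediction; the per-voxel variance serves as an uncertainty map."""
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(n_samples):
        k = int(rng.integers(0, 4))
        noisy = np.rot90(image, k) + rng.normal(0.0, sigma, image.shape)
        pred = np.rot90(predict_fn(noisy), -k)  # map back to original frame
        probs.append(pred)
    probs = np.stack(probs)
    return probs.mean(axis=0), probs.var(axis=0)
```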
154
Kuang Z, Yan Z, Yu L, Deng X, Hua Y, Li S. Uncertainty-Aware Deep Learning with Cross-Task Supervision for PHE Segmentation on CT Images. IEEE J Biomed Health Inform 2022;26:2615-2626. PMID: 34986106; DOI: 10.1109/jbhi.2021.3137603.
Abstract
Perihematomal edema (PHE) volume, surrounding spontaneous intracerebral hemorrhage (SICH), is an important biomarker for the presence of SICH-associated diseases. However, due to the irregular shapes and extremely low contrast of PHE on CT images, manually annotating PHE pixel-wise is time-consuming and labour-intensive even for experienced experts, which makes it almost infeasible to deploy current supervised deep learning approaches for automated PHE segmentation. How to develop annotation-efficient deep learning to achieve accurate PHE segmentation is an open problem. In this paper, we, for the first time, propose a cross-task supervised framework that uses slice-level PHE labels and pixel-wise SICH annotations, which are more accessible in clinical scenarios than pixel-wise PHE annotations. Specifically, we first train a multi-level classifier based on slice-level PHE labels to produce high-quality class activation maps (CAMs) as pseudo PHE annotations. Then, we train a deep learning model to produce accurate PHE segmentation by iteratively refining the pseudo annotations via an uncertainty-aware corrective training strategy for noise removal and a distance-aware loss for background compression. Experimental results demonstrate that the proposed framework achieves performance comparable to fully supervised methods on PHE segmentation and largely improves the baseline performance where only pseudo PHE labels are used for training. We believe the findings from this study of using cross-task supervision for annotation-efficient deep learning can be applied to other medical image processing applications.
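The class activation maps used above as pseudo annotations can be computed in the standard (Zhou et al.-style) way; a minimal PyTorch sketch, where `features` and `fc_weight` are assumed to come from the trained classifier:

```python
import torch

def class_activation_map(features, fc_weight, class_idx):
    """CAM sketch: weight the last convolutional feature maps by the fully
    connected weights of the target class, then rectify and normalize.
    features: (C, H, W) from the final conv layer; fc_weight: (n_classes, C)."""
    cam = torch.einsum('c,chw->hw', fc_weight[class_idx], features)
    cam = torch.relu(cam)                    # keep positive evidence only
    return cam / (cam.max() + 1e-8)          # normalize to [0, 1]
```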
155
Akhavanallaf A, Fayad H, Salimi Y, Aly A, Kharita H, Al Naemi H, Zaidi H. An update on computational anthropomorphic anatomical models. Digit Health 2022;8:20552076221111941. PMID: 35847523; PMCID: PMC9277432; DOI: 10.1177/20552076221111941.
Abstract
The prevalent availability of high-performance computing, coupled with validated computerized simulation platforms released as open-source packages, has motivated progress in the development of realistic anthropomorphic computational models of the human anatomy. The main applications of these advanced tools have focused on imaging physics and computational internal/external radiation dosimetry research. This paper provides an updated review of state-of-the-art developments and recent advances in the design of sophisticated computational models of the human anatomy, with a particular focus on their use in radiation dosimetry calculations. The consolidation of flexible and realistic computational models with biological data and accurate radiation transport modeling tools makes it possible to produce dosimetric data that reflect actual setups in clinical settings. These simulation methodologies and results are helpful resources for the medical physics and medical imaging communities and are expected to impact the fields of medical imaging and dosimetry calculations profoundly.
Affiliation(s)
- Azadeh Akhavanallaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hadi Fayad
- Hamad Medical Corporation, Doha, Qatar
- Weill Cornell Medicine, Doha, Qatar
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Antar Aly
- Hamad Medical Corporation, Doha, Qatar
- Weill Cornell Medicine, Doha, Qatar
- Huda Al Naemi
- Hamad Medical Corporation, Doha, Qatar
- Weill Cornell Medicine, Doha, Qatar
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
156
Chen S, Zhong X, Dorn S, Ravikumar N, Tao Q, Huang X, Lell M, Kachelriess M, Maier A. Improving Generalization Capability of Multiorgan Segmentation Models Using Dual-Energy CT. IEEE Trans Radiat Plasma Med Sci 2022. DOI: 10.1109/trpms.2021.3055199.
157
Tushar FI, D’Anniballe VM, Hou R, Mazurowski MA, Fu W, Samei E, Rubin GD, Lo JY. Classification of Multiple Diseases on Body CT Scans Using Weakly Supervised Deep Learning. Radiol Artif Intell 2022;4:e210026. PMID: 35146433; PMCID: PMC8823458; DOI: 10.1148/ryai.210026.
Abstract
PURPOSE To design multidisease classifiers for body CT scans for three different organ systems using automatically extracted labels from radiology text reports. MATERIALS AND METHODS This retrospective study included a total of 12 092 patients (mean age, 57 years ± 18 [standard deviation]; 6172 women) for model development and testing. Rule-based algorithms were used to extract 19 225 disease labels from 13 667 body CT scans performed between 2012 and 2017. Using a three-dimensional DenseVNet, three organ systems were segmented: lungs and pleura, liver and gallbladder, and kidneys and ureters. For each organ system, a three-dimensional convolutional neural network classified each as no apparent disease or for the presence of four common diseases, for a total of 15 different labels across all three models. Testing was performed on a subset of 2158 CT volumes relative to 2875 manually derived reference labels from 2133 patients (mean age, 58 years ± 18; 1079 women). Performance was reported as area under the receiver operating characteristic curve (AUC), with 95% CIs calculated using the DeLong method. RESULTS Manual validation of the extracted labels confirmed 91%-99% accuracy across the 15 different labels. AUCs for lungs and pleura labels were as follows: atelectasis, 0.77 (95% CI: 0.74, 0.81); nodule, 0.65 (95% CI: 0.61, 0.69); emphysema, 0.89 (95% CI: 0.86, 0.92); effusion, 0.97 (95% CI: 0.96, 0.98); and no apparent disease, 0.89 (95% CI: 0.87, 0.91). AUCs for liver and gallbladder were as follows: hepatobiliary calcification, 0.62 (95% CI: 0.56, 0.67); lesion, 0.73 (95% CI: 0.69, 0.77); dilation, 0.87 (95% CI: 0.84, 0.90); fatty, 0.89 (95% CI: 0.86, 0.92); and no apparent disease, 0.82 (95% CI: 0.78, 0.85). AUCs for kidneys and ureters were as follows: stone, 0.83 (95% CI: 0.79, 0.87); atrophy, 0.92 (95% CI: 0.89, 0.94); lesion, 0.68 (95% CI: 0.64, 0.72); cyst, 0.70 (95% CI: 0.66, 0.73); and no apparent disease, 0.79 (95% CI: 0.75, 0.83). CONCLUSION Weakly supervised deep learning models were able to classify diverse diseases in multiple organ systems from CT scans. Keywords: CT, Diagnosis/Classification/Application Domain, Semisupervised Learning, Whole-Body Imaging. © RSNA, 2022.
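The study reports AUCs with DeLong 95% CIs; DeLong's analytic method is somewhat involved, so as an illustrative substitute the sketch below computes a bootstrap percentile CI instead (a deliberate simplification, not the paper's method):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_bootstrap_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """AUC with a bootstrap percentile confidence interval."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    auc = roc_auc_score(y_true, y_score)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if np.unique(y_true[idx]).size < 2:   # resample must contain both classes
            continue
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return auc, (lo, hi)
```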
158
Chen X, Chen Z, Li J, Zhang YD, Lin X, Qian X. Model-Driven Deep Learning Method for Pancreatic Cancer Segmentation Based on Spiral-Transformation. IEEE Trans Med Imaging 2022;41:75-87. PMID: 34383646; DOI: 10.1109/tmi.2021.3104460.
Abstract
Pancreatic cancer is a lethal malignant tumor with one of the worst prognoses. Accurate segmentation of pancreatic cancer is vital in clinical diagnosis and treatment. Due to the unclear boundary and small size of cancers, it is challenging to both manually annotate and automatically segment cancers. Considering 3D information utilization and small sample sizes, we propose a model-driven deep learning method for pancreatic cancer segmentation based on spiral transformation. Specifically, a spiral-transformation algorithm with uniform sampling was developed to map 3D images onto 2D planes while preserving the spatial relationship between textures, thus addressing the challenge of effectively applying 3D contextual information in a 2D model. This study is the first to introduce spiral transformation in a segmentation task to provide effective data augmentation, alleviating the issue of small sample size. Moreover, a transformation-weight-corrected module was embedded into the deep learning model to unify the entire framework. It can achieve 2D segmentation with a corresponding 3D rebuilding constraint to overcome the non-unique 3D rebuilding results caused by uniform and dense sampling. A smooth regularization based on rebuilding prior knowledge was also designed to optimize segmentation results. Extensive experiments showed that the proposed method achieved a promising segmentation performance on multi-parametric MRIs, where T2, T1, ADC, and DWI images achieved DSCs of 65.6%, 64.0%, 64.5%, and 65.3%, respectively. This method can provide a novel paradigm to efficiently apply 3D information and augment sample sizes in the development of artificial intelligence for cancer segmentation. Our source code will be released at https://github.com/SJTUBME-QianLab/Spiral-Segmentation.
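To make the spiral idea concrete, here is a minimal sketch of one possible spherical-spiral sampling that maps a 3D volume onto a 2D plane (radius × spiral position); the parameterization and sampling densities are illustrative assumptions, not the paper's exact transformation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def spiral_transform(vol, n_turns=10, n_samples=256, n_radii=64):
    """Sample a 3D volume along rays whose directions follow a spherical
    spiral; laying out (radius, spiral position) gives a 2D image."""
    center = (np.array(vol.shape) - 1) / 2.0
    r_max = min(vol.shape) / 2.0 - 1
    t = np.linspace(0.0, 1.0, n_samples)
    theta = np.arccos(1 - 2 * t)              # polar angle, uniform in cos
    phi = 2 * np.pi * n_turns * t             # azimuth winds n_turns times
    dirs = np.stack([np.cos(theta),
                     np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi)])      # (3, n_samples)
    radii = np.linspace(0.0, r_max, n_radii)
    coords = center[:, None, None] + radii[None, :, None] * dirs[:, None, :]
    return map_coordinates(vol, coords.reshape(3, -1),
                           order=1).reshape(n_radii, n_samples)
```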
159
Lee CE, Chung M, Shin YG. Voxel-level Siamese Representation Learning for Abdominal Multi-Organ Segmentation. Comput Methods Programs Biomed 2022;213:106547. PMID: 34839269; DOI: 10.1016/j.cmpb.2021.106547.
Abstract
BACKGROUND AND OBJECTIVE Recent works in medical image segmentation have actively explored various deep learning architectures and objective functions to encode high-level features from volumetric data, owing to limited image annotations. However, most existing approaches tend to ignore cross-volume global context and define context relations in the decision space. In this work, we propose a novel voxel-level Siamese representation learning method for abdominal multi-organ segmentation that improves the representation space. METHODS The proposed method enforces voxel-wise feature relations in the representation space to leverage limited datasets more comprehensively and achieve better performance. Inspired by recent progress in contrastive learning, we encourage voxel-wise features from the same class to be projected to the same point, without using negative samples. Moreover, we introduce a multi-resolution context aggregation method that aggregates features from multiple hidden layers, encoding both the global and local contexts for segmentation. RESULTS In our experiments on the multi-organ dataset, the method outperformed existing approaches by 2% in Dice similarity coefficient. Qualitative visualizations of the representation spaces demonstrate that the improvements were gained primarily by a disentangled feature space. CONCLUSION Our new representation learning method successfully encoded high-level features in the representation space using a limited dataset and showed superior accuracy in the medical image segmentation task compared to other contrastive-loss-based methods. Moreover, our method can be easily applied to other networks without additional parameters at inference.
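A minimal sketch of projecting same-class voxels to the same point without negatives, here approximated by aligning per-class prototype embeddings of two augmented views (the prototype simplification is ours, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def voxel_siamese_loss(feat1, feat2, labels1, labels2, n_classes):
    """Pull the per-class mean voxel embeddings of two views together using
    cosine distance; no negative pairs are used.
    feat: (C, D, H, W) voxel embeddings; labels: (D, H, W) integer masks."""
    loss, pairs = feat1.new_zeros(()), 0
    for c in range(n_classes):
        m1, m2 = labels1 == c, labels2 == c
        if m1.any() and m2.any():
            z1 = F.normalize(feat1[:, m1].mean(dim=1), dim=0)
            z2 = F.normalize(feat2[:, m2].mean(dim=1), dim=0)
            loss = loss + (1.0 - (z1 * z2).sum())   # 1 - cosine similarity
            pairs += 1
    return loss / max(pairs, 1)
```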
Affiliation(s)
- Chae Eun Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Minyoung Chung
- School of Software, Soongsil University, 369 Sangdo-Ro, Dongjak-Gu, Seoul, 06978, Republic of Korea
- Yeong-Gil Shin
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
160
Sun L, Li C, Ding X, Huang Y, Chen Z, Wang G, Yu Y, Paisley J. Few-shot medical image segmentation using a global correlation network with discriminative embedding. Comput Biol Med 2022;140:105067. PMID: 34920364; DOI: 10.1016/j.compbiomed.2021.105067.
Abstract
Despite impressive developments in deep convolutional neural networks for medical imaging, the paradigm of supervised learning requires numerous annotations during training to avoid overfitting. In clinical cases, massive semantic annotations are difficult to acquire when biomedical expert knowledge is required, and it is common that only a few annotated classes are available. In this study, we propose a new approach to few-shot medical image segmentation, which enables a segmentation model to generalize quickly to an unseen class with few training images. We construct the few-shot image segmentation mechanism using a deep convolutional network trained episodically. Motivated by the spatial consistency and regularity in medical images, we developed an efficient global correlation module to model the correlation between a support and query image and incorporated it into the deep network. We also enhanced the discrimination ability of the deep embedding to encourage clustering of feature domains belonging to the same class while keeping the feature domains of different organs far apart. We experimented using anatomical abdomen images from both CT and MRI modalities.
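The global correlation module is described only at a high level here; a plausible dense-correlation sketch between support and query feature maps (the names and shapes are our assumptions):

```python
import torch
import torch.nn.functional as F

def global_correlation(support_feat, query_feat):
    """Cosine similarity between every query location and every support
    location. support_feat, query_feat: (B, C, H, W) from a shared encoder.
    Returns (B, H*W_query, H*W_support)."""
    B, C, H, W = support_feat.shape
    s = F.normalize(support_feat.view(B, C, -1), dim=1)   # unit-norm channels
    q = F.normalize(query_feat.view(B, C, -1), dim=1)
    return torch.bmm(q.transpose(1, 2), s)
```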
Affiliation(s)
- Liyan Sun
- School of Informatics, Xiamen University, Xiamen 361005, Fujian, China; School of Electronic Science and Engineering, Xiamen University, Xiamen 361005, Fujian, China
- Chenxin Li
- School of Informatics, Xiamen University, Xiamen 361005, Fujian, China
- Xinghao Ding
- School of Informatics, Xiamen University, Xiamen 361005, Fujian, China
- Yue Huang
- School of Informatics, Xiamen University, Xiamen 361005, Fujian, China
- Zhong Chen
- School of Electronic Science and Engineering, Xiamen University, Xiamen 361005, Fujian, China
- Guisheng Wang
- Department of Radiology, Third Medical Centre, Chinese PLA General Hospital, Beijing 100036, China
- Yizhou Yu
- Deepwise AI Laboratory, Beijing 100125, China
- John Paisley
- Department of Electrical Engineering and the Data Science Institute, Columbia University, New York, NY 10027, USA
161
RMS-UNet: Residual multi-scale UNet for liver and lesion segmentation. Artif Intell Med 2022;124:102231. DOI: 10.1016/j.artmed.2021.102231.
162
Chen X, Fu R, Shao Q, Chen Y, Ye Q, Li S, He X, Zhu J. Application of artificial intelligence to pancreatic adenocarcinoma. Front Oncol 2022;12:960056. PMID: 35936738; PMCID: PMC9353734; DOI: 10.3389/fonc.2022.960056.
Abstract
BACKGROUND AND OBJECTIVES Pancreatic cancer (PC) is one of the deadliest cancers worldwide, although substantial advances have been made in its comprehensive treatment. The development of artificial intelligence (AI) technology has allowed its clinical applications to expand remarkably in recent years. Diverse methods and algorithms are employed by AI to extrapolate new data from clinical records to aid in the treatment of PC. In this review, we summarize AI's use in several aspects of PC diagnosis and therapy, as well as its limitations and potential future research avenues. METHODS We examined the most recent research on the use of AI in PC. The articles were categorized and examined according to the medical task of their algorithm. Two search engines, PubMed and Google Scholar, were used to screen the articles. RESULTS Overall, 66 papers published in or after 2001 were selected. Of the four medical tasks (risk assessment, diagnosis, treatment, and prognosis prediction), diagnosis was the most frequently researched, and retrospective single-center studies were the most prevalent. We found that the different medical tasks and algorithms included in the reviewed studies caused the performance of their models to vary greatly. Deep learning algorithms, on the other hand, produced excellent results in all of the subdivisions studied. CONCLUSIONS AI is a promising tool for helping PC patients and may contribute to improved patient outcomes. The integration of humans and AI in clinical medicine is still in its infancy and requires the in-depth cooperation of multidisciplinary personnel.
Affiliation(s)
- Xi Chen
- Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Ruibiao Fu
- Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Qian Shao
- Department of Surgical Ward 1, Ningbo Women and Children’s Hospital, Ningbo, China
- Yan Chen
- Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Qinghuang Ye
- Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Sheng Li
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Xiongxiong He
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Jinhui Zhu
- Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Correspondence: Jinhui Zhu
163
He J, Zhou G, Zhou S, Chen Y. Online Hard Patch Mining using Shape Models and Bandit Algorithm for Multi-organ Segmentation. IEEE J Biomed Health Inform 2021;26:2648-2659. PMID: 34928809; DOI: 10.1109/jbhi.2021.3136597.
Abstract
Hard sample selection can effectively improve model convergence by extracting the most representative samples from a training set. However, due to the large size of medical images, existing sampling strategies either exploit hard samples insufficiently or incur high time costs for sample selection when adopted by 3D patch-based models in the field of multi-organ segmentation. In this paper, we present a novel and effective online hard patch mining (OHPM) algorithm. In our method, an average shape model that can be mapped to all training images is constructed to guide the exploration of hard patches and aggregate feedback from predicted patches. The process of hard mining is formalized as a multi-armed bandit problem and solved with bandit algorithms. With the shape model, OHPM adds negligible time cost and can intuitively locate difficult anatomical areas during training. The use of bandit algorithms ensures online and sufficient hard mining. We integrate OHPM with advanced segmentation networks and evaluate them on two datasets containing different anatomical structures. Comparative experiments with other sampling strategies demonstrate the superiority of OHPM in boosting segmentation performance and improving model convergence. The results on each dataset with each network suggest that OHPM significantly outperforms other sampling strategies by nearly 2% in average Dice score.
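The abstract does not say which bandit algorithm is used; as one concrete possibility, a UCB1 miner over anatomical regions, where the reward for a region could be, e.g., one minus the Dice score of the last patch drawn from it (the class name and the reward choice are our assumptions):

```python
import math

class UCB1PatchMiner:
    """UCB1 bandit: each arm is an anatomical region of the shape model;
    higher reward means the region is harder and should be sampled more."""
    def __init__(self, n_regions, c=1.4):
        self.c = c
        self.counts = [0] * n_regions
        self.values = [0.0] * n_regions
        self.t = 0

    def select(self):
        self.t += 1
        for arm, n in enumerate(self.counts):
            if n == 0:                        # play every arm once first
                return arm
        return max(range(len(self.counts)),
                   key=lambda a: self.values[a]
                   + self.c * math.sqrt(math.log(self.t) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```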
164
Guo B, Zhou F, Liu B, Bai X. Voxel-Wise Adversarial FiboNet for 3D Cerebrovascular Segmentation on Magnetic Resonance Angiography Images. Front Neurosci 2021;15:756536. PMID: 34899162; PMCID: PMC8660083; DOI: 10.3389/fnins.2021.756536.
Abstract
Cerebrovascular segmentation is important in various clinical applications, such as surgical planning and computer-aided diagnosis. In order to achieve high segmentation performance, three challenging problems should be taken into consideration: (1) large variations in vascular anatomies and voxel intensities; (2) severe class imbalance between foreground and background voxels; and (3) image noise of different magnitudes. Deep learning-based methods for cerebrovascular segmentation that do not consider these challenges achieve only limited accuracy. To overcome these limitations, we propose an end-to-end adversarial model called FiboNet-VANGAN. Our contributions can be summarized as follows: (1) to address the first problem, a discriminator is proposed to regularize for voxel-wise distribution consistency between the segmentation results and the ground truth; (2) to mitigate the class imbalance, we use the sum of cross-entropy and a Dice term as the loss function of the generator, and focal loss as the loss function of the discriminator; (3) we propose a new feature connection, based on which a generator called FiboNet is built. By incorporating the Dice coefficient in the training of FiboNet, noise robustness can be improved by a large margin. We evaluate our method on a healthy magnetic resonance angiography (MRA) dataset to validate its effectiveness. A brain atrophy MRA dataset was also collected to test the performance of each method on abnormal cases. Results show that the three problems in cerebrovascular segmentation mentioned above can be alleviated and that high segmentation accuracy can be achieved on both datasets using our method.
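The generator loss (cross-entropy plus Dice) is a standard combination; a minimal sketch for a binary foreground/background case:

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, eps=1e-6):
    """Cross-entropy plus soft Dice loss for binary segmentation.
    logits: (B, 2, D, H, W); target: (B, D, H, W) with values in {0, 1}."""
    ce = F.cross_entropy(logits, target)
    prob = torch.softmax(logits, dim=1)[:, 1]       # foreground probability
    tgt = (target == 1).float()
    inter = (prob * tgt).sum()
    dice = (2.0 * inter + eps) / (prob.sum() + tgt.sum() + eps)
    return ce + (1.0 - dice)
```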
Affiliation(s)
- Bin Guo
- Image Processing Center, School of Astronautics, Beihang University, Beijing, China
- Fugen Zhou
- Image Processing Center, School of Astronautics, Beihang University, Beijing, China
- Bo Liu
- Image Processing Center, School of Astronautics, Beihang University, Beijing, China
- Xiangzhi Bai
- Image Processing Center, School of Astronautics, Beihang University, Beijing, China
165
Meddeb A, Kossen T, Bressem KK, Hamm B, Nagel SN. Evaluation of a Deep Learning Algorithm for Automated Spleen Segmentation in Patients with Conditions Directly or Indirectly Affecting the Spleen. Tomography 2021;7:950-960. PMID: 34941650; PMCID: PMC8704906; DOI: 10.3390/tomography7040078.
Abstract
The aim of this study was to develop a deep learning-based algorithm for fully automated spleen segmentation using CT images and to evaluate its performance in conditions directly or indirectly affecting the spleen (e.g., splenomegaly, ascites). For this, a 3D U-Net was trained on an in-house dataset (n = 61) including diseases with and without splenic involvement (in-house U-Net), and on an open-source dataset from the Medical Segmentation Decathlon (open dataset, n = 61) without splenic abnormalities (open U-Net). Both datasets were split into a training (n = 32; 52%), a validation (n = 9; 15%) and a testing dataset (n = 20; 33%). The segmentation performance of the two models was measured using four established metrics, including the Dice similarity coefficient (DSC). On the open test dataset, the in-house and open U-Net achieved mean DSCs of 0.906 and 0.897, respectively (p = 0.526). On the in-house test dataset, the in-house U-Net achieved a mean DSC of 0.941, whereas the open U-Net obtained a mean DSC of 0.648 (p < 0.001), showing very poor segmentation results in patients with abnormalities in or surrounding the spleen. Thus, for reliable, fully automated spleen segmentation in clinical routine, the training dataset of a deep learning-based algorithm should include conditions that directly or indirectly affect the spleen.
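For reference, the Dice similarity coefficient used throughout these studies is straightforward to compute from two binary masks:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """DSC = 2|A∩B| / (|A| + |B|) for binary masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0
```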
Affiliation(s)
- Aymen Meddeb
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Klinik für Radiologie, Hindenburgdamm 30, 12203 Berlin, Germany
- Correspondence: ; Tel.: +49-30-450-527792
- Tabea Kossen
- CLAIM—Charité Lab for AI in Medicine, Charité—Universitätsmedizin Berlin, Augustenburger Platz 1, 13353 Berlin, Germany
- Keno K. Bressem
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Klinik für Radiologie, Hindenburgdamm 30, 12203 Berlin, Germany
- Berlin Institute of Health, Charité—Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany
- Bernd Hamm
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Klinik für Radiologie, Hindenburgdamm 30, 12203 Berlin, Germany
- Sebastian N. Nagel
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Klinik für Radiologie, Hindenburgdamm 30, 12203 Berlin, Germany
166
Chaichana A, Frey EC, Teyateeti A, Rhoongsittichai K, Tocharoenchai C, Pusuwan P, Jangpatarapongsa K. Automated segmentation of lung, liver, and liver tumors from Tc-99m MAA SPECT/CT images for Y-90 radioembolization using convolutional neural networks. Med Phys 2021;48:7877-7890. PMID: 34657293; PMCID: PMC9298038; DOI: 10.1002/mp.15303.
Abstract
PURPOSE 90Y selective internal radiation therapy (SIRT) has become a safe and effective treatment option for liver cancer. However, segmentation of the target and organs at risk is labor-intensive and time-consuming in 90Y SIRT planning. In this study, we developed a convolutional neural network (CNN)-based method for automated lung, liver, and tumor segmentation on 99mTc-MAA SPECT/CT images for 90Y SIRT planning. METHODS 99mTc-MAA SPECT/CT images and corresponding clinical segmentations were retrospectively collected from 56 patients who underwent 90Y SIRT. The collected data were used to train three CNN-based segmentation algorithms for lung, liver, and tumor segmentation. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), surface DSC, and average symmetric surface distance (ASSD). Dosimetric parameters (volume, counts, and lung shunt fraction) were measured from the segmentation results and compared with those of the clinical reference segmentations. RESULTS The evaluation results show that the method can accurately segment lungs, liver, and tumor with median [interquartile range] DSCs of 0.98 [0.97-0.98], 0.91 [0.83-0.93], and 0.85 [0.71-0.88]; surface DSCs of 0.99 [0.97-0.99], 0.86 [0.77-0.93], and 0.85 [0.62-0.93]; and ASSDs of 0.91 [0.69-1.5], 4.8 [2.6-8.4], and 4.7 [3.5-9.2] mm, respectively. Dosimetric parameters from the three segmentation networks were consistent with those from the reference segmentations. The overall segmentation took about 1 min per patient on an NVIDIA RTX-2080Ti GPU. CONCLUSION This work presents CNN-based algorithms to segment lungs, liver, and tumor from 99mTc-MAA SPECT/CT images. The results demonstrate the potential of the proposed CNN-based segmentation method for assisting 90Y SIRT planning while drastically reducing operator time.
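Given the organ masks and the SPECT volume, the lung shunt fraction reported as a dosimetric parameter follows its standard definition (lung counts over lung-plus-liver counts); a minimal sketch:

```python
import numpy as np

def lung_shunt_fraction(spect, lung_mask, liver_mask):
    """LSF = lung counts / (lung counts + liver counts), computed from a
    99mTc-MAA SPECT volume and binary segmentation masks."""
    lung = float(spect[lung_mask > 0].sum())
    liver = float(spect[liver_mask > 0].sum())
    return lung / (lung + liver)
```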
Affiliation(s)
- Anucha Chaichana
- Department of Radiological Technology, Faculty of Medical Technology, Mahidol University, Bangkok 10700, Thailand
- Eric C. Frey
- Johns Hopkins School of Medicine, Johns Hopkins University, Baltimore, Maryland 21218, USA
- Radiopharmaceutical Imaging and Dosimetry, LLC, Lutherville, Maryland 21093, USA
- Ajalaya Teyateeti
- Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok 10700, Thailand
- Kijja Rhoongsittichai
- Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok 10700, Thailand
- Chiraporn Tocharoenchai
- Department of Radiological Technology, Faculty of Medical Technology, Mahidol University, Bangkok 10700, Thailand
- Pawana Pusuwan
- Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok 10700, Thailand
167
Nam JG, Witanto JN, Park SJ, Yoo SJ, Goo JM, Yoon SH. Automatic pulmonary vessel segmentation on noncontrast chest CT: deep learning algorithm developed using spatiotemporally matched virtual noncontrast images and low-keV contrast-enhanced vessel maps. Eur Radiol 2021;31:9012-9021. PMID: 34009411; PMCID: PMC8131193; DOI: 10.1007/s00330-021-08036-z.
Abstract
OBJECTIVES To develop a deep learning-based pulmonary vessel segmentation algorithm (DLVS) for noncontrast chest CT and to investigate its clinical implications in assessing vascular remodeling in chronic obstructive pulmonary disease (COPD) patients. METHODS For development, 104 pulmonary CT angiography scans (49,054 slices) using a dual-source CT were collected, and spatiotemporally matched virtual noncontrast and 50-keV images were generated. Vessel maps were extracted from the 50-keV images. The 3-dimensional U-Net-based DLVS was trained to segment pulmonary vessels (with a vessel map as the output) from virtual noncontrast images (as the input). For external validation, vendor-independent noncontrast CT images (n = 14) and the VESSEL12 challenge open dataset (n = 3) were used. For each case, 200 points were selected, including 20 intra-lesional points, and the probability value for each point was extracted. For clinical validation, we included 281 COPD patients with low-dose noncontrast CTs. The DLVS-calculated volume of vessels with a cross-sectional area < 5 mm2 (PVV5) and the PVV5 divided by total vessel volume (%PVV5) were measured. RESULTS DLVS correctly segmented 99.1% of the intravascular points (1,387/1,400) and 93.1% of the extravascular points (1,309/1,400). The areas under the receiver operating characteristic curve (AUROCs) were 0.977 and 0.969 for the two external validation datasets. For the COPD patients, both PVV5 and %PVV5 successfully differentiated severe patients with FEV1 < 50% (AUROCs: 0.715 and 0.804) and were significantly correlated with the emphysema index (all P < .05). CONCLUSIONS DLVS successfully segmented pulmonary vessels on noncontrast chest CT by utilizing spatiotemporally matched 50-keV images from a dual-source CT scanner and showed promising clinical applicability in COPD. KEY POINTS • We developed a deep learning pulmonary vessel segmentation algorithm using virtual noncontrast images and 50-keV enhanced images produced by a dual-source CT scanner. • Our algorithm successfully segmented vessels in diseased lungs. • Our algorithm showed promising results in assessing the loss of small vessel density in COPD patients.
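As a rough illustration of the small-vessel metric, the sketch below approximates each vessel voxel's radius by its distance to the vessel wall and reports the volume of vessel with cross-sectional area below 5 mm2; proper radius estimation along vessel centerlines is more involved, so treat this as a simplification:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def small_vessel_volume(vessel_mask, voxel_mm=(1.0, 1.0, 1.0), area_mm2=5.0):
    """Approximate PVV5 and %PVV5 from a binary vessel mask."""
    radius = distance_transform_edt(vessel_mask, sampling=voxel_mm)  # mm
    small = vessel_mask.astype(bool) & (np.pi * radius ** 2 < area_mm2)
    voxel_vol = float(np.prod(voxel_mm))                             # mm^3
    pvv5 = small.sum() * voxel_vol
    total = vessel_mask.astype(bool).sum() * voxel_vol
    return pvv5, (100.0 * pvv5 / total if total else 0.0)
```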
Affiliation(s)
- Ju Gang Nam
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Seoul National University College of Medicine, Seoul, 03080, Republic of Korea
- Sang Joon Park
- Seoul National University College of Medicine, Seoul, 03080, Republic of Korea
- MedicalIp Co., Ltd., Seoul, 03127, Republic of Korea
- Seung Jin Yoo
- Department of Radiology, Hanyang University Medical Center and College of Medicine, Seoul, 04763, Republic of Korea
- Jin Mo Goo
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Seoul National University College of Medicine, Seoul, 03080, Republic of Korea
- Soon Ho Yoon
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Seoul National University College of Medicine, Seoul, 03080, Republic of Korea
168
Song WY, Robar JL, Morén B, Larsson T, Carlsson Tedgren Å, Jia X. Emerging technologies in brachytherapy. Phys Med Biol 2021;66. PMID: 34710856; DOI: 10.1088/1361-6560/ac344d.
Abstract
Brachytherapy is a mature treatment modality, and the literature is abundant in review articles and comprehensive books on the latest established as well as evolving clinical practices. The intent of this article is to part ways with that tradition, look beyond the current state of the art, and review emerging technologies that are noteworthy and may drive future innovations in the field. There are plenty of candidate topics that deserve a deeper look, of course, but within the practical limits of this platform we explore four topics that are worthwhile to review in detail at this time. First, intensity modulated brachytherapy (IMBT) is reviewed. IMBT takes advantage of the anisotropic radiation profiles generated through intelligent high-density shielding designs incorporated into sources and applicators so as to achieve high-quality plans. Second, emerging applications of 3D printing (i.e., additive manufacturing) in brachytherapy are reviewed. With the advent of 3D printing, interest in this technology in brachytherapy has been immense and translation swift, owing to its potential to tailor applicators and treatments to each individual patient. This is followed, third, by innovations in treatment planning concerning catheter placement and dwell times, where new modelling approaches, solution algorithms, and technological advances are reviewed. Fourth and lastly, applications of a new machine learning technique, called deep learning, which has the potential to improve and automate all aspects of the brachytherapy workflow, are reviewed. We do not expect that all the ideas and innovations reviewed here will ultimately reach the clinic but, nonetheless, this review provides a decent glimpse of what is to come. It will be exciting to watch as IMBT, 3D printing, novel optimization algorithms, and deep learning technologies evolve, translate into pilot testing and sensibly phased clinical trials, and ultimately make a difference for cancer patients. Today's fancy is tomorrow's reality. The future is bright for brachytherapy.
Affiliation(s)
- William Y Song
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia, United States of America
- James L Robar
- Department of Radiation Oncology, Dalhousie University, Halifax, Nova Scotia, Canada
- Björn Morén
- Department of Mathematics, Linköping University, Linköping, Sweden
- Torbjörn Larsson
- Department of Mathematics, Linköping University, Linköping, Sweden
- Åsa Carlsson Tedgren
- Radiation Physics, Department of Medical and Health Sciences, Linköping University, Linköping, Sweden
- Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Stockholm, Sweden
- Department of Oncology Pathology, Karolinska Institute, Stockholm, Sweden
- Xun Jia
- Innovative Technology Of Radiotherapy Computations and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States of America
169
Hayashi H, Uemura N, Matsumura K, Zhao L, Sato H, Shiraishi Y, Yamashita YI, Baba H. Recent advances in artificial intelligence for pancreatic ductal adenocarcinoma. World J Gastroenterol 2021;27:7480-7496. PMID: 34887644; PMCID: PMC8613738; DOI: 10.3748/wjg.v27.i43.7480.
Abstract
Pancreatic ductal adenocarcinoma (PDAC) remains the most lethal type of cancer. The 5-year survival rate for patients with an early-stage diagnosis can be as high as 20%, suggesting that early diagnosis plays a pivotal role in the prognostic improvement of PDAC cases. In the medical field, the broad availability of biomedical data has led to the advent of the "big data" era. To overcome this deadly disease, how to fully exploit big data is a new challenge in the era of precision medicine. Artificial intelligence (AI) is the ability of a machine to learn and display intelligence to solve problems. AI can help to transform big data into clinically actionable insights more efficiently, reduce inevitable errors to improve diagnostic accuracy, and make real-time predictions. AI-based omics analyses will become the next alternative approach to overcoming this poor-prognosis disease by discovering biomarkers for early detection, providing molecular/genomic subtyping, offering treatment guidance, and predicting recurrence and survival. Advances in AI may therefore improve PDAC survival outcomes in the near future. The present review focuses mainly on recent advances of AI in PDAC for clinicians. We believe that breakthroughs will soon emerge to fight this deadly disease using AI-navigated precision medicine.
Affiliation(s)
- Hiromitsu Hayashi
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
- Norio Uemura
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
- Kazuki Matsumura
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
- Liu Zhao
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
- Hiroki Sato
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
- Yuta Shiraishi
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
- Yo-ichi Yamashita
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
- Hideo Baba
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
170
Yao S, Ye Z, Wei Y, Jiang HY, Song B. Radiomics in hepatocellular carcinoma: A state-of-the-art review. World J Gastrointest Oncol 2021;13:1599-1615. PMID: 34853638; PMCID: PMC8603458; DOI: 10.4251/wjgo.v13.i11.1599.
Abstract
Hepatocellular carcinoma (HCC) is one of the most common cancers and the second major contributor to cancer-related mortality. Radiomics, a burgeoning technology that can provide invisible high-dimensional quantitative and mineable data derived from routinely acquired images, has enormous potential for HCC management from diagnosis to prognosis, as well as contributing to the rapidly developing deep learning methodology. This article aims to review the radiomics approach and its current state-of-the-art clinical application scenarios in HCC. The limitations, challenges, and thoughts on future directions are also summarized.
Affiliation(s)
- Shan Yao
- Department of Radiology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Zheng Ye
- Department of Radiology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Yi Wei
- Department of Radiology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Han-Yu Jiang
- Department of Radiology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Bin Song
- Department of Radiology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
171
Huang B, Wei Z, Tang X, Fujita H, Cai Q, Gao Y, Wu T, Zhou L. Deep learning network for medical volume data segmentation based on multi axial plane fusion. Comput Methods Programs Biomed 2021;212:106480. PMID: 34736168; DOI: 10.1016/j.cmpb.2021.106480.
Abstract
BACKGROUND AND OBJECTIVE High-dimensional data generally contain more accurate information for medical image analysis; e.g., computerized tomography (CT) data can depict the three-dimensional structure of organs more precisely. However, high-dimensional data often demand enormous computation and memory in deep learning convolution networks, while dimensionality reduction usually leads to performance degradation. METHODS In this paper, a two-dimensional deep learning segmentation network is proposed for medical volume data, based on multi-pinacoidal plane fusion, to cover more information while keeping computation under control. The proposed model extracts global information across the different input planes and is compatible with different backbones. RESULTS Our approach works with different backbone networks. Using the approach, DeepUnet's Dice coefficient (Dice) and Positive Predictive Value (PPV) are 0.883 and 0.982, showing satisfactory progress, and various backbones benefit from the method. CONCLUSIONS Comparisons across different backbones show that the proposed network with multi-pinacoidal plane fusion achieves better results both quantitatively and qualitatively.
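The simplest form of multi-plane fusion runs a 2D model along the axial, coronal, and sagittal planes and averages the three probability volumes; the paper's multi-pinacoidal fusion is a learned variant of this idea, so the sketch below is only a baseline illustration:

```python
import numpy as np

def fuse_plane_predictions(vol, predict_slice):
    """Average 2D predictions taken along the three orthogonal axes.
    predict_slice: (H, W) array -> (H, W) probability map."""
    fused = np.zeros(vol.shape, dtype=np.float32)
    for axis in range(3):
        moved = np.moveaxis(vol, axis, 0)           # slice along this axis
        pred = np.stack([predict_slice(s) for s in moved])
        fused += np.moveaxis(pred, 0, axis)         # back to original layout
    return fused / 3.0
```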
Affiliation(s)
- Bo Huang
- Shanghai University of Engineering Science, 333 Longteng Road, Songjiang District, Shanghai 201620, China
- Ziran Wei
- Shanghai Changzheng Hospital, 415 Fengyang Road, Huangpu District, Shanghai 200003, China
- Xianhua Tang
- Changzhou United Imaging Healthcare Surgical Technology Co., Ltd, No.5 Longfan Road, Wujin High-Tech Industrial Development Zone, Changzhou, China
- Hamido Fujita
- Faculty of Information Technology, Ho Chi Minh City University of Technology (HUTECH), Ho Chi Minh City, Vietnam; i-SOMET.org Incorporated Association, Iwate 020-0104, Japan; Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain; College of Mathematical Sciences, Harbin Engineering University, Harbin 150001, China
- Qingping Cai
- Shanghai Changzheng Hospital, 415 Fengyang Road, Huangpu District, Shanghai 200003, China
- Yongbin Gao
- Shanghai University of Engineering Science, 333 Longteng Road, Songjiang District, Shanghai 201620, China
- Tao Wu
- Shanghai University of Medicine & Health Sciences, Shanghai, China
- Liang Zhou
- Shanghai University of Medicine & Health Sciences, Shanghai, China
172
Dai X, Lei Y, Wynne J, Janopaul-Naylor J, Wang T, Roper J, Curran WJ, Liu T, Patel P, Yang X. Synthetic CT-aided multiorgan segmentation for CBCT-guided adaptive pancreatic radiotherapy. Med Phys 2021;48:7063-7073. PMID: 34609745; PMCID: PMC8595847; DOI: 10.1002/mp.15264.
Abstract
PURPOSE The delineation of organs at risk (OARs) is fundamental to cone-beam CT (CBCT)-based adaptive radiotherapy treatment planning, but is time consuming, labor intensive, and subject to interoperator variability. We investigated a deep learning-based rapid multiorgan delineation method for use in CBCT-guided adaptive pancreatic radiotherapy. METHODS To improve the accuracy of OAR delineation, two innovative solutions have been proposed in this study. First, instead of directly segmenting organs on CBCT images, a pretrained cycle-consistent generative adversarial network (cycleGAN) was applied to generating synthetic CT images given CBCT images. Second, an advanced deep learning model called mask-scoring regional convolutional neural network (MS R-CNN) was applied on those synthetic CT to detect the positions and shapes of multiple organs simultaneously for final segmentation. The OAR contours delineated by the proposed method were validated and compared with expert-drawn contours for geometric agreement using the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS). RESULTS Across eight abdominal OARs including duodenum, large bowel, small bowel, left and right kidneys, liver, spinal cord, and stomach, the geometric comparisons between automated and expert contours are as follows: 0.92 (0.89-0.97) mean DSC, 2.90 mm (1.63-4.19 mm) mean HD95, 0.89 mm (0.61-1.36 mm) mean MSD, and 1.43 mm (0.90-2.10 mm) mean RMS. Compared to the competing methods, our proposed method had significant improvements (p < 0.05) in all the metrics for all the eight organs. Once the model was trained, the contours of eight OARs can be obtained on the order of seconds. CONCLUSIONS We demonstrated the feasibility of a synthetic CT-aided deep learning framework for automated delineation of multiple OARs on CBCT. The proposed method could be implemented in the setting of pancreatic adaptive radiotherapy to rapidly contour OARs with high accuracy.
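Of the geometric metrics reported (DSC, HD95, MSD, RMS), HD95 is the least obvious to implement; a compact sketch using distance transforms on the mask surfaces (assumes both masks are non-empty):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between binary masks."""
    def surface(m):
        m = m.astype(bool)
        return m & ~binary_erosion(m)       # boundary voxels only
    sa, sb = surface(mask_a), surface(mask_b)
    d_to_b = distance_transform_edt(~sb, sampling=spacing)[sa]
    d_to_a = distance_transform_edt(~sa, sampling=spacing)[sb]
    return np.percentile(np.concatenate([d_to_b, d_to_a]), 95)
```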
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- James Janopaul-Naylor
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
173
DV-Net: Accurate liver vessel segmentation via dense connection model with D-BCE loss function. Knowl Based Syst 2021. DOI: 10.1016/j.knosys.2021.107471.
174
Kanakatte A, Bhatia D, Ghose A. Heart Region Segmentation using Dense VNet from Multimodality Images. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:3255-3258. PMID: 34891935; DOI: 10.1109/embc46164.2021.9630303.
Abstract
Cardiovascular diseases (CVD) have been identified as one of the most common causes of death in the world. Advances in imaging techniques are allowing timely detection of CVD and helping physicians provide correct treatment plans to save lives. Segmentation and identification of the various substructures of the heart are very important for modeling a digital twin of the patient-specific heart. Manual delineation of these substructures is tedious and time-consuming. Here we have implemented Dense VNet for detecting substructures of the heart from both CT and MRI multimodality data. Due to the limited availability of data, we implemented an on-the-fly elastic deformation data augmentation technique. The proposed method has been shown to outperform other methods reported in the literature on both CT and MRI datasets.
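The on-the-fly elastic deformation augmentation mentioned above can be sketched as follows. This is a generic recipe (a random displacement field smoothed by a Gaussian), with illustrative parameters rather than the settings used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(volume, alpha=15.0, sigma=3.0, rng=None):
    """Apply a random smooth (elastic) deformation to a 3D volume.

    alpha scales displacement magnitude (voxels); sigma smooths the field.
    """
    rng = rng or np.random.default_rng()
    shape = volume.shape
    # One smooth random displacement field per axis.
    disp = [gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
            for _ in range(3)]
    grid = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    coords = np.stack([g + d for g, d in zip(grid, disp)])
    return map_coordinates(volume, coords, order=1, mode="reflect")
```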
|
175
|
Shiri I, Arabi H, Sanaat A, Jenabi E, Becker M, Zaidi H. Fully Automated Gross Tumor Volume Delineation From PET in Head and Neck Cancer Using Deep Learning Algorithms. Clin Nucl Med 2021; 46:872-883. [PMID: 34238799 DOI: 10.1097/rlu.0000000000003789] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
PURPOSE The availability of automated, accurate, and robust gross tumor volume (GTV) segmentation algorithms is critical for the management of head and neck cancer (HNC) patients. In this work, we evaluated 3 state-of-the-art deep learning algorithms combined with 8 different loss functions for PET image segmentation using a comprehensive training set and evaluated their performance on an external validation set of HNC patients. PATIENTS AND METHODS 18F-FDG PET/CT images of 470 patients presenting with HNC, on which manually defined GTVs served as the standard of reference, were used for training (340 patients), evaluation (30 patients), and testing (100 patients from different centers) of these algorithms. PET image intensity was converted to SUVs and normalized in the range (0-1) using the SUVmax of the whole data set. PET images were cropped to 12 × 12 × 12 cm3 subvolumes using isotropic voxel spacing of 3 × 3 × 3 mm3 containing the whole tumor and neighboring background including lymph nodes. We used different approaches for data augmentation, including rotation (-15 degrees, +15 degrees), scaling (-20%, 20%), random flipping (3 axes), and elastic deformation (sigma = 1 and proportion to deform = 0.7), to increase the number of training sets. Three state-of-the-art networks, including Dense-VNet, NN-UNet, and Res-Net, with 8 different loss functions, including Dice, generalized Wasserstein Dice loss, Dice plus XEnt loss, generalized Dice loss, cross-entropy, sensitivity-specificity, and Tversky, were used. Overall, 28 different networks were built. Standard image segmentation metrics, including Dice similarity, image-derived PET metrics, first-order, and shape radiomic features, were used for performance assessment of these algorithms. RESULTS The best results in terms of Dice coefficient (mean ± SD) were achieved by cross-entropy for Res-Net (0.86 ± 0.05; 95% confidence interval [CI], 0.85-0.87) and Dense-VNet (0.85 ± 0.058; 95% CI, 0.84-0.86), and by Dice plus XEnt for NN-UNet (0.87 ± 0.05; 95% CI, 0.86-0.88). The difference between the 3 networks was not statistically significant (P > 0.05). The percent relative error (RE%) of SUVmax quantification was less than 5% in networks with a Dice coefficient of more than 0.84, and the lowest RE% (0.41%) was achieved by Res-Net with cross-entropy loss. For the maximum 3-dimensional diameter and sphericity shape features, all networks achieved an RE ≤ 5% and ≤ 10%, respectively, reflecting small variability. CONCLUSIONS Deep learning algorithms exhibited promising performance for automated GTV delineation on HNC PET images. Different loss functions performed competitively across networks; cross-entropy for Res-Net and Dense-VNet, and Dice plus XEnt for NN-UNet, emerged as reliable configurations for GTV delineation. Caution should be exercised for clinical deployment owing to the occurrence of outliers in deep learning-based algorithms.
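Of the loss functions compared here, the Tversky loss generalizes Dice by weighting false positives and false negatives separately. A minimal PyTorch sketch, not the authors' implementation, is shown below; with alpha = beta = 0.5 it reduces to the Dice loss.

```python
import torch

def tversky_loss(logits, target, alpha=0.5, beta=0.5, eps=1e-6):
    """Tversky loss for binary volumetric segmentation.

    logits: (N, 1, D, H, W) raw scores; target: same shape, values in {0, 1}.
    """
    p = torch.sigmoid(logits)
    tp = (p * target).sum(dim=(1, 2, 3, 4))          # true positives
    fp = (p * (1 - target)).sum(dim=(1, 2, 3, 4))    # false positives
    fn = ((1 - p) * target).sum(dim=(1, 2, 3, 4))    # false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1 - tversky).mean()
```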
Affiliation(s)
- Isaac Shiri
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Hossein Arabi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Amirhossein Sanaat
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Elnaz Jenabi
- Research Centre for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | | | | |
|
176
|
Lin H, Li Z, Yang Z, Wang Y. Variance-aware attention U-Net for multi-organ segmentation. Med Phys 2021; 48:7864-7876. [PMID: 34716711 DOI: 10.1002/mp.15322] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2021] [Revised: 10/06/2021] [Accepted: 10/23/2021] [Indexed: 01/20/2023] Open
Abstract
PURPOSE With the continuous development of deep learning-based medical image segmentation technology, more robust and accurate performance is expected on more challenging tasks, such as those involving multiple organs, small or irregular regions, and ambiguous boundaries. METHODS We propose a variance-aware attention U-Net to solve the problem of multi-organ segmentation. Specifically, a simple yet effective variance-based uncertainty mechanism is devised to evaluate the discrimination of each voxel via its prediction probability. The proposed variance uncertainty is further embedded into an attention architecture, which not only aggregates multi-level deep features at the global level but also forces the network to pay extra attention to voxels with uncertain predictions during training. RESULTS Extensive experiments on a challenging abdominal multi-organ CT dataset show that our proposed method consistently outperforms cutting-edge attention networks with respect to the Dice index (DSC), 95% Hausdorff distance (95HD), and average symmetric surface distance (ASSD). CONCLUSIONS The proposed network provides an accurate and robust solution for multi-organ segmentation and has the potential to be used for improving other segmentation applications.
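The paper's variance-based uncertainty mechanism scores each voxel via its prediction probability. One plausible, simplified reading is to treat the predicted class probability p as defining a variance-style score p(1 - p) and to up-weight uncertain voxels in the loss; the sketch below illustrates that idea only and is not the authors' exact mechanism.

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_ce(logits, target, gamma=1.0):
    """Cross-entropy re-weighted by a variance-style voxel uncertainty.

    p*(1-p) of the top predicted probability peaks where the network is
    least decided; those voxels receive extra weight during training.
    """
    probs = F.softmax(logits, dim=1)                     # (N, C, D, H, W)
    p_max = probs.max(dim=1).values                      # confidence per voxel
    uncertainty = p_max * (1.0 - p_max)                  # 0 when fully confident
    ce = F.cross_entropy(logits, target, reduction="none")
    weights = 1.0 + gamma * uncertainty.detach() / 0.25  # scale to [1, 1+gamma]
    return (weights * ce).mean()
```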
Affiliation(s)
- Haoneng Lin
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Shenzhen University, Shenzhen, China
| | - Zongshang Li
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Shenzhen University, Shenzhen, China
| | - Zefan Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Shenzhen University, Shenzhen, China
| | - Yi Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| |
|
177
|
Ibanez V, Gunz S, Erne S, Rawdon EJ, Ampanozi G, Franckenberg S, Sieberth T, Affolter R, Ebert LC, Dobay A. RiFNet: Automated rib fracture detection in postmortem computed tomography. Forensic Sci Med Pathol 2021; 18:20-29. [PMID: 34709561 PMCID: PMC8921053 DOI: 10.1007/s12024-021-00431-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/20/2021] [Indexed: 12/31/2022]
Abstract
Imaging techniques are widely used for medical diagnostics. In some cases, a lack of medical practitioners who can manually analyze the images can lead to a bottleneck. Consequently, we developed a custom-made convolutional neural network (RiFNet = Rib Fracture Network) that can detect rib fractures in postmortem computed tomography (PMCT). In a retrospective cohort study, we retrieved PMCT data from 195 postmortem cases with rib fractures from July 2017 to April 2018 from our database. The computed tomography data were prepared using a plugin in the commercial imaging software Syngo.via, whereby the rib cage was unfolded onto a single in-plane image reformation. Out of the 195 cases, a total of 585 images were extracted and divided into two groups labeled "with" and "without" fractures. These two groups were subsequently divided into training, validation, and test datasets to assess the performance of RiFNet. In addition, we explored the possibility of applying transfer learning techniques to our dataset by choosing two independent noncommercial off-the-shelf convolutional neural network architectures (ResNet50 V2 and Inception V3) and compared their performance with that of RiFNet. When using pre-trained convolutional neural networks, we achieved an F1 score of 0.64 with Inception V3 and an F1 score of 0.61 with ResNet50 V2. We obtained an average F1 score of 0.91 ± 0.04 with RiFNet. RiFNet is efficient in detecting rib fractures on postmortem computed tomography. Transfer learning techniques are not necessarily well adapted for classification in postmortem computed tomography.
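A transfer-learning baseline of the kind compared against RiFNet can be sketched with tf.keras as below: a frozen ImageNet-pretrained Inception V3 backbone with a small binary head. The input size, optimizer, and head design are illustrative assumptions, not the study's configuration.

```python
import tensorflow as tf

def build_transfer_classifier(input_shape=(299, 299, 3)):
    """Binary 'fracture / no fracture' classifier on top of a frozen
    ImageNet-pretrained Inception V3 backbone."""
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False  # keep pretrained features fixed
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])
    return model
```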
Affiliation(s)
- Victor Ibanez
- Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
| | - Samuel Gunz
- Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
| | - Svenja Erne
- Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
| | - Eric J Rawdon
- Department of Mathematics, University of St. Thomas, St. Paul, Minnesota, 55105-1079, USA
| | - Garyfalia Ampanozi
- Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
| | - Sabine Franckenberg
- Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland; Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
| | - Till Sieberth
- Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
| | - Raffael Affolter
- Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
| | - Lars C Ebert
- Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
| | - Akos Dobay
- Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland.
| |
|
178
|
Dai X, Lei Y, Wang T, Zhou J, Roper J, McDonald M, Beitler JJ, Curran WJ, Liu T, Yang X. Automated delineation of head and neck organs at risk using synthetic MRI-aided mask scoring regional convolutional neural network. Med Phys 2021; 48:5862-5873. [PMID: 34342878 PMCID: PMC11700377 DOI: 10.1002/mp.15146] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Revised: 06/30/2021] [Accepted: 07/25/2021] [Indexed: 01/10/2023] Open
Abstract
PURPOSE Auto-segmentation algorithms offer a potential solution to eliminate the labor-intensive, time-consuming, and observer-dependent manual delineation of organs-at-risk (OARs) in radiotherapy treatment planning. This study aimed to develop a deep learning-based automated OAR delineation method to tackle the challenges that remain in achieving reliable expert performance with state-of-the-art auto-delineation algorithms. METHODS The accuracy of OAR delineation is expected to improve by utilizing the complementary contrasts provided by computed tomography (CT) (bony-structure contrast) and magnetic resonance imaging (MRI) (soft-tissue contrast). Given CT images, synthetic MR images were first generated by a pre-trained cycle-consistent generative adversarial network. The features of CT and synthetic MRI were then extracted and combined for the final delineation of organs using a mask scoring regional convolutional neural network. Both in-house and public datasets containing CT scans from head-and-neck (HN) cancer patients were adopted to quantitatively evaluate the performance of the proposed method against current state-of-the-art algorithms using metrics including the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS). RESULTS Across all 18 OARs in our in-house dataset, the proposed method achieved an average DSC, HD95, MSD, and RMS of 0.77 (0.58-0.90), 2.90 mm (1.32-7.63 mm), 0.89 mm (0.42-1.85 mm), and 1.44 mm (0.71-3.15 mm), respectively, outperforming the current state-of-the-art algorithms by 6%, 16%, 25%, and 36%, respectively. On public datasets, an average DSC of 0.86 (0.73-0.97) was achieved across all nine OARs, 6% better than the competing methods. CONCLUSION We demonstrated the feasibility of a synthetic MRI-aided deep learning framework for automated delineation of OARs in HN radiotherapy treatment planning. The proposed method could be adopted into routine HN cancer radiotherapy treatment planning to rapidly contour OARs with high accuracy.
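The CT plus synthetic-MRI fusion idea can be illustrated with a toy two-stream encoder that concatenates features channel-wise before a segmentation head. This PyTorch sketch is grossly simplified relative to the paper's mask-scoring R-CNN pipeline and is only meant to show the fusion step; all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class DualEncoderFusion(nn.Module):
    """Toy two-stream encoder: one branch for CT, one for synthetic MRI,
    with channel-wise feature concatenation before a segmentation head."""
    def __init__(self, out_classes=18, width=16):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv3d(1, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv3d(width, width, 3, padding=1), nn.ReLU(inplace=True))
        self.ct_enc, self.smri_enc = branch(), branch()
        self.head = nn.Conv3d(2 * width, out_classes, 1)

    def forward(self, ct, smri):
        # Fuse the complementary bony (CT) and soft-tissue (sMRI) features.
        fused = torch.cat([self.ct_enc(ct), self.smri_enc(smri)], dim=1)
        return self.head(fused)  # per-voxel class logits
```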
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Jun Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Mark McDonald
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Jonathan J Beitler
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| |
|
179
|
Cui H, Wei D, Ma K, Gu S, Zheng Y. A Unified Framework for Generalized Low-Shot Medical Image Segmentation With Scarce Data. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2656-2671. [PMID: 33338014 DOI: 10.1109/tmi.2020.3045775] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Medical image segmentation has achieved remarkable advancements using deep neural networks (DNNs). However, DNNs often need large amounts of data and annotations for training, both of which can be difficult and costly to obtain. In this work, we propose a unified framework for generalized low-shot (one- and few-shot) medical image segmentation based on distance metric learning (DML). Unlike most existing methods, which only deal with the lack of annotations while assuming an abundance of data, our framework works with extreme scarcity of both, which is ideal for rare diseases. Via DML, the framework learns a multimodal mixture representation for each category and performs dense predictions based on cosine distances between the pixels' deep embeddings and the category representations. The multimodal representations effectively utilize inter-subject similarities and intraclass variations to overcome overfitting due to extremely limited data. In addition, we propose adaptive mixing coefficients for the multimodal mixture distributions to adaptively emphasize the modes better suited to the current input. The representations are implicitly embedded as weights of the fc layer, such that the cosine distances can be computed efficiently via forward propagation. In our experiments on brain MRI and abdominal CT datasets, the proposed framework achieves superior performance for low-shot segmentation compared with standard DNN-based (3D U-Net) and classical registration-based (ANTs) methods, e.g., achieving mean Dice coefficients of 81%/69% for brain tissue/abdominal multi-organ segmentation using a single training sample, as compared with 52%/31% and 72%/35% for the U-Net and ANTs, respectively.
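The core prediction rule, cosine similarity between pixel embeddings and per-class prototypes acting as fc-layer weights, can be sketched as follows in PyTorch; the temperature, shapes, and single-prototype-per-class simplification are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosinePrototypeHead(nn.Module):
    """Dense prediction via cosine similarity between pixel embeddings and
    per-class prototype vectors (equivalent to a normalized 1x1 conv)."""
    def __init__(self, embed_dim=64, n_classes=4, tau=10.0):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_classes, embed_dim))
        self.tau = tau  # temperature scaling before the softmax

    def forward(self, feats):                    # feats: (N, E, H, W)
        f = F.normalize(feats, dim=1)            # unit-norm pixel embeddings
        w = F.normalize(self.prototypes, dim=1)  # unit-norm prototypes
        logits = torch.einsum("nehw,ce->nchw", f, w)  # cosine similarities
        return self.tau * logits                 # feed to cross-entropy
```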
|
180
|
Haghighi F, Taher MRH, Zhou Z, Gotway MB, Liang J. Transferable Visual Words: Exploiting the Semantics of Anatomical Patterns for Self-Supervised Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2857-2868. [PMID: 33617450 PMCID: PMC8516596 DOI: 10.1109/tmi.2021.3060634] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
This paper introduces a new concept called "transferable visual words" (TransVW), aiming to achieve annotation efficiency for deep learning in medical image analysis. Medical imaging, which focuses on particular parts of the body for defined clinical purposes, generates images of great similarity in anatomy across patients and yields sophisticated anatomical patterns across images, which are associated with rich semantics about human anatomy and which are natural visual words. We show that these visual words can be automatically harvested according to anatomical consistency via self-discovery, and that the self-discovered visual words can serve as strong yet free supervision signals for deep models to learn semantics-enriched generic image representations via self-supervision (self-classification and self-restoration). Our extensive experiments demonstrate the annotation efficiency of TransVW by offering higher performance and faster convergence with reduced annotation cost in several applications. TransVW has several important advantages: (1) it is a fully autodidactic scheme, which exploits the semantics of visual words for self-supervised learning, requiring no expert annotation; (2) visual word learning is an add-on strategy, which complements existing self-supervised methods, boosting their performance; and (3) the learned image representation is semantics-enriched, yielding models that have proven to be more robust and generalizable, saving annotation effort for a variety of applications through transfer learning. Our code, pre-trained models, and curated visual words are available at https://github.com/JLiangLab/TransVW.
|
181
|
Explaining clinical decision support systems in medical imaging using cycle-consistent activation maximization. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.05.081] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
182
|
Celiac trunk segmentation incorporating with additional contour constraint. APPL INTELL 2021. [DOI: 10.1007/s10489-021-02221-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
183
|
Taylor JC, Sharkey MJ, Metherall P. In-house development, implementation and evaluation of machine learning software for automated clinical scan processing. Nucl Med Commun 2021; 42:1157-1161. [PMID: 34001826 DOI: 10.1097/mnm.0000000000001436] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Objectives
Scanning of myocardial perfusion patients on a dedicated cardiac gamma camera (GE Discovery NM 530c) requires careful positioning between stress and rest acquisitions. The offset between scans is routinely measured through image registration and analysis of the transformation matrix. Accurate registration requires a 3D mask to be drawn manually over the left ventricle, excluding any significant extracardiac tracer uptake.
This work sought to automate mask drawing as part of a new, more efficient system for checking relative patient position.
Methods
Algorithm development utilised 9604 manually drawn segmentation masks (10% for validation, 10% for testing). The NiftyNet platform was used to train, optimise and test a convolutional neural network.
The algorithm was packaged as a clinical tool and utilised prospectively alongside the manual technique. The software was evaluated for 343 patients to ensure adequate functioning and to assess performance.
Results
The difference in patient offset measurements between manual and automated methods was small (mean of −0.01 mm (±0.4 mm) in the test dataset, mean difference of −0.05 mm (±0.5 mm) during clinical evaluation). The position-check software was found to be reliable during prospective evaluation, producing segmentations that adequately enclosed the left ventricle in all cases.
Conclusion
This work demonstrates that established machine learning technology and modest hardware can be used to create automated segmentation tools that perform well in the clinic.
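As a small illustration of the position check described above: once a rigid registration between the stress and rest scans is expressed as a 4x4 homogeneous matrix, the patient offset is simply the translation component of that matrix. A minimal NumPy sketch (the registration step itself is omitted, and the example matrix is hypothetical):

```python
import numpy as np

def patient_offset_mm(transform):
    """Translation (mm) between stress and rest positions, taken from a
    4x4 homogeneous rigid-body transform produced by image registration."""
    t = np.asarray(transform)[:3, 3]      # translation column
    return t, float(np.linalg.norm(t))    # per-axis offset and magnitude

# Hypothetical example: a 2 mm shift along z between acquisitions.
T = np.eye(4)
T[2, 3] = 2.0
offset, magnitude = patient_offset_mm(T)
```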
|
184
|
Yang SD, Zhao YQ, Zhang F, Liao M, Yang Z, Wang YJ, Yu LL. An efficient two-step multi-organ registration on abdominal CT via deep-learning based segmentation. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103027] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
185
|
Wang X, Jiang L, Li L, Xu M, Deng X, Dai L, Xu X, Li T, Guo Y, Wang Z, Dragotti PL. Joint Learning of 3D Lesion Segmentation and Classification for Explainable COVID-19 Diagnosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2463-2476. [PMID: 33983881 PMCID: PMC8544955 DOI: 10.1109/tmi.2021.3079709] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 03/30/2021] [Accepted: 05/09/2021] [Indexed: 05/13/2023]
Abstract
Given the outbreak of the COVID-19 pandemic and the shortage of medical resources, extensive deep learning models have been proposed for automatic COVID-19 diagnosis based on 3D computed tomography (CT) scans. However, existing models process 3D lesion segmentation and disease classification independently, ignoring the inherent correlation between these two tasks. In this paper, we propose a joint deep learning model of 3D lesion segmentation and classification for diagnosing COVID-19, called DeepSC-COVID, as a first attempt in this direction. Specifically, we establish a large-scale CT database containing 1,805 3D CT scans with fine-grained lesion annotations and reveal 4 findings about lesion differences between COVID-19 and community-acquired pneumonia (CAP). Inspired by our findings, DeepSC-COVID is designed with 3 subnets: a cross-task feature subnet for feature extraction, a 3D lesion subnet for lesion segmentation, and a classification subnet for disease diagnosis. In addition, a task-aware loss is proposed for learning the task interaction across the 3D lesion and classification subnets. Different from all existing models for COVID-19 diagnosis, our model is interpretable with fine-grained 3D lesion distribution. Finally, extensive experimental results show that the joint learning framework in our model significantly improves the performance of 3D lesion segmentation and disease classification in both efficiency and efficacy.
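The joint segmentation-plus-classification objective can be illustrated by a generic multi-task loss over a shared backbone, as sketched below in PyTorch. The paper's task-aware loss additionally models the interaction between subnets, which this simplified form does not capture; the weights are illustrative.

```python
import torch
import torch.nn.functional as F

def joint_loss(seg_logits, seg_target, cls_logits, cls_target,
               w_seg=1.0, w_cls=1.0):
    """Generic joint objective: soft Dice on lesion voxels plus
    cross-entropy on the scan-level diagnosis label."""
    p = torch.sigmoid(seg_logits)                     # (N, 1, D, H, W)
    inter = (p * seg_target).sum(dim=(1, 2, 3, 4))
    denom = p.sum(dim=(1, 2, 3, 4)) + seg_target.sum(dim=(1, 2, 3, 4))
    dice_loss = 1 - ((2 * inter + 1e-6) / (denom + 1e-6)).mean()
    cls_loss = F.cross_entropy(cls_logits, cls_target)  # (N, C) vs (N,)
    return w_seg * dice_loss + w_cls * cls_loss
```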
Affiliation(s)
- Xiaofei Wang
- School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
| | - Lai Jiang
- School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
| | - Liu Li
- Department of Computing, Imperial College London, London SW7 2AZ, U.K.
| | - Mai Xu
- School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
| | - Xin Deng
- School of Cyber Science and Technology, Beihang University, Beijing 100191, China
| | - Lisong Dai
- Liyuan Hospital, Huazhong University of Science and Technology, Wuhan 430077, China
| | - Xiangyang Xu
- Liyuan Hospital, Huazhong University of Science and Technology, Wuhan 430077, China
| | - Tianyi Li
- School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
| | - Yichen Guo
- School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
| | - Zulin Wang
- School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
| | - Pier Luigi Dragotti
- Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, U.K.
| |
|
186
|
Zhang R, Chung ACS. MedQ: Lossless ultra-low-bit neural network quantization for medical image segmentation. Med Image Anal 2021; 73:102200. [PMID: 34416578 DOI: 10.1016/j.media.2021.102200] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Revised: 06/30/2021] [Accepted: 07/26/2021] [Indexed: 10/20/2022]
Abstract
Implementing deep convolutional neural networks (CNNs) with boolean arithmetic is ideal for eliminating the notoriously high computational expense of deep learning models. However, although lossless model compression via weight-only quantization has been achieved in previous works, it remains an open problem how to reduce the computation precision of CNNs without losing performance, especially for medical image segmentation tasks where data dimension is high and annotation is scarce. This paper presents a novel CNN quantization framework that can squeeze a deep model (both parameters and activations) to extremely low bitwidth, e.g., 1-2 bits, while maintaining its high performance. In the new method, we first design a strong baseline quantizer with an optimizable quantization range. Then, to relieve the back-propagation difficulty caused by the discontinuous quantization function, we design a radical residual connection scheme that allows gradients to flow through every quantized layer freely. Moreover, a tanh-based derivative function is used to further boost gradient flow, and a distributional loss is employed to regularize the model output. Extensive experiments and ablation studies are conducted on two well-established public 3D segmentation datasets, i.e., BRATS2020 and LiTS. Experimental results show that our framework not only outperforms state-of-the-art quantization approaches significantly, but also achieves lossless performance on both datasets with ternary (2-bit) quantization.
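The tanh-based derivative used to pass gradients through a discontinuous quantizer can be sketched with a custom autograd function, as below. This is a minimal ternary illustration; the paper's quantizer also includes an optimizable range and the radical residual connection scheme, both omitted here, and the threshold is an assumed value.

```python
import torch

class TanhSTEQuantize(torch.autograd.Function):
    """Ternary {-1, 0, +1} quantizer with a tanh-shaped surrogate gradient."""
    @staticmethod
    def forward(ctx, x, threshold=0.05):
        ctx.save_for_backward(x)
        q = torch.zeros_like(x)
        q[x > threshold] = 1.0
        q[x < -threshold] = -1.0
        return q

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Smooth tanh derivative in place of the discontinuous staircase.
        surrogate = 1.0 - torch.tanh(2.0 * x) ** 2
        return grad_out * surrogate, None

w = torch.randn(8, requires_grad=True)
q = TanhSTEQuantize.apply(w)
q.sum().backward()  # gradients flow via the tanh surrogate
```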
Affiliation(s)
- Rongzhao Zhang
- Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong.
| | - Albert C S Chung
- Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong.
| |
|
187
|
Deep learning method for prediction of patient-specific dose distribution in breast cancer. Radiat Oncol 2021; 16:154. [PMID: 34404441 PMCID: PMC8369791 DOI: 10.1186/s13014-021-01864-9] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Accepted: 07/19/2021] [Indexed: 11/10/2022] Open
Abstract
Background Patient-specific dose prediction improves the efficiency and quality of radiation treatment planning and reduces the time required to find the optimal plan. In this study, a patient-specific dose prediction model was developed for left-sided breast cancer cases using deep learning, and its performance was compared with that of conventional knowledge-based planning using RapidPlan™. Methods Patient-specific dose prediction was performed using a contour image of the planning target volume (PTV) and organs at risk (OARs) with a U-Net-based modified dose prediction neural network. A database of 50 volumetric modulated arc therapy (VMAT) plans for left-sided breast cancer patients was utilized to produce training and validation datasets. The dose prediction deep neural network (DpNet), with the feature weights of the previously learned convolution layers, was applied to a cohort of 10 test sets. With the same patient data set, dose prediction was performed for the 10 test sets after training in RapidPlan. The 3D dose distribution, absolute dose difference error, dose-volume histogram, 2D gamma index, and iso-dose dice similarity coefficient were used for quantitative evaluation of the dose prediction. Results The mean absolute error (MAE) and one standard deviation (SD) between the clinical and deep learning dose prediction models were 0.02 ± 0.04%, 0.01 ± 0.83%, 0.16 ± 0.82%, 0.52 ± 0.97%, − 0.88 ± 1.83%, − 1.16 ± 2.58%, and − 0.97 ± 1.73% for D95%, Dmean in the PTV, and the OARs of the body, left breast, heart, left lung, and right lung, respectively, and those measured between the clinical and RapidPlan dose prediction models were 0.02 ± 0.14%, 0.87 ± 0.63%, − 0.29 ± 0.98%, 1.30 ± 0.86%, − 0.32 ± 1.10%, 0.12 ± 2.13%, and − 1.74 ± 1.79%, respectively. Conclusions In this study, a deep learning method for dose prediction was developed and demonstrated to accurately predict patient-specific doses for left-sided breast cancer. Using the deep learning framework, the efficiency and accuracy of the dose prediction were compared to those of RapidPlan. The doses predicted by deep learning were superior to the results of the RapidPlan-generated VMAT plan.
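Dose-volume histogram statistics such as D95% and Dmean, used in the comparisons above, can be computed directly from a dose grid and a binary structure mask. A minimal NumPy sketch (not the study's evaluation code):

```python
import numpy as np

def dvh_metrics(dose, mask):
    """D95 (minimum dose covering 95% of the structure) and Dmean from a
    3D dose grid (Gy) and a binary structure mask."""
    d = dose[mask.astype(bool)]
    d95 = np.percentile(d, 5)   # 95% of structure voxels receive at least this
    return {"D95": float(d95), "Dmean": float(d.mean())}
```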
|
188
|
Weber KA, Abbott R, Bojilov V, Smith AC, Wasielewski M, Hastie TJ, Parrish TB, Mackey S, Elliott JM. Multi-muscle deep learning segmentation to automate the quantification of muscle fat infiltration in cervical spine conditions. Sci Rep 2021; 11:16567. [PMID: 34400672 PMCID: PMC8368246 DOI: 10.1038/s41598-021-95972-x] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Accepted: 07/28/2021] [Indexed: 12/23/2022] Open
Abstract
Muscle fat infiltration (MFI) has been widely reported across cervical spine disorders. The quantification of MFI requires time-consuming and rater-dependent manual segmentation techniques. A convolutional neural network (CNN) model was trained to segment seven cervical spine muscle groups (left and right muscles segmented separately, 14 muscles total) from Dixon MRI scans (n = 17, 17 scans < 2 weeks post motor vehicle collision (MVC), and 17 scans 12 months post MVC). The CNN MFI measures demonstrated high test reliability and accuracy in an independent testing dataset (n = 18, 9 scans < 2 weeks post MVC, and 9 scans 12 months post MVC). Using the CNN in 84 participants with scans < 2 weeks post MVC (61 females, 23 males, age = 34.2 ± 10.7 years), differences in MFI between the muscle groups and relationships between MFI and sex, age, and body mass index (BMI) were explored. Averaging across all muscles, females had significantly higher MFI than males (p = 0.026). The deep cervical muscles demonstrated significantly greater MFI than the more superficial muscles (p < 0.001), and only MFI within the deep cervical muscles was moderately correlated with age (r > 0.300, p ≤ 0.001). CNNs allow for the accurate, rapid, and quantitative assessment of the composition of the architecturally complex muscles traversing the cervical spine. Acknowledging the wider reports of MFI in cervical spine disorders and the time required to manually segment the individual muscles, this CNN may have diagnostic, prognostic, and predictive value in disorders of the cervical spine.
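On Dixon fat-water imaging, MFI is commonly quantified as the mean fat fraction fat/(fat + water) within each segmented muscle. A minimal NumPy sketch is below; note that the authors' exact definition may differ.

```python
import numpy as np

def muscle_fat_infiltration(fat, water, mask, eps=1e-8):
    """Percent MFI inside a muscle mask from co-registered Dixon fat and
    water images: mean of fat / (fat + water) over segmented voxels."""
    m = mask.astype(bool)
    fraction = fat[m] / (fat[m] + water[m] + eps)
    return 100.0 * fraction.mean()
```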
Affiliation(s)
- Kenneth A Weber
- Division of Pain Medicine, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Palo Alto, CA, USA.
| | - Rebecca Abbott
- Department of Physical Therapy and Human Movement Sciences, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
| | - Vivie Bojilov
- Division of Pain Medicine, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Palo Alto, CA, USA
| | - Andrew C Smith
- Physical Therapy Program, Department of Physical Medicine and Rehabilitation, School of Medicine, University of Colorado, Aurora, CO, USA
| | - Marie Wasielewski
- Department of Physical Therapy and Human Movement Sciences, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
| | - Trevor J Hastie
- Statistics Department, Stanford University, Palo Alto, CA, USA
| | - Todd B Parrish
- Department of Radiology, Northwestern University, Chicago, IL, USA
| | - Sean Mackey
- Division of Pain Medicine, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Palo Alto, CA, USA
| | - James M Elliott
- Northern Sydney Local Health District, The Kolling Institute, St. Leonards, NSW, Australia; The Faculty of Medicine and Health, The University of Sydney, Camperdown, NSW, Australia; Department of Physical Therapy and Human Movement Sciences, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
| |
|
189
|
Fu W, Sharma S, Abadi E, Iliopoulos AS, Wang Q, Lo JY, Sun X, Segars WP, Samei E. iPhantom: A Framework for Automated Creation of Individualized Computational Phantoms and Its Application to CT Organ Dosimetry. IEEE J Biomed Health Inform 2021; 25:3061-3072. [PMID: 33651703 DOI: 10.1109/jbhi.2021.3063080] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE This study aims to develop and validate a novel framework, iPhantom, for automated creation of patient-specific phantoms or "digital twins" (DT) using patient medical images. The framework is applied to assess radiation dose to radiosensitive organs in CT imaging of individual patients. METHOD Given a volume of patient CT images, iPhantom segments selected anchor organs and structures (e.g., liver, bones, pancreas) using a learning-based model developed for multi-organ CT segmentation. Organs that are challenging to segment (e.g., intestines) are incorporated from a matched phantom template, using a diffeomorphic registration model developed for multi-organ phantom voxels. The resulting digital-twin phantoms are used to assess organ doses during routine CT exams. RESULT iPhantom was validated on both a set of XCAT digital phantoms (n = 50) and an independent clinical dataset (n = 10) with similar accuracy. iPhantom precisely predicted all organ locations, yielding Dice Similarity Coefficients (DSC) of 0.6-1.0 for anchor organs and DSC of 0.3-0.9 for all other organs. iPhantom showed <10% errors in estimated radiation dose for the majority of organs, which was notably superior to the state-of-the-art baseline method (20-35% dose errors). CONCLUSION iPhantom enables automated and accurate creation of patient-specific phantoms and, for the first time, provides sufficient and automated patient-specific dose estimates for CT dosimetry. SIGNIFICANCE The new framework brings the creation and application of computational human phantoms (CHPs) to the level of individual CHPs through automation, achieving wide and precise organ localization and paving the way for clinical monitoring, personalized optimization, and large-scale research.
|
190
|
Liang D, Wang L, Han D, Qiu J, Yin X, Yang Z, Xing J, Dong J, Ma Z. Semi 3D-TENet: Semi 3D network based on temporal information extraction for coronary artery segmentation from angiography video. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102894] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
191
|
Berbís MA, Aneiros-Fernández J, Mendoza Olivares FJ, Nava E, Luna A. Role of artificial intelligence in multidisciplinary imaging diagnosis of gastrointestinal diseases. World J Gastroenterol 2021; 27:4395-4412. [PMID: 34366612 PMCID: PMC8316909 DOI: 10.3748/wjg.v27.i27.4395] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 04/14/2021] [Accepted: 06/07/2021] [Indexed: 02/06/2023] Open
Abstract
The use of artificial intelligence-based tools is regarded as a promising approach to increase clinical efficiency in diagnostic imaging, improve the interpretability of results, and support decision-making for the detection and prevention of diseases. Radiology, endoscopy and pathology images are suitable for deep-learning analysis, potentially changing the way care is delivered in gastroenterology. The aim of this review is to examine the key aspects of different neural network architectures used for the evaluation of gastrointestinal conditions, by discussing how different models behave in critical tasks, such as lesion detection or characterization (i.e. the distinction between benign and malignant lesions of the esophagus, the stomach and the colon). To this end, we provide an overview on recent achievements and future prospects in deep learning methods applied to the analysis of radiology, endoscopy and histologic whole-slide images of the gastrointestinal tract.
Affiliation(s)
| | - José Aneiros-Fernández
- Department of Pathology, Hospital Universitario Clínico San Cecilio, Granada 18012, Spain
| | | | - Enrique Nava
- Department of Communications Engineering, University of Málaga, Malaga 29016, Spain
| | - Antonio Luna
- MRI Unit, Department of Radiology, HT Médica, Jaén 23007, Spain
| |
|
192
|
Zhang G, Yang Z, Huo B, Chai S, Jiang S. Multiorgan segmentation from partially labeled datasets with conditional nnU-Net. Comput Biol Med 2021; 136:104658. [PMID: 34311262 DOI: 10.1016/j.compbiomed.2021.104658] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Revised: 07/14/2021] [Accepted: 07/15/2021] [Indexed: 11/30/2022]
Abstract
Accurate and robust multiorgan abdominal CT segmentation plays a significant role in numerous clinical applications, such as therapy treatment planning and treatment delivery. Almost all existing segmentation networks rely on fully annotated data with strong supervision. However, producing fully annotated multiorgan data from CT images is both laborious and time-consuming. In comparison, massive partially labeled datasets are usually easily accessible. In this paper, we propose a conditional nnU-Net trained on the union of partially labeled datasets for multiorgan segmentation. The deep model employs the state-of-the-art nnU-Net as the backbone and introduces a conditioning strategy by feeding auxiliary information into the decoder architecture as an additional input layer. This model leverages the prior conditional information to identify the organ class at the pixel-wise level and encourages the recovery of organs' spatial information. Furthermore, we adopt a deep supervision mechanism to refine the outputs at different scales and apply a combination of Dice loss and Focal loss to optimize the training model. Our proposed method is evaluated on seven publicly available datasets of the liver, pancreas, spleen and kidney, on which promising segmentation performance has been achieved. The proposed conditional nnU-Net breaks down the barriers between nonoverlapping labeled datasets and further alleviates the problem of data hunger in multiorgan segmentation.
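The Dice-plus-Focal compound objective adopted here is a standard combination; a PyTorch sketch for multi-class volumetric segmentation is shown below. The weights and smoothing terms are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, target, gamma=2.0, w_dice=1.0, w_focal=1.0):
    """Compound Dice + Focal loss for multi-class segmentation.

    logits: (N, C, D, H, W); target: (N, D, H, W) integer labels.
    """
    n_classes = logits.shape[1]
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, n_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)
    inter = (probs * onehot).sum(dims)
    denom = probs.sum(dims) + onehot.sum(dims)
    dice = 1 - ((2 * inter + 1e-6) / (denom + 1e-6)).mean()
    logp = F.log_softmax(logits, dim=1)
    pt = (probs * onehot).sum(dim=1)          # probability of the true class
    focal = -(((1 - pt) ** gamma) * (logp * onehot).sum(dim=1)).mean()
    return w_dice * dice + w_focal * focal
```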
Affiliation(s)
- Guobin Zhang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
| | - Zhiyong Yang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
| | - Bin Huo
- Department of Oncology, Tianjin Medical University Second Hospital, Tianjin, 300211, China
| | - Shude Chai
- Department of Oncology, Tianjin Medical University Second Hospital, Tianjin, 300211, China
| | - Shan Jiang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China.
| |
|
193
|
Li X, Wang S, Niu X, Wang L, Chen P. 3D M-Net: Object-Specific 3D Segmentation Network Based on a Single Projection. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:5852595. [PMID: 34335721 PMCID: PMC8292052 DOI: 10.1155/2021/5852595] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 06/28/2021] [Indexed: 11/30/2022]
Abstract
The internal assembly correctness of industrial products directly affects their performance and service life. Industrial products are usually protected by opaque housing, so most internal detection methods are based on X-rays. Because of the dense structural features of industrial products, it is challenging to detect occluded parts from projections alone. Limited by data acquisition and reconstruction speeds, CT-based detection methods cannot achieve real-time detection. To solve these problems, we design an end-to-end single-projection 3D segmentation network. For a specific product, the network takes a single projection as input, segments the product components, and outputs 3D segmentation results. In this study, the feasibility of the network was verified on data containing several typical assembly errors. The qualitative and quantitative results reveal that the segmentation results can meet the real-time detection requirements of industrial assembly and exhibit high robustness to noise and component occlusion.
Affiliation(s)
- Xuan Li
- State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
| | - Sukai Wang
- State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
| | - Xiaodong Niu
- State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
| | - Liming Wang
- State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
| | - Ping Chen
- State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
| |
|
194
|
Liang X, Li N, Zhang Z, Xiong J, Zhou S, Xie Y. Incorporating the hybrid deformable model for improving the performance of abdominal CT segmentation via multi-scale feature fusion network. Med Image Anal 2021; 73:102156. [PMID: 34274689 DOI: 10.1016/j.media.2021.102156] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2020] [Revised: 06/22/2021] [Accepted: 06/28/2021] [Indexed: 01/17/2023]
Abstract
Automated multi-organ abdominal Computed Tomography (CT) image segmentation can assist treatment planning and diagnosis, and can improve the efficiency of many clinical workflows. 3-D Convolutional Neural Networks (CNNs) have recently attained state-of-the-art accuracy, typically relying on supervised training with large amounts of manually annotated data. Many methods use a data augmentation strategy with rigid or affine spatial transformations to alleviate the over-fitting problem and improve the network's robustness. However, rigid or affine spatial transformations fail to capture the complex voxel-based deformations in the abdomen, which is filled with soft organs. We developed a novel Hybrid Deformable Model (HDM), which consists of inter- and intra-patient deformations, for more effective data augmentation to tackle this issue. The inter-patient deformations were extracted from learning-based deformable registration between different patients, while the intra-patient deformations were formed using random 3-D Thin-Plate-Spline (TPS) transformations. Incorporating the HDM enabled the network to capture many of the subtle deformations of abdominal organs. To find a better solution and achieve faster convergence for network training, we fused the pre-trained multi-scale features into a 3-D attention U-Net. We directly compared the segmentation accuracy of the proposed method to that of previous techniques on several centers' datasets via cross-validation. The proposed method achieves an average Dice Similarity Coefficient (DSC) of 0.852, outperforming other state-of-the-art methods on multi-organ abdominal CT segmentation.
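The intra-patient component of the HDM, a random 3-D thin-plate-spline warp, can be sketched with SciPy as below. Control-point density and displacement magnitude are illustrative assumptions, and evaluating the spline at every voxel as done here is practical only for small volumes.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def random_tps_deform(volume, max_shift=4.0, rng=None):
    """Warp a 3D volume with a random thin-plate-spline displacement field
    defined by jittered control points (intra-patient augmentation)."""
    rng = rng or np.random.default_rng()
    shape = volume.shape
    # Control points on a coarse 3x3x3 grid, each given a random offset.
    axes = [np.linspace(0, s - 1, 3) for s in shape]
    ctrl = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
    shifts = rng.uniform(-max_shift, max_shift, ctrl.shape)
    # Interpolate the sparse displacements to every voxel.
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                indexing="ij"), -1).reshape(-1, 3)
    disp = RBFInterpolator(ctrl, shifts, kernel="thin_plate_spline")(grid)
    coords = (grid + disp).T.reshape(3, *shape)
    return map_coordinates(volume, coords, order=1, mode="nearest")
```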
Affiliation(s)
- Xiaokun Liang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China; Shenzhen Colleges of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China.
| | - Na Li
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China; Shenzhen Colleges of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
| | - Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
| | - Jing Xiong
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
| | - Shoujun Zhou
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China.
| | - Yaoqin Xie
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China.
| |
|
195
|
Pancreatic cancer tumor analysis in CT images using patch-based multi-resolution convolutional neural network. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102652] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
196
|
Zhou Q, Wang J, Guo J, Huang Z, Ding M, Yuchi M, Zhang X. Anterior chamber angle classification in anterior segment optical coherence tomography images using hybrid attention based pyramidal convolutional network. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102686] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
197
|
Saha A, Hosseinzadeh M, Huisman H. End-to-end prostate cancer detection in bpMRI via 3D CNNs: Effects of attention mechanisms, clinical priori and decoupled false positive reduction. Med Image Anal 2021; 73:102155. [PMID: 34245943 DOI: 10.1016/j.media.2021.102155] [Citation(s) in RCA: 71] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2021] [Revised: 05/30/2021] [Accepted: 06/08/2021] [Indexed: 01/22/2023]
Abstract
We present a multi-stage 3D computer-aided detection and diagnosis (CAD) model for automated localization of clinically significant prostate cancer (csPCa) in bi-parametric MR imaging (bpMRI). Deep attention mechanisms drive its detection network, targeting salient structures and highly discriminative feature dimensions across multiple resolutions. Its goal is to accurately identify csPCa lesions from indolent cancer and the wide range of benign pathology that can afflict the prostate gland. Simultaneously, a decoupled residual classifier is used to achieve consistent false positive reduction, without sacrificing high sensitivity or computational efficiency. In order to guide model generalization with domain-specific clinical knowledge, a probabilistic anatomical prior is used to encode the spatial prevalence and zonal distinction of csPCa. Using a large dataset of 1950 prostate bpMRI scans paired with radiologically-estimated annotations, we hypothesize that such CNN-based models can be trained to detect biopsy-confirmed malignancies in an independent cohort. For 486 institutional testing scans, the 3D CAD system achieves 83.69±5.22% and 93.19±2.96% detection sensitivity at 0.50 and 1.46 false positive(s) per patient, respectively, with 0.882±0.030 AUROC in patient-based diagnosis, significantly outperforming four state-of-the-art baseline architectures (U-SEResNet, UNet++, nnU-Net, Attention U-Net) from the recent literature. For 296 external biopsy-confirmed testing scans, the ensembled CAD system shows moderate agreement with a consensus of expert radiologists (76.69%; kappa = 0.51±0.04) and independent pathologists (81.08%; kappa = 0.56±0.06), demonstrating strong generalization to histologically-confirmed csPCa diagnosis.
Affiliation(s)
- Anindo Saha
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen 6525 GA, the Netherlands.
| | - Matin Hosseinzadeh
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen 6525 GA, the Netherlands
| | - Henkjan Huisman
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen 6525 GA, the Netherlands
| |
|
198
|
Paliwal M, Weber KA, Smith AC, Elliott JM, Muhammad F, Dahdaleh NS, Bodurka J, Dhaher Y, Parrish TB, Mackey S, Smith ZA. Fatty infiltration in cervical flexors and extensors in patients with degenerative cervical myelopathy using a multi-muscle segmentation model. PLoS One 2021; 16:e0253863. [PMID: 34170961 PMCID: PMC8232539 DOI: 10.1371/journal.pone.0253863] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Accepted: 06/14/2021] [Indexed: 12/27/2022] Open
Abstract
Background In patients with degenerative cervical myelopathy (DCM) who have spinal cord compression and sensorimotor deficits, surgical decompression is often performed. However, there is heterogeneity in clinical presentation and post-surgical functional recovery. Objectives Primary: a) to assess differences in muscle fat infiltration (MFI) in patients with DCM versus controls, b) to assess the association between MFI and clinical disability. Secondary: to assess the association between MFI pre-surgery and post-surgical functional recovery. Study design Cross-sectional case-control study. Methods Eighteen patients with DCM (58.6 ± 14.2 years, 10 M/8 F) and 25 controls (52.6 ± 11.8 years, 13 M/12 F) underwent 3D Dixon fat-water imaging. A convolutional neural network (CNN) was used to segment cervical muscles (MFSS: multifidus and semispinalis cervicis; LC: longus capitis/colli) and quantify MFI. Modified Japanese Orthopedic Association (mJOA) and Nurick scores were collected. Results Patients with DCM had significantly higher MFI in MFSS (20.63 ± 5.43 vs 17.04 ± 5.24, p = 0.043) and LC (18.74 ± 6.7 vs 13.66 ± 4.91, p = 0.021) than controls. Patients with increased MFI in LC and MFSS had higher disability (LC: Nurick (Spearman’s ρ = 0.436, p = 0.003) and mJOA (ρ = -0.399, p = 0.008)). Increased MFI in LC pre-surgery was associated with post-surgical improvement in Nurick (ρ = -0.664, p = 0.026) and mJOA (ρ = -0.603, p = 0.049). Conclusion In DCM, increased muscle adiposity is significantly associated with sensorimotor deficits, clinical disability, and functional recovery after surgery. Accurate and time-efficient evaluation of fat infiltration in cervical muscles may be conducted through implementation of CNN models.
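Spearman rank correlations of the kind reported above can be reproduced with scipy.stats; the values in the snippet below are hypothetical placeholders, not study data.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative only: association between per-patient MFI (%) and mJOA score.
mfi = np.array([18.7, 22.1, 15.3, 24.8, 20.2, 17.9])   # hypothetical values
mjoa = np.array([15, 12, 17, 10, 13, 16])               # hypothetical scores
rho, p = spearmanr(mfi, mjoa)
print(f"Spearman's rho = {rho:.3f}, p = {p:.3f}")
```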
Affiliation(s)
- Monica Paliwal
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, United States of America
- * E-mail:
| | - Kenneth A. Weber
- Department of Anesthesiology, Systems Neuroscience and Pain Laboratory, Perioperative and Pain Medicine, Stanford University, Palo Alto, California, United States of America
| | - Andrew C. Smith
- Department of Physical Medicine and Rehabilitation, School of Medicine, Physical Therapy Program, Aurora, Colorado, United States of America
| | - James M. Elliott
- Department of Physical Therapy and Human Movement Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States of America
- Faculty of Medicine and Health, University of Sydney, Kolling Institute of Medical Research, St. Leonards, New South Wales, Australia
| | - Fauziyya Muhammad
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, United States of America
| | - Nader S. Dahdaleh
- Department of Neurological Surgery, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States of America
| | - Jerzy Bodurka
- Laureate Institute for Brain Research, Tulsa, Oklahoma, United States of America
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, Oklahoma, United States of America
| | - Yasin Dhaher
- Department of Physical Medicine and Rehabilitation, University of Texas Southwestern Medical Center, Dallas, Texas, United States of America
| | - Todd B. Parrish
- Department of Radiology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States of America
| | - Sean Mackey
- Department of Anesthesiology, Systems Neuroscience and Pain Laboratory, Perioperative and Pain Medicine, Stanford University, Palo Alto, California, United States of America
| | - Zachary A. Smith
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, United States of America
| |
|
199
|
Enriquez JS, Chu Y, Pudakalakatti S, Hsieh KL, Salmon D, Dutta P, Millward NZ, Lurie E, Millward S, McAllister F, Maitra A, Sen S, Killary A, Zhang J, Jiang X, Bhattacharya PK, Shams S. Hyperpolarized Magnetic Resonance and Artificial Intelligence: Frontiers of Imaging in Pancreatic Cancer. JMIR Med Inform 2021; 9:e26601. [PMID: 34137725 PMCID: PMC8277399 DOI: 10.2196/26601] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Revised: 02/24/2021] [Accepted: 04/03/2021] [Indexed: 12/24/2022] Open
Abstract
BACKGROUND There is an unmet need for noninvasive imaging markers that can help identify the aggressive subtype(s) of pancreatic ductal adenocarcinoma (PDAC) at diagnosis and at an earlier time point, and evaluate the efficacy of therapy prior to tumor reduction. In the past few years, there have been two major developments with potential for a significant impact in establishing imaging biomarkers for PDAC and pancreatic cancer premalignancy: (1) hyperpolarized metabolic (HP)-magnetic resonance (MR), which increases the sensitivity of conventional MR by over 10,000-fold, enabling real-time metabolic measurements; and (2) applications of artificial intelligence (AI). OBJECTIVE The objective of this review was to discuss these two exciting but independent developments (HP-MR and AI) in the realm of PDAC imaging and detection from the available literature to date. METHODS A systematic review following the PRISMA extension for Scoping Reviews (PRISMA-ScR) guidelines was performed. Studies addressing the utilization of HP-MR and/or AI for early detection, assessment of aggressiveness, and interrogating the early efficacy of therapy in patients with PDAC cited in recent clinical guidelines were extracted from the PubMed and Google Scholar databases. The studies were reviewed following predefined exclusion and inclusion criteria, and grouped based on the utilization of HP-MR and/or AI in PDAC diagnosis. RESULTS Part of the goal of this review was to highlight the knowledge gap in early detection of pancreatic cancer by any imaging modality, and to emphasize how AI and HP-MR can address this critical gap. We reviewed every paper published on HP-MR applications in PDAC, including six preclinical studies and one clinical trial. We also reviewed several HP-MR-related articles describing new probes with many functional applications in PDAC. On the AI side, we reviewed all existing papers that met our inclusion criteria on AI applications for evaluating computed tomography (CT) and MR images in PDAC. With the emergence of AI and its unique capability to learn across multimodal data, along with sensitive metabolic imaging using HP-MR, this knowledge gap in PDAC can be adequately addressed. CT is an accessible and widespread imaging modality worldwide as it is affordable; for this reason alone, most of the data discussed are based on CT imaging datasets. Although there were relatively few MR-related papers included in this review, we believe that with the rapid adoption of MR imaging and HP-MR, more clinical data on pancreatic cancer imaging will be available in the near future. CONCLUSIONS Integration of AI, HP-MR, and multimodal imaging information in pancreatic cancer may lead to the development of real-time biomarkers of early detection, assessment of aggressiveness, and interrogation of the early efficacy of therapy in PDAC.
Affiliation(s)
- José S Enriquez: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Yan Chu: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
- Shivanand Pudakalakatti: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Kang Lin Hsieh: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
- Duncan Salmon: Department of Electrical and Computer Engineering, Rice University, Houston, TX, United States
- Prasanta Dutta: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Niki Zacharias Millward: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Urology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Eugene Lurie: Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Steven Millward: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Florencia McAllister: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Clinical Cancer Prevention, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Anirban Maitra: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Subrata Sen: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Ann Killary: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Jian Zhang: Division of Computer Science and Engineering, Louisiana State University, Baton Rouge, LA, United States
- Xiaoqian Jiang: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
- Pratip K Bhattacharya: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Shayan Shams: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
200
Baxter JSH, Bui QA, Maguet E, Croci S, Delmas A, Lefaucheur JP, Bredoux L, Jannin P. Automatic cortical target point localisation in MRI for transcranial magnetic stimulation via a multi-resolution convolutional neural network. Int J Comput Assist Radiol Surg 2021; 16:1077-1087. PMID: 34089439; DOI: 10.1007/s11548-021-02386-1
Abstract
PURPOSE Transcranial magnetic stimulation (TMS) is a growing therapy for a variety of psychiatric and neurological disorders that arise from, or are modulated by, cortical regions of the brain represented as single 3D target points. These target points are often determined manually with assistance from a pre-operative T1-weighted MRI, although there is growing interest in automatic target point localisation using an atlas. However, both approaches can be time-consuming, which affects the clinical workflow, and the latter does not account for patient variability, such as the varying number of cortical gyri where these targets are located. METHODS This paper proposes a multi-resolution convolutional neural network that localises a priori defined target points in increasingly finely resolved versions of the input MR image. The approach is both fast and highly memory efficient, allowing it to run in high-throughput centres, and it can distinguish between patients with high levels of anatomical variability. RESULTS Preliminary experiments found the accuracy of this network to be [Formula: see text] mm, compared to [Formula: see text] mm for deformable registration and [Formula: see text] mm for a human expert. For most treatment points, the human expert and the proposed CNN statistically significantly outperform registration, but neither statistically significantly outperforms the other, suggesting that the proposed network achieves human-level performance. CONCLUSIONS The human-level performance of this network indicates that it can improve TMS planning by automatically localising target points in seconds, avoiding more time-consuming registration or manual point localisation. This is particularly beneficial for out-of-hospital centres with limited computational resources, where TMS is increasingly being administered.
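The coarse-to-fine idea in METHODS can be sketched as follows. Everything below is an assumption for illustration, not the authors' exact network: one small 3D CNN per resolution level regresses an offset that refines the current estimate of the target point, with coarse levels capturing gross anatomy and fine levels resolving the target gyrus.

```python
# Hedged sketch of multi-resolution point localisation (architecture, level
# count, and names are illustrative assumptions, not the published network).
import torch
import torch.nn as nn
import torch.nn.functional as F

class OffsetRegressor(nn.Module):
    """Maps a 3D volume to a 3-vector offset, in that level's voxel units."""
    def __init__(self, channels: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(channels, 3)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(volume).flatten(1))

def localise(volume: torch.Tensor, regressors: list) -> torch.Tensor:
    """Coarse-to-fine refinement of one target point (finest-voxel coords)."""
    # Start from the volume centre; units are finest-resolution voxels.
    point = torch.tensor(volume.shape[2:], dtype=torch.float32) / 2
    levels = len(regressors)
    for level, net in enumerate(regressors):
        scale = 2 ** (levels - 1 - level)  # coarsest level processed first
        down = F.avg_pool3d(volume, kernel_size=scale) if scale > 1 else volume
        offset = net(down).squeeze(0)      # offset in this level's voxels
        point = point + offset * scale     # convert to finest-voxel units
    return point

# Untrained per-level networks on a placeholder volume: demonstrates the data
# flow only; a real system would train each regressor on annotated MR volumes.
nets = [OffsetRegressor() for _ in range(3)]
vol = torch.randn(1, 1, 64, 64, 64)  # stand-in for a T1-weighted MR volume
print(localise(vol, nets))
```

The memory efficiency claimed in the abstract follows naturally from this design: only one downsampled copy of the volume is processed at a time, and each level's network is tiny compared to a full-resolution segmentation model.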
Affiliation(s)
- John S H Baxter: Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Université de Rennes 1, Rennes, France
- Quoc Anh Bui: Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Université de Rennes 1, Rennes, France
- Ehouarn Maguet: Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Université de Rennes 1, Rennes, France
- Jean-Pascal Lefaucheur: ENT Team, EA4391, Faculty of Medicine, Paris Est Créteil University, Créteil, France; Clinical Neurophysiology Unit, Department of Physiology, Henri Mondor Hospital, Assistance Publique - Hôpitaux de Paris, Créteil, France
- Pierre Jannin: Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Université de Rennes 1, Rennes, France