51. Artificial intelligence in oncology. Artif Intell Med 2021. DOI: 10.1016/b978-0-12-821259-2.00018-1

52. A 3D-2D Hybrid U-Net Convolutional Neural Network Approach to Prostate Organ Segmentation of Multiparametric MRI. AJR Am J Roentgenol 2020; 216:111-116. PMID: 32812797. DOI: 10.2214/ajr.19.22168
Abstract
OBJECTIVE Prostate cancer is the most commonly diagnosed cancer in men in the United States, with more than 200,000 new cases in 2018. Multiparametric MRI (mpMRI) is increasingly used for prostate cancer evaluation. Prostate organ segmentation is an essential step of surgical planning for prostate fusion biopsies. Deep learning convolutional neural networks (CNNs) are the predominant method of machine learning for medical image recognition. In this study, we describe an approach based on deep learning, a subset of artificial intelligence, for automatic localization and segmentation of the prostate on mpMRI. MATERIALS AND METHODS This retrospective study included patients who underwent prostate MRI and ultrasound-MRI fusion transrectal biopsy between September 2014 and December 2016. Axial T2-weighted images were manually segmented by two abdominal radiologists and served as the ground truth. These manually segmented images were used to train a customized hybrid 3D-2D U-Net CNN architecture in a fivefold cross-validation paradigm for neural network training and validation. The Dice score, a measure of overlap between manually segmented and automatically derived segmentations, and the Pearson linear correlation coefficient of prostate volume were used for statistical evaluation. RESULTS The CNN was trained on 299 MRI examinations (total number of MR images = 7774) of 287 patients. The customized hybrid 3D-2D U-Net had a mean Dice score of 0.898 (range, 0.890-0.908) and a Pearson correlation coefficient for prostate volume of 0.974. CONCLUSION A deep learning CNN can automatically segment the prostate organ from clinical MR images. Further studies should examine developing pattern recognition for lesion localization and quantification.
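Both evaluation metrics used here are easy to reproduce; below is a minimal sketch (with invented toy arrays, not study data) of the Dice score for binary masks and the Pearson correlation between automatic and manual volume estimates.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A n B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
truth = np.zeros((64, 64), dtype=bool); truth[22:42, 22:42] = True
print(dice_score(pred, truth))

# Pearson correlation between automatic and manual prostate volumes (toy data)
auto_ml = np.array([41.2, 55.0, 38.7, 62.4])
manual_ml = np.array([40.1, 56.3, 37.9, 60.8])
print(np.corrcoef(auto_ml, manual_ml)[0, 1])
```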
53. Losnegård A, Reisæter LAR, Halvorsen OJ, Jurek J, Assmus J, Arnes JB, Honoré A, Monssen JA, Andersen E, Haldorsen IS, Lundervold A, Beisland C. Magnetic resonance radiomics for prediction of extraprostatic extension in non-favorable intermediate- and high-risk prostate cancer patients. Acta Radiol 2020; 61:1570-1579. PMID: 32108505. DOI: 10.1177/0284185120905066
Abstract
BACKGROUND To investigate whether magnetic resonance (MR) radiomic features combined with machine learning may aid in predicting extraprostatic extension (EPE) in high- and non-favorable intermediate-risk patients with prostate cancer. PURPOSE To investigate the diagnostic performance of radiomics to detect EPE. MATERIAL AND METHODS MR radiomic features were extracted from 228 patients, of whom 86 were diagnosed with EPE, using prostate and lesion segmentations. Prediction models were built using Random Forest. Further, EPE was also predicted using a clinical nomogram and routine radiological interpretation, and diagnostic performance was assessed for the individual and combined models. RESULTS The MR radiomic model with features extracted from the manually delineated lesions performed best among the radiomic models, with an area under the curve (AUC) of 0.74. Radiology interpretation yielded an AUC of 0.75 and the clinical nomogram (MSKCC) an AUC of 0.67. A combination of the three prediction models gave the highest AUC of 0.79. CONCLUSION Radiomic analysis combined with radiology interpretation aids the MSKCC nomogram in predicting EPE in high- and non-favorable intermediate-risk patients.
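As a rough illustration of the pipeline this abstract describes, the sketch below fits a Random Forest on a synthetic radiomic feature matrix and reports the ROC AUC; all names and numbers are placeholders, not the study's data or settings.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(228, 50))        # 228 patients x 50 radiomic features (toy)
y = rng.integers(0, 2, size=228)      # 1 = EPE at pathology (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(auc)
```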
Affiliation(s)
- Are Losnegård: Department of Radiology, Haukeland University Hospital, Bergen, Norway; Department of Clinical Medicine, University of Bergen, Norway
- Lars A. R. Reisæter: Department of Radiology, Haukeland University Hospital, Bergen, Norway; Department of Clinical Medicine, University of Bergen, Norway
- Ole J. Halvorsen: Department of Pathology, Haukeland University Hospital, Bergen, Norway; Centre for Cancer Biomarkers CCBIO, Department of Clinical Medicine, University of Bergen, Norway
- Jakub Jurek: Institute of Electronics, Technical University of Lodz, Poland
- Jörg Assmus: Centre for Clinical Research, Haukeland University Hospital, Norway
- Jarle B. Arnes: Department of Pathology, Haukeland University Hospital, Bergen, Norway
- Alfred Honoré: Department of Urology, Haukeland University Hospital, Bergen, Norway
- Jan A. Monssen: Department of Radiology, Haukeland University Hospital, Bergen, Norway
- Erling Andersen: Department of Clinical Engineering, Haukeland University Hospital, Norway
- Ingfrid S. Haldorsen: Department of Radiology, Haukeland University Hospital, Bergen, Norway; Department of Clinical Medicine, University of Bergen, Norway
- Arvid Lundervold: Department of Radiology, Haukeland University Hospital, Bergen, Norway; Department of Biomedicine, University of Bergen, Norway
- Christian Beisland: Department of Clinical Medicine, University of Bergen, Norway; Department of Urology, Haukeland University Hospital, Bergen, Norway
54. Nie D, Shen D. Adversarial Confidence Learning for Medical Image Segmentation and Synthesis. Int J Comput Vis 2020; 128:2494-2513. PMID: 34149167. PMCID: PMC8211108. DOI: 10.1007/s11263-020-01321-2
Abstract
Generative adversarial networks (GAN) are widely used in medical image analysis tasks, such as medical image segmentation and synthesis. In these works, adversarial learning is directly applied to the original supervised segmentation (synthesis) networks. The usage of adversarial learning is effective in improving visual perception performance, since adversarial learning works as a realistic regularization for supervised generators. However, the quantitative performance often cannot improve as much as the qualitative performance, and it can even become worse in some cases. In this paper, we explore how we can take better advantage of adversarial learning in supervised segmentation (synthesis) models and propose an adversarial confidence learning framework to better model these problems. We analyze the roles of the discriminator in classic GANs and compare them with those in supervised adversarial systems. Based on this analysis, we propose adversarial confidence learning, i.e., besides the adversarial learning for emphasizing visual perception, we use the confidence information provided by the adversarial network to enhance the design of the supervised segmentation (synthesis) network. In particular, we propose using a fully convolutional adversarial network for confidence learning to provide voxel-wise and region-wise confidence information for the segmentation (synthesis) network. With these settings, we propose a difficulty-aware attention mechanism to properly handle hard samples or regions by taking structural information into consideration, so that we can better deal with the irregular distribution of medical data. Furthermore, we investigate the loss functions of various GANs and propose using the binary cross-entropy loss to train the proposed adversarial system, so that we can retain the unlimited modeling capacity of the discriminator. Experimental results on clinical and challenge datasets show that our proposed network can achieve state-of-the-art segmentation (synthesis) accuracy. Further analysis also indicates that adversarial confidence learning can improve both the visual perception and the quantitative performance.
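One way to picture the core idea, sketched below in PyTorch under my own simplifying assumptions (this is not the authors' code): a fully convolutional discriminator outputs a per-voxel confidence map, and low-confidence (hard) voxels receive extra weight in the supervised segmentation loss.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_seg_loss(seg_logits, target, confidence):
    """confidence in [0, 1] from the FCN discriminator: low -> hard voxel."""
    per_voxel = F.binary_cross_entropy_with_logits(
        seg_logits, target, reduction="none")
    weights = 1.0 + (1.0 - confidence)      # emphasize hard voxels/regions
    return (weights * per_voxel).mean()

seg_logits = torch.randn(2, 1, 16, 64, 64)           # toy 3D batch
target = torch.randint(0, 2, seg_logits.shape).float()
confidence = torch.rand_like(seg_logits)             # stand-in confidence map
loss = confidence_weighted_seg_loss(seg_logits, target, confidence)
```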
Affiliation(s)
- Dong Nie: Department of Computer Science and Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27514, USA
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27514, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea

55. Sartor H, Minarik D, Enqvist O, Ulén J, Wittrup A, Bjurberg M, Trägårdh E. Auto-segmentations by convolutional neural network in cervical and anorectal cancer with clinical structure sets as the ground truth. Clin Transl Radiat Oncol 2020; 25:37-45. PMID: 33005756. PMCID: PMC7519211. DOI: 10.1016/j.ctro.2020.09.004
Abstract
The network provided auto-segmentations with high overlap with ground truth volumes. Evaluation of femoral head/bladder auto-segmentations showed the highest overlap. Annotated structure sets from daily clinical practice are feasible as ground truth.
Background It is time-consuming for oncologists to delineate volumes for radiotherapy treatment in computed tomography (CT) images. Automatic delineation based on image processing exists, but with varied accuracy and moderate time savings. Using a convolutional neural network (CNN), delineation of volumes is faster and more accurate. We have used CTs with annotated structure sets to train and evaluate a CNN. Material and methods The CNN is a standard segmentation network modified to minimize memory usage. We used CTs and structure sets from 75 cervical cancers and 191 anorectal cancers receiving radiation therapy at Skåne University Hospital 2014-2018. Five structures were investigated: left/right femoral heads, bladder, bowel bag, and clinical target volume of lymph nodes (CTVNs). Dice score and mean surface distance (MSD, mm) were used to evaluate accuracy, and one oncologist qualitatively evaluated the auto-segmentations. Results Median Dice/MSD scores for anorectal cancer: 0.91-0.92/1.93-1.86 femoral heads, 0.94/2.07 bladder, and 0.83/6.80 bowel bag. Median Dice/MSD scores for cervical cancer: 0.93-0.94/1.42-1.49 femoral heads, 0.84/3.51 bladder, 0.88/5.80 bowel bag, and 0.82/3.89 CTVNs. In the qualitative evaluation, performance on femoral head and bladder auto-segmentations was mostly excellent, but a larger share of the CTVN auto-segmentations was not acceptable. Discussion It is possible to train a CNN with high overlap using structure sets as ground truth. Manually delineated pelvic volumes from structure sets do not always strictly follow volume boundaries and are sometimes inaccurately defined, which leads to similar inaccuracies in the CNN output. More consistently annotated data are needed to achieve higher CNN accuracy and to enable future clinical implementation.
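Mean surface distance, the second metric quoted above, can be computed from distance transforms; here is a hedged sketch using SciPy (not the study's implementation), with voxel spacing in mm.

```python
import numpy as np
from scipy import ndimage

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a binary mask."""
    return mask & ~ndimage.binary_erosion(mask)

def mean_surface_distance(a, b, spacing=(3.0, 1.0, 1.0)):
    sa, sb = surface(a), surface(b)
    d_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    d_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    # symmetric average of surface-to-surface distances
    return 0.5 * (d_to_b[sa].mean() + d_to_a[sb].mean())

a = np.zeros((16, 64, 64), dtype=bool); a[4:12, 20:44, 20:44] = True
b = np.zeros_like(a); b[4:12, 22:46, 22:46] = True
print(mean_surface_distance(a, b))
```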
Affiliation(s)
- Hanna Sartor: Diagnostic Radiology, Department of Translational Medicine, Lund University, Skåne University Hospital, Lund, Sweden
- David Minarik: Radiation Physics, Department of Translational Medicine, Lund University, Skåne University Hospital, Malmö, Sweden
- Anders Wittrup: Department of Hematology, Oncology and Radiation Physics, Skåne University Hospital and Department of Clinical Sciences, Lund University, Lund, Sweden; Wallenberg Centre for Molecular Medicine, Lund University, Lund, Sweden
- Maria Bjurberg: Department of Hematology, Oncology and Radiation Physics, Skåne University Hospital and Department of Clinical Sciences, Lund University, Lund, Sweden
- Elin Trägårdh: Wallenberg Centre for Molecular Medicine, Lund University, Lund, Sweden; Department of Clinical Physiology and Nuclear Medicine, Department of Translational Medicine, Lund University, Skåne University Hospital, Malmö, Sweden

56. Zavala-Romero O, Breto AL, Xu IR, Chang YCC, Gautney N, Dal Pra A, Abramowitz MC, Pollack A, Stoyanova R. Segmentation of prostate and prostate zones using deep learning: A multi-MRI vendor analysis. Strahlenther Onkol 2020; 196:932-942. PMID: 32221622. PMCID: PMC8418872. DOI: 10.1007/s00066-020-01607-x
Abstract
PURPOSE To develop a deep-learning-based segmentation algorithm for the prostate and its peripheral zone (PZ) that is reliable across multiple MRI vendors. METHODS This is a retrospective study. The dataset consisted of 550 MRIs (Siemens: 330, General Electric [GE]: 220) and was used to train and test the proposed network. A multistream 3D convolutional neural network was used for automatic segmentation of the prostate and its PZ using T2-weighted (T2-w) MRI. The prostate and PZ were manually contoured on the axial T2-w series, and the network uses the axial, coronal, and sagittal T2-w series as input. Preprocessing of the input data included bias correction, resampling, and image normalization. Six different models were trained, three for the prostate and three for the PZ: for each structure, two models were trained on data from a single vendor and a third (Combined) on the aggregate of both datasets. The Dice coefficient (DSC) was used to compare the manual and predicted segmentations. RESULTS For prostate segmentation, the Combined model obtained DSCs of 0.893 ± 0.036 and 0.825 ± 0.112 (mean ± standard deviation) on Siemens and GE data, respectively. For the PZ, the best DSCs were also from the Combined model: 0.811 ± 0.079 and 0.788 ± 0.093. While the Siemens model underperformed on the GE dataset and vice versa, the Combined model achieved robust performance on both datasets. CONCLUSION The proposed network has a performance comparable to the interexpert variability for segmenting the prostate and its PZ. Combining images from different MRI vendors in the training of the network is of paramount importance for building a universal model for prostate and PZ segmentation.
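The preprocessing chain named in the methods (bias correction, resampling, normalization) can be sketched with SimpleITK as below; the filter settings and the isotropic target spacing are illustrative assumptions, not the authors' parameters.

```python
import SimpleITK as sitk

def preprocess(path: str, new_spacing=(1.0, 1.0, 1.0)):
    img = sitk.ReadImage(path, sitk.sitkFloat32)
    img = sitk.N4BiasFieldCorrectionImageFilter().Execute(img)  # bias correction
    size = [int(round(sz * sp / nsp)) for sz, sp, nsp
            in zip(img.GetSize(), img.GetSpacing(), new_spacing)]
    img = sitk.Resample(img, size, sitk.Transform(), sitk.sitkLinear,
                        img.GetOrigin(), new_spacing, img.GetDirection(),
                        0.0, sitk.sitkFloat32)                  # resampling
    arr = sitk.GetArrayFromImage(img)
    return (arr - arr.mean()) / (arr.std() + 1e-8)              # z-score normalization
```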
Affiliation(s)
- Olmo Zavala-Romero, Adrian L Breto, Isaac R Xu, Nicole Gautney, Alan Dal Pra, Matthew C Abramowitz, Alan Pollack, Radka Stoyanova: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
57.

58.
Abstract
Automatic and accurate prostate segmentation is an essential prerequisite for assisting diagnosis and treatment, such as guiding biopsy procedures and radiation therapy. Therefore, this paper proposes a cascaded dual attention network (CDA-Net) for automatic prostate segmentation in MRI scans. The network includes two stages, RAS-FasterRCNN and RAU-Net. Firstly, RAS-FasterRCNN uses an improved FasterRCNN and sequence correlation processing to extract regions of interest (ROIs) of organs. This ROI extraction serves as a hard attention mechanism that focuses the segmentation of the subsequent network on a certain area. Secondly, the addition of a residual convolution block and a self-attention mechanism in RAU-Net enables the network to gradually focus on the area where the organ exists while making full use of multiscale features. The algorithm was evaluated on the PROMISE12 and ASPS13 datasets and achieves Dice similarity coefficients of 92.88% and 92.65%, respectively, surpassing state-of-the-art algorithms. In a variety of complex slice images, especially at the base and apex of slice sequences, the algorithm also achieved credible segmentation performance.
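The "hard attention" step described here is essentially detect-then-crop; the toy sketch below (names and margins are mine, not CDA-Net's) shows the hand-off from a detector box to a second-stage segmentation network.

```python
import torch

def crop_to_roi(volume, box, margin=8):
    """volume: (C, D, H, W); box: (z0, y0, x0, z1, y1, x1) from the detector."""
    z0, y0, x0, z1, y1, x1 = box
    return volume[:,
                  max(z0 - margin, 0):z1 + margin,
                  max(y0 - margin, 0):y1 + margin,
                  max(x0 - margin, 0):x1 + margin]

volume = torch.randn(1, 32, 256, 256)                 # toy MRI volume
roi = crop_to_roi(volume, (8, 60, 60, 24, 200, 200))  # box from stage one
# segmentation = rau_net(roi.unsqueeze(0))            # hypothetical stage-two net
```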
59.

60. Cem Birbiri U, Hamidinekoo A, Grall A, Malcolm P, Zwiggelaar R. Investigating the Performance of Generative Adversarial Networks for Prostate Tissue Detection and Segmentation. J Imaging 2020; 6:83. PMID: 34460740. PMCID: PMC8321056. DOI: 10.3390/jimaging6090083
Abstract
The manual delineation of regions of interest (RoIs) in 3D magnetic resonance imaging (MRI) of the prostate is time-consuming and subjective. Correct identification of prostate tissue helps to define a precise RoI to be used in CAD systems in clinical practice during diagnostic imaging, radiotherapy, and monitoring of disease progression. Conditional GAN (cGAN), cycleGAN, and U-Net models were studied for the detection and segmentation of prostate tissue in 3D multi-parametric MRI scans. These models were trained and evaluated on MRI data from 40 patients with biopsy-proven prostate cancer. Due to the limited amount of available training data, three augmentation schemes were proposed to artificially increase the training samples. The models were tested on a clinical dataset annotated for this study and on the public PROMISE12 dataset. The cGAN model outperformed the U-Net and cycleGAN predictions owing to its inclusion of paired image supervision. Based on our quantitative results, cGAN achieved Dice scores of 0.78 and 0.75 on the private and the PROMISE12 public datasets, respectively.
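The paired-supervision advantage attributed to cGAN can be seen in a pix2pix-style generator objective; the sketch below is a generic formulation under my own assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits, fake_mask, true_mask, lam=100.0):
    """Adversarial term plus a supervised term against the paired mask."""
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))  # fool the critic
    sup = F.l1_loss(fake_mask, true_mask)                     # paired supervision
    return adv + lam * sup

disc_fake_logits = torch.randn(4, 1)        # toy discriminator outputs
fake_mask = torch.rand(4, 1, 64, 64)        # generator output
true_mask = torch.randint(0, 2, (4, 1, 64, 64)).float()
loss = generator_loss(disc_fake_logits, fake_mask, true_mask)
```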
Affiliation(s)
- Ufuk Cem Birbiri: Department of Computer Engineering, Middle East Technical University, Ankara 06800, Turkey
- Azam Hamidinekoo: Division of Molecular Pathology, Institute of Cancer Research (ICR), London SM2 5NG, UK
- Paul Malcolm: Department of Radiology, Norfolk & Norwich University Hospital, Norwich NR4 7UY, UK
- Reyer Zwiggelaar (corresponding author): Department of Computer Science, Aberystwyth University, Aberystwyth SY23 3DB, UK

61. Zhou Z, Wang K, Folkert M, Liu H, Jiang S, Sher D, Wang J. Multifaceted radiomics for distant metastasis prediction in head & neck cancer. Phys Med Biol 2020; 65:155009. PMID: 32294632. DOI: 10.1088/1361-6560/ab8956
Abstract
Accurately predicting distant metastasis in head & neck cancer has the potential to improve patient survival by allowing early treatment intensification with systemic therapy for high-risk patients. By extracting large amounts of quantitative features and mining them, radiomics has achieved success in predicting treatment outcomes for various diseases. However, there are several challenges associated with conventional radiomic approaches, including: (1) how to optimally combine information extracted from multiple modalities; (2) how to construct models emphasizing different objectives for different clinical applications; and (3) how to utilize and fuse output obtained by multiple classifiers. To overcome these challenges, we propose a unified model termed multifaceted radiomics (M-radiomics). In M-radiomics, deep learning with a stacked sparse autoencoder is first utilized to fuse features extracted from different modalities into one representation feature set. A multi-objective optimization model is then introduced into M-radiomics, where probability-based objective functions are designed to maximize the similarity between the probability output and the true label vector. Finally, M-radiomics employs multiple base classifiers to obtain a diverse Pareto-optimal model set and then fuses the output probabilities of all the Pareto-optimal models through an evidential reasoning rule fusion (ERRF) strategy in the testing stage to obtain the final output probability. Experimental results show that M-radiomics with the stacked autoencoder outperforms the model without the autoencoder. M-radiomics obtained more accurate results with a better balance between sensitivity and specificity than other single-objective or single-classifier-based models.
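As a rough sketch of the feature-fusion stage (layer sizes and feature counts are invented for illustration, and sparsity penalties are omitted), an autoencoder-style module can compress concatenated multi-modality radiomic features into one representation:

```python
import torch
import torch.nn as nn

class FusionAutoencoder(nn.Module):
    def __init__(self, in_dim=200, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))

    def forward(self, x):
        code = self.encoder(x)       # fused representation feature set
        recon = self.decoder(code)   # reconstruction used for training
        return recon, code

pet_feats = torch.randn(16, 120)     # toy PET radiomic features
ct_feats = torch.randn(16, 80)       # toy CT radiomic features
recon, fused = FusionAutoencoder()(torch.cat([pet_feats, ct_feats], dim=1))
```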
Affiliation(s)
- Zhiguo Zhou: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, USA; School of Computer Science and Mathematics, University of Central Missouri, Warrensburg, MO, USA

62. Yang W, Shi Y, Park SH, Yang M, Gao Y, Shen D. An Effective MR-Guided CT Network Training for Segmenting Prostate in CT Images. IEEE J Biomed Health Inform 2020; 24:2278-2291. DOI: 10.1109/jbhi.2019.2960153
63. Zhang J, Wei K, Deng X. Heuristic algorithms for diversity-aware balanced multi-way number partitioning. Pattern Recognit Lett 2020. DOI: 10.1016/j.patrec.2020.05.022

64. Khan Z, Yahya N, Alsaih K, Ali SSA, Meriaudeau F. Evaluation of Deep Neural Networks for Semantic Segmentation of Prostate in T2W MRI. Sensors 2020; 20:3183. PMID: 32503330. PMCID: PMC7309110. DOI: 10.3390/s20113183
Abstract
In this paper, we present an evaluation of four encoder-decoder CNNs for segmentation of the prostate gland in T2W magnetic resonance imaging (MRI) images. The four selected CNNs are FCN, SegNet, U-Net, and DeepLabV3+, which were originally proposed for the segmentation of road-scene, biomedical, and natural images. Segmentation of the prostate in T2W MRI images is an important step in the automatic diagnosis of prostate cancer, enabling better lesion detection and staging of prostate cancer. Therefore, many research efforts have been conducted to improve the segmentation of the prostate gland in MRI images. The main challenges of prostate gland segmentation are the blurry prostate boundary and the variability in prostate anatomical structure. In this work, we investigated the performance of encoder-decoder CNNs for segmentation of the prostate gland in T2W MRI. Image pre-processing techniques including image resizing, center-cropping, and intensity normalization are applied to address the issues of inter-patient and inter-scanner variability as well as the issue of background pixels dominating over prostate pixels. In addition, to enrich the network with more data, increase data variation, and improve its accuracy, patch extraction and data augmentation are applied prior to training the networks. Furthermore, class weight balancing is used to avoid biased networks, since the number of background pixels is much higher than the number of prostate pixels. The class imbalance problem is solved by utilizing a weighted cross-entropy loss function during training of the CNN model. The performance of the CNNs is evaluated in terms of the Dice similarity coefficient (DSC), and our experimental results show that patch-wise DeepLabV3+ gives the best performance, with a DSC equal to 92.8%. This is the highest DSC score compared with the FCN, SegNet, and U-Net, and it also surpasses a recently published state-of-the-art method of prostate segmentation.
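The weighted cross-entropy remedy for class imbalance mentioned above is a one-liner in PyTorch; the weights below are illustrative, not the paper's values.

```python
import torch
import torch.nn as nn

# weight[c] multiplies the loss of class c: index 0 = background, 1 = prostate
class_weights = torch.tensor([0.1, 0.9])
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(4, 2, 128, 128)            # (batch, classes, H, W)
labels = torch.randint(0, 2, (4, 128, 128))     # per-pixel class indices
loss = criterion(logits, labels)
```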
Affiliation(s)
- Zia Khan, Norashikin Yahya (corresponding author), Khaled Alsaih, Syed Saad Azhar Ali: Centre for Intelligent Signal and Imaging Research (CISIR), Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia

65. Liu QP, Xu X, Zhu FP, Zhang YD, Liu XS. Prediction of prognostic risk factors in hepatocellular carcinoma with transarterial chemoembolization using multi-modal multi-task deep learning. EClinicalMedicine 2020; 23:100379. PMID: 32548574. PMCID: PMC7284069. DOI: 10.1016/j.eclinm.2020.100379
Abstract
BACKGROUND Due to the heterogeneity of hepatocellular carcinoma (HCC), outcome assessment of HCC with transarterial chemoembolization (TACE) is challenging. METHODS We built histologic-related scores to determine microvascular invasion (MVI) and Edmondson-Steiner grade by training CT radiomics features using machine learning classifiers in a cohort of 494 HCCs with hepatic resection. Meanwhile, we developed a deep learning (DL)-score for disease-specific survival by training CT imaging using DL networks in a cohort of 243 HCCs with TACE. Then, the three newly built imaging hallmarks together with clinicoradiologic factors were analyzed with a Cox proportional hazards (Cox-PH) model. FINDINGS In HCCs with hepatic resection, two imaging hallmarks resulted in areas under the curve (AUCs) of 0.79 (95% confidence interval [CI]: 0.71-0.85) and 0.72 (95% CI: 0.64-0.79) for predicting MVI and Edmondson-Steiner grade, respectively, using test data. In HCCs with TACE, higher DL-score (hazard ratio [HR]: 3.01; 95% CI: 2.02-4.50), American Joint Committee on Cancer (AJCC) stage III+IV (HR: 1.71; 95% CI: 1.12-2.61), Response Evaluation Criteria in Solid Tumors (RECIST) with stable disease + progressive disease (HR: 2.72; 95% CI: 1.84-4.01), and TACE-course > 3 (HR: 0.65; 95% CI: 0.45-0.76) were independent prognostic factors. Using these factors via a Cox-PH model resulted in a concordance index of 0.73 (95% CI: 0.71-0.76) for predicting overall survival and AUCs of 0.85 (95% CI: 0.81-0.89), 0.90 (95% CI: 0.86-0.94), and 0.89 (95% CI: 0.84-0.92), respectively, for predicting 3-year, 5-year, and 10-year survival. INTERPRETATION Our study offers a DL-based, noninvasive imaging hallmark to predict the outcome of HCCs with TACE. FUNDING This work was supported by the key research and development program of Jiangsu Province (Grant number: BE2017756).
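The survival-analysis step can be sketched with the lifelines package as below; the toy dataframe and column names are invented for illustration and do not reflect the study's variables or data.

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "dl_score":   [0.2, 0.8, 0.5, 0.9, 0.1],   # hypothetical DL-score
    "ajcc_stage": [0, 1, 0, 1, 0],             # 1 = stage III+IV
    "months":     [34.0, 10.5, 22.0, 6.0, 40.0],
    "death":      [0, 1, 1, 1, 0],
})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
print(cph.concordance_index_)   # discrimination, as in the abstract's c-index
```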
Affiliation(s)
- Yu-Dong Zhang (corresponding author): Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, P.R. China
- Xi-Sheng Liu: Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, No. 300, Guangzhou Road, Nanjing, Jiangsu Province, China, 210029

66. Choi HJ, Jang JW, Shin WG, Park H, Incerti S, Min CH. Development of integrated prompt gamma imaging and positron emission tomography system for in vivo 3-D dose verification: a Monte Carlo study. Phys Med Biol 2020; 65:105005. PMID: 32235068. DOI: 10.1088/1361-6560/ab857c
Abstract
An accurate knowledge of the in vivo proton dose distribution is key to fully utilizing the potential advantages of proton therapy. Two representative indirect methods for in vivo range verification, namely prompt gamma (PG) imaging and positron emission tomography (PET), are available. This study proposes a PG-PET system that combines the advantages of these two methods and presents detector geometry and background reduction techniques optimized for the PG-PET system. The characteristics of the secondary radiations emitted by a water phantom irradiated with a 150 MeV proton beam were analysed using Geant4.10.00, and the 2-D PG distributions were obtained and assessed for different detector geometries. In addition, the energy window (EW), depth-of-interaction (DOI), and time-of-flight (TOF) techniques are proposed as background reduction techniques. To evaluate the performance of the PG-PET system, the 3-D dose distribution in the water phantom caused by two proton beams of energies 80 MeV and 100 MeV was verified using 16 optimal detectors. The thickness of the parallel-hole tungsten collimator of pitch 8 mm and width 7 mm was determined as 200 mm, and that of the GAGG scintillator was determined as 30 mm, by an optimization study. Further, 3-7 MeV and 2-7 MeV were obtained as the optimal EWs when the DOI technique alone and both the DOI and TOF techniques were applied for data processing, respectively; the detector performances improved by about 38% and 167%, respectively, compared with applying only the 3-5 MeV EW. In this study, we confirmed that the PG distribution can be obtained by simply combining a 2-D parallel-hole collimator and the PET detector module. In the future, we will develop an accurate 3-D dose evaluation technique using deep learning algorithms based on image sets of dose, PG, and PET distributions for various proton energies.
Affiliation(s)
- Hyun Joon Choi: Department of Radiation Convergence Engineering, Yonsei University, Wonju 26493, Republic of Korea

67. Seo H, Huang C, Bassenne M, Xiao R, Xing L. Modified U-Net (mU-Net) With Incorporation of Object-Dependent High Level Features for Improved Liver and Liver-Tumor Segmentation in CT Images. IEEE Trans Med Imaging 2020; 39:1316-1325. PMID: 31634827. PMCID: PMC8095064. DOI: 10.1109/tmi.2019.2948320
Abstract
Segmentation of livers and liver tumors is one of the most important steps in radiation therapy of hepatocellular carcinoma. The segmentation task is often done manually, making it tedious, labor intensive, and subject to intra-/inter-operator variations. While various algorithms for delineating organs-at-risk (OARs) and tumor targets have been proposed, automatic segmentation of livers and liver tumors remains intractable due to their low tissue contrast with respect to the surrounding organs and their deformable shape in CT images. The U-Net has gained increasing popularity recently for image analysis tasks and has shown promising results. Conventional U-Net architectures, however, suffer from three major drawbacks. First, skip connections allow for the duplicated transfer of low-resolution information in feature maps to improve efficiency in learning, but this often leads to blurring of extracted image features. Second, high-level features extracted by the network often do not contain enough high-resolution edge information of the input, leading to greater uncertainty where high-resolution edges dominantly affect the network's decisions, such as in liver and liver-tumor segmentation. Third, it is generally difficult to optimize the number of pooling operations in order to extract high-level global features, since the number of pooling operations used depends on the object size. To cope with these problems, we added a residual path with deconvolution and activation operations to the skip connection of the U-Net to avoid duplication of low-resolution information of features. In the case of small object inputs, features in the skip connection are not incorporated with features in the residual path. Furthermore, the proposed architecture has additional convolution layers in the skip connection in order to extract high-level global features of small object inputs as well as high-level features of high-resolution edge information of large object inputs. Efficacy of the modified U-Net (mU-Net) was demonstrated using the public dataset of the Liver Tumor Segmentation (LiTS) challenge 2017. For liver-tumor segmentation, a Dice similarity coefficient (DSC) of 89.72%, volume of error (VOE) of 21.93%, and relative volume difference (RVD) of -0.49% were obtained. For liver segmentation, a DSC of 98.51%, VOE of 3.07%, and RVD of 0.26% were calculated. For the public 3D Image Reconstruction for Comparison of Algorithm Database (3Dircadb), DSCs were 96.01% for the liver and 68.14% for liver-tumor segmentation, respectively. The proposed mU-Net outperformed existing state-of-the-art networks.
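A compact reading of the residual skip-connection idea, sketched in PyTorch under my own simplifying assumptions (this is not the published mU-Net code): a pooled-then-deconvolved branch estimates the low-resolution content of a feature map, and subtracting it leaves mainly high-resolution edge information to pass through the skip connection.

```python
import torch
import torch.nn as nn

class ResidualSkip(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.down = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(channels, channels, kernel_size=2, stride=2)
        self.act = nn.ReLU()

    def forward(self, feat):
        coarse = self.act(self.up(self.down(feat)))  # low-resolution content
        return feat - coarse                         # forward mainly edge detail

skip = ResidualSkip(64)
out = skip(torch.randn(1, 64, 128, 128))             # same shape as the input
```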
68. Akçay M, Etiz D. Machine Learning in Radiation Oncology [Radyasyon Onkolojisinde Makine Öğrenmesi]. Osmangazi Journal of Medicine 2020. DOI: 10.20515/otd.691331

69. Three-Dimensional Convolutional Neural Network for Prostate MRI Segmentation and Comparison of Prostate Volume Measurements by Use of Artificial Neural Network and Ellipsoid Formula. AJR Am J Roentgenol 2020; 214:1229-1238. PMID: 32208009. DOI: 10.2214/ajr.19.22254
Abstract
OBJECTIVE. The purposes of this study were to assess the performance of a 3D convolutional neural network (CNN) for automatic segmentation of the prostate on MR images and to compare the volume estimates from the 3D CNN with those of the ellipsoid formula. MATERIALS AND METHODS. The study included 330 MR image sets that were divided into 260 training sets and 70 test sets for automated segmentation of the entire prostate. Among these, 162 training sets and 50 test sets were used for transition zone segmentation. Assisted by manual segmentation by two radiologists, the following values were obtained: estimates of ground-truth volume (V_GT), software-derived volume (V_SW), the mean of V_GT and V_SW (V_AV), and the automatically generated volume from the 3D CNN (V_NET). These values were compared with the volume calculated with the ellipsoid formula (V_EL). RESULTS. The Dice similarity coefficient for the entire prostate was 87.12% and for the transition zone was 76.48%. There was no significant difference between V_NET and V_AV (p = 0.689) in the test sets of the entire prostate, whereas a significant difference was found between V_EL and V_AV (p < 0.001). No significant difference was found among the volume estimates in the test sets of the transition zone. Overall intraclass correlation coefficients between the volume estimates were excellent (0.887-0.995). In the test sets of the entire prostate, the mean error between V_GT and V_NET (2.5) was smaller than that between V_GT and V_EL (3.3). CONCLUSION. The fully automated network studied provides reliable volume estimates of the entire prostate compared with those obtained with the ellipsoid formula. Fast and accurate volume measurement by use of the 3D CNN may help clinicians evaluate prostate disease.
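For reference, the ellipsoid formula the CNN volumes are compared against is V = (pi/6) x length x width x height; a worked toy example:

```python
import math

def ellipsoid_volume(length_cm, width_cm, height_cm):
    """Prostate ellipsoid volume in mL (1 cm^3 = 1 mL)."""
    return math.pi / 6.0 * length_cm * width_cm * height_cm

v_el = ellipsoid_volume(5.0, 4.2, 3.8)   # ~41.8 mL for these toy measurements
print(v_el)
```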
70. Nelson CR, Ekberg J, Fridell K. Prostate Cancer Detection in Screening Using Magnetic Resonance Imaging and Artificial Intelligence. ACTA ACUST UNITED AC 2020. DOI: 10.2174/1874061802006010001
Abstract
Background:
Prostate cancer is a leading cause of death among men who do not participate in a screening programme. MRI is a possible alternative for prostate analysis with a higher level of sensitivity than the PSA test or biopsy. Magnetic resonance is a non-invasive method, and magnetic resonance tomography produces a large amount of data. If a screening programme were implemented, a dramatic increase in radiologist workload and patient waiting time would follow. Computer-Aided Diagnosis (CAD) could assist radiologists to decrease reading times and cost and increase diagnostic effectiveness. CAD mimics radiologists and imaging guidelines to detect prostate cancer.
Aim:
The purpose of this study was to analyse and describe current research in MRI prostate examination with the aid of CAD. The aim was to determine if CAD systems form a reliable method for use in prostate screening.
Methods:
This study was conducted as a systematic literature review of current scientific articles. Selection of articles was carried out using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Summaries were created from the reviewed articles and were then categorised into relevant data for the results.
Results:
CAD has shown that its sensitivity and specificity can be higher than those of a radiologist. A CAD system can reach a peak sensitivity of 100%, and two CAD systems showed a specificity of 100%. CAD systems are highly specialised and chiefly focus on the peripheral zone, which could mean missing cancer in the transition zone. CAD systems can segment the prostate with the same effectiveness as a radiologist.
Conclusion:
When CAD analysed clinically-significant tumours with a Gleason score greater than 6, CAD outperformed radiologists. However, their focus on the peripheral zone would require the use of more than one CAD system to analyse the entire prostate.
71. Shen C, Nguyen D, Zhou Z, Jiang SB, Dong B, Jia X. An introduction to deep learning in medical physics: advantages, potential, and challenges. Phys Med Biol 2020; 65:05TR01. PMID: 31972556. PMCID: PMC7101509. DOI: 10.1088/1361-6560/ab6f51
Abstract
As one of the most popular approaches in artificial intelligence, deep learning (DL) has attracted a lot of attention in the medical physics field over the past few years. The goals of this topical review article are twofold. First, we will provide an overview of the method to medical physics researchers interested in DL to help them start the endeavor. Second, we will give in-depth discussions on the DL technology to make researchers aware of its potential challenges and possible solutions. As such, we divide the article into two major parts. The first part introduces general concepts and principles of DL and summarizes major research resources, such as computational tools and databases. The second part discusses challenges faced by DL, presents available methods to mitigate some of these challenges, and gives our recommendations.
Affiliation(s)
- Chenyang Shen: Medical Artificial Intelligence and Automation (MAIA) Laboratory and Innovative Technology Of Radiotherapy Computation and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA

72. Zhu Q, Du B, Yan P. Boundary-Weighted Domain Adaptive Neural Network for Prostate MR Image Segmentation. IEEE Trans Med Imaging 2020; 39:753-763. PMID: 31425022. PMCID: PMC7015773. DOI: 10.1109/tmi.2019.2935018
Abstract
Accurate segmentation of the prostate from magnetic resonance (MR) images provides useful information for prostate cancer diagnosis and treatment. However, automated prostate segmentation from 3D MR images faces several challenges. The lack of a clear edge between the prostate and other anatomical structures makes it challenging to accurately extract the boundaries. The complex background texture and large variation in size, shape, and intensity distribution of the prostate itself make segmentation even further complicated. Recently, as deep learning, especially convolutional neural networks (CNNs), has emerged as the best-performing approach to medical image segmentation, the difficulty in obtaining large numbers of annotated medical images for training CNNs has become much more pronounced than ever. Since a large-scale dataset is one of the critical components for the success of deep learning, the lack of sufficient training data makes it difficult to fully train complex CNNs. To tackle the above challenges, in this paper, we propose a boundary-weighted domain adaptive neural network (BOWDA-Net). To make the network more sensitive to the boundaries during segmentation, a boundary-weighted segmentation loss is proposed. Furthermore, an advanced boundary-weighted transfer learning approach is introduced to address the problem of small medical imaging datasets. We evaluate our proposed model on three different MR prostate datasets. The experimental results demonstrate that the proposed model is more sensitive to object boundaries and outperforms other state-of-the-art methods.
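A boundary-weighted loss in the spirit of this description can be built from a distance transform; the sketch below is my own generic formulation (the weight shape and constants are assumptions), not the BOWDA-Net code.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy import ndimage

def boundary_weight_map(mask: np.ndarray, w0=2.0, sigma=5.0):
    """Weights decay with distance from the mask boundary."""
    dist = ndimage.distance_transform_edt(mask) + \
           ndimage.distance_transform_edt(1 - mask)
    return 1.0 + w0 * np.exp(-(dist ** 2) / (2 * sigma ** 2))

mask = np.zeros((128, 128), dtype=np.uint8); mask[40:90, 40:90] = 1
weights = torch.from_numpy(boundary_weight_map(mask)).float()
logits = torch.randn(1, 1, 128, 128)
target = torch.from_numpy(mask).float().view(1, 1, 128, 128)
per_pixel = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
loss = (weights * per_pixel).mean()   # boundary pixels count extra
```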
Affiliation(s)
- Qikui Zhu: School of Computer Science, Wuhan University, Wuhan, China
- Bo Du (co-corresponding author)
- Pingkun Yan (co-corresponding author)

73. Dai Z, Carver E, Liu C, Lee J, Feldman A, Zong W, Pantelic M, Elshaikh M, Wen N. Segmentation of the Prostatic Gland and the Intraprostatic Lesions on Multiparametic Magnetic Resonance Imaging Using Mask Region-Based Convolutional Neural Networks. Adv Radiat Oncol 2020; 5:473-481. PMID: 32529143. PMCID: PMC7280293. DOI: 10.1016/j.adro.2020.01.005
Abstract
Purpose Accurate delineation of the prostate gland and intraprostatic lesions (ILs) is essential for prostate cancer dose-escalated radiation therapy. The aim of this study was to develop a sophisticated deep neural network approach to magnetic resonance image analysis that will help IL detection and delineation for clinicians. Methods and Materials We trained and evaluated mask region-based convolutional neural networks (Mask R-CNNs) to perform prostate gland and IL segmentation. There were 2 cohorts in this study: 78 public patients (cohort 1) and 42 private patients from our institution (cohort 2). Prostate gland segmentation was performed using T2-weighted images (T2WIs), while IL segmentation was performed using T2WIs and coregistered apparent diffusion coefficient maps with prostate patches cropped out. The IL segmentation model was extended to select 5 highly suspicious volumetric lesions within the entire prostate. Results The Mask R-CNN model was able to segment the prostate with Dice similarity coefficients (DSC) of 0.88 ± 0.04, 0.86 ± 0.04, and 0.82 ± 0.05; sensitivity (Sens.) of 0.93, 0.95, and 0.95; and specificity (Spec.) of 0.98, 0.85, and 0.90. However, ILs were segmented with DSC of 0.62 ± 0.17, 0.59 ± 0.14, and 0.38 ± 0.19; Sens. of 0.55 ± 0.30, 0.63 ± 0.28, and 0.22 ± 0.24; and Spec. of 0.974 ± 0.010, 0.964 ± 0.015, and 0.972 ± 0.015 in public validation/public testing/private testing patients when trained with patients from cohort 1 only. When trained with patients from both cohorts, the values were as follows: DSC of 0.64 ± 0.11, 0.56 ± 0.15, and 0.46 ± 0.15; Sens. of 0.57 ± 0.23, 0.50 ± 0.28, and 0.33 ± 0.17; and Spec. of 0.980 ± 0.009, 0.969 ± 0.016, and 0.977 ± 0.013. Conclusions Our research framework is able to perform as an end-to-end system that automatically segments the prostate gland and identifies and delineates highly suspicious ILs within the entire prostate. Therefore, this system demonstrated the potential for assisting clinicians in tumor delineation.
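Mask R-CNN is available off the shelf in torchvision; the sketch below shows an inference hand-off to the top-5 lesion candidates described above, with num_classes=2 (background + lesion) and every setting assumed for illustration rather than taken from the paper.

```python
import torch
import torchvision

# untrained reference model; the paper's weights and training setup are not reproduced
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)
model.eval()

image = torch.rand(3, 256, 256)      # toy T2W slice as a 3-channel tensor
with torch.no_grad():
    pred = model([image])[0]         # dict with boxes, labels, scores, masks
top5 = pred["masks"][pred["scores"].argsort(descending=True)[:5]]
```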
Affiliation(s)
- Zhenzhen Dai, Chang Liu, Joon Lee, Aharon Feldman, Weiwei Zong, Mohamed Elshaikh, Ning Wen: Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan
- Eric Carver, Milan Pantelic: Department of Diagnostic Radiology, Henry Ford Health System, Detroit, Michigan

74. Jia H, Xia Y, Song Y, Zhang D, Huang H, Zhang Y, Cai W. 3D APA-Net: 3D Adversarial Pyramid Anisotropic Convolutional Network for Prostate Segmentation in MR Images. IEEE Trans Med Imaging 2020; 39:447-457. PMID: 31295109. DOI: 10.1109/tmi.2019.2928056
Abstract
Accurate and reliable segmentation of the prostate gland using magnetic resonance (MR) imaging has critical importance for the diagnosis and treatment of prostate diseases, especially prostate cancer. Although many automated segmentation approaches, including those based on deep learning, have been proposed, segmentation performance still has room for improvement due to the large variability in image appearance, imaging interference, and anisotropic spatial resolution. In this paper, we propose the 3D adversarial pyramid anisotropic convolutional deep neural network (3D APA-Net) for prostate segmentation in MR images. This model is composed of a generator (i.e., 3D PA-Net) that performs image segmentation and a discriminator (i.e., a six-layer convolutional neural network) that differentiates between a segmentation result and its corresponding ground truth. The 3D PA-Net has an encoder-decoder architecture, which consists of a 3D ResNet encoder, an anisotropic convolutional decoder, and multi-level pyramid convolutional skip connections. The anisotropic convolutional blocks can exploit the 3D context information of MR images with anisotropic resolution, the pyramid convolutional blocks address both voxel classification and gland localization issues, and the adversarial training regularizes 3D PA-Net and thus enables it to generate spatially consistent and continuous segmentation results. We evaluated the proposed 3D APA-Net against several state-of-the-art deep learning-based segmentation approaches on two public databases and the hybrid of the two. Our results suggest that the proposed model outperforms the compared approaches on all three databases and could be used in a routine clinical workflow.
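The anisotropic convolution idea (separate in-plane and through-plane kernels to respect thick-slice MR geometry) looks like this in PyTorch; the channel counts are illustrative:

```python
import torch
import torch.nn as nn

aniso_block = nn.Sequential(
    nn.Conv3d(32, 32, kernel_size=(1, 3, 3), padding=(0, 1, 1)),  # in-plane
    nn.ReLU(),
    nn.Conv3d(32, 32, kernel_size=(3, 1, 1), padding=(1, 0, 0)),  # through-plane
    nn.ReLU(),
)
# (N, C, D, H, W) with few, thick slices along D; output keeps the same shape
out = aniso_block(torch.randn(1, 32, 24, 128, 128))
```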
75. A Hybrid End-to-End Approach Integrating Conditional Random Fields into CNNs for Prostate Cancer Detection on MRI. Appl Sci (Basel) 2020. DOI: 10.3390/app10010338
Abstract
Prostate Cancer (PCa) is the most common oncological disease in Western men. Even though a growing effort has been carried out by the scientific community in recent years, accurate and reliable automated PCa detection methods on multiparametric Magnetic Resonance Imaging (mpMRI) are still a compelling issue. In this work, a Deep Neural Network architecture is developed for the task of classifying clinically significant PCa on non-contrast-enhanced MR images. In particular, we propose the use of Conditional Random Fields as a Recurrent Neural Network (CRF-RNN) to enhance the classification performance of XmasNet, a Convolutional Neural Network (CNN) architecture specifically tailored to the PROSTATEx17 Challenge. The devised approach builds a hybrid end-to-end trainable network, CRF-XmasNet, composed of an initial CNN component performing feature extraction and a CRF-based probabilistic graphical model component for structured prediction, without the need for two separate training procedures. Experimental results show the suitability of this method in terms of classification accuracy and training time, even though the high variability of the observed results must be reduced before transferring the resulting architecture to a clinical environment. Interestingly, the use of CRFs as a separate postprocessing method achieves significantly lower performance than the proposed hybrid end-to-end approach. The proposed hybrid end-to-end CRF-RNN approach yields excellent peak performance for all the CNN architectures taken into account, but it shows high variability, thus requiring future investigation on the integration of CRFs into CNNs.
76. CNN-Based Prostate Zonal Segmentation on T2-Weighted MR Images: A Cross-Dataset Study. In: Neural Approaches to Dynamics of Signal Exchanges, 2020. DOI: 10.1007/978-981-13-8950-4_25

77. Wang Y, Dou H, Hu X, Zhu L, Yang X, Xu M, Qin J, Heng PA, Wang T, Ni D. Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound. IEEE Trans Med Imaging 2019; 38:2768-2778. PMID: 31021793. DOI: 10.1109/tmi.2019.2913184
Abstract
Automatic prostate segmentation in transrectal ultrasound (TRUS) images is of essential importance for image-guided prostate interventions and treatment planning. However, developing such automatic solutions remains very challenging due to the missing/ambiguous boundary and inhomogeneous intensity distribution of the prostate in TRUS, as well as the large variability in prostate shapes. This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN). Our attention module utilizes the attention mechanism to selectively leverage the multi-level features integrated from different layers to refine the features at each individual layer, suppressing non-prostate noise at the shallow layers of the CNN and incorporating more prostate detail into the features at the deep layers. Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance. The proposed attention mechanism is a general strategy to aggregate multi-level deep features and has the potential to be used for other medical image segmentation tasks. The code is publicly available at https://github.com/wulalago/DAF3D.
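A single-layer caricature of such an attention module, written under my own assumptions rather than taken from the released code linked above: multi-level features gate the features of an individual layer.

```python
import torch
import torch.nn as nn

class AttentionRefine(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, layer_feat, multi_level_feat):
        attn = self.gate(torch.cat([layer_feat, multi_level_feat], dim=1))
        return layer_feat * attn          # suppress non-prostate responses

refine = AttentionRefine(16)
f = torch.randn(1, 16, 8, 32, 32)         # one layer's features (toy)
refined = refine(f, torch.randn_like(f))  # gated by aggregated features
```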
78. USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets. Neurocomputing 2019. DOI: 10.1016/j.neucom.2019.07.006

79. Alkadi R, Taher F, El-baz A, Werghi N. A Deep Learning-Based Approach for the Detection and Localization of Prostate Cancer in T2 Magnetic Resonance Images. J Digit Imaging 2019; 32:793-807. PMID: 30506124. PMCID: PMC6737129. DOI: 10.1007/s10278-018-0160-1
Abstract
We address the problem of prostate lesion detection, localization, and segmentation in T2W magnetic resonance (MR) images. We train a deep convolutional encoder-decoder architecture to simultaneously segment the prostate, its anatomical structure, and the malignant lesions. To incorporate the 3D contextual spatial information provided by the MRI series, we propose a novel 3D sliding window approach, which preserves the 2D domain complexity while exploiting 3D information. Experiments on data from 19 patients, made publicly available by the Initiative for Collaborative Computer Vision Benchmarking (I2CVB), show that our approach outperforms traditional pattern recognition and machine learning approaches by a significant margin. Particularly, for the task of cancer detection and localization, the system achieves an average AUC of 0.995, an accuracy of 0.894, and a recall of 0.928. The proposed mono-modal deep learning-based system performs comparably to other multi-modal MR-based systems. It could improve the performance of a radiologist in prostate cancer diagnosis and treatment planning.
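The 3D sliding-window idea (2D network complexity, 3D context) reduces to stacking each slice with its neighbours; a toy sketch, with the context size chosen arbitrarily:

```python
import numpy as np

def sliding_windows(volume: np.ndarray, context: int = 1):
    """Yield (center_index, stack of 2*context+1 slices) along axis 0."""
    depth = volume.shape[0]
    for z in range(context, depth - context):
        yield z, volume[z - context: z + context + 1]

volume = np.random.rand(20, 128, 128)     # toy T2W volume (D, H, W)
for z, window in sliding_windows(volume):
    pass                                  # feed `window` to a 2D-style network
```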
Affiliation(s)
- Ruba Alkadi, Fatma Taher, Naoufel Werghi: Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, United Arab Emirates
- Ayman El-baz: University of Louisville, Louisville, KY 40292, USA

80. Boldrini L, Bibault JE, Masciocchi C, Shen Y, Bittner MI. Deep Learning: A Review for the Radiation Oncologist. Front Oncol 2019; 9:977. PMID: 31632910. PMCID: PMC6779810. DOI: 10.3389/fonc.2019.00977
Abstract
Introduction: Deep Learning (DL) is a machine learning technique that uses deep neural networks to create a model. The application areas of deep learning in radiation oncology include image segmentation and detection, image phenotyping and radiomic signature discovery, clinical outcome prediction, image dose quantification, dose-response modeling, radiation adaptation, and image generation. In this review, we explain the methods used in DL and perform a literature review using the Medline database to identify studies using deep learning in radiation oncology. Methods: A literature review was performed using PubMed/Medline in order to identify important recent publications to be synthesized into a review of the current status of Deep Learning in radiation oncology, directed at a clinically oriented reader. The search strategy included the search terms "radiotherapy" and "deep learning." In addition, reference lists of selected articles were hand-searched for further potential hits of relevance to this review. The search was conducted in April 2018 and identified studies published between 1997 and 2018, strongly skewed toward 2015 and later. Results: Studies using DL for image segmentation were identified in brain (n = 2), head and neck (n = 3), lung (n = 6), abdominal (n = 2), and pelvic (n = 6) cancers. Use of Deep Learning has also been reported for outcome prediction, such as toxicity modeling (n = 3), treatment response and survival (n = 2), or treatment planning (n = 5). Conclusion: Over the past few years, there has been a significant number of studies assessing the performance of DL techniques in radiation oncology. They demonstrate how DL-based systems can aid clinicians in their daily work, be it by reducing the time required for or the variability in segmentation, or by helping to predict treatment outcomes and toxicities. It still remains to be seen when these techniques will be employed in routine clinical practice.
Collapse
Affiliation(s)
- Luca Boldrini
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Università Cattolica del Sacro Cuore, Rome, Italy
| | - Jean-Emmanuel Bibault
- Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique—Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France
| | - Carlotta Masciocchi
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Università Cattolica del Sacro Cuore, Rome, Italy
| | - Yanting Shen
- Department of Engineering Science, University of Oxford, Oxford, United Kingdom
| | - Martin-Immanuel Bittner
- CRUK/MRC Oxford Institute for Radiation Oncology, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
81
|
Elguindi S, Zelefsky MJ, Jiang J, Veeraraghavan H, Deasy JO, Hunt MA, Tyagi N. Deep learning-based auto-segmentation of targets and organs-at-risk for magnetic resonance imaging only planning of prostate radiotherapy. Phys Imaging Radiat Oncol 2019; 12:80-86. [PMID: 32355894 PMCID: PMC7192345 DOI: 10.1016/j.phro.2019.11.006] [Citation(s) in RCA: 66] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2019] [Revised: 11/20/2019] [Accepted: 11/22/2019] [Indexed: 01/06/2023] Open
Abstract
BACKGROUND AND PURPOSE Magnetic resonance (MR) only radiation therapy for prostate treatment provides superior contrast for defining targets and organs-at-risk (OARs). This study aims to develop a deep learning model that leverages this advantage to automate the contouring process. MATERIALS AND METHODS Six structures (bladder, rectum, urethra, penile bulb, rectal spacer, prostate and seminal vesicles) were contoured and reviewed by a radiation oncologist on axial T2-weighted MR image sets from 50 patients, which constituted the expert delineations. The data were split 40/10 into training and validation sets to train a two-dimensional fully convolutional neural network, DeepLabV3+, using transfer learning. The T2-weighted image sets were pre-processed into 2D false-color images to leverage the weights of convolutional layers pre-trained on natural images. Independent testing was performed on an additional 50 patients' MR scans. Performance was compared against a U-Net deep learning method. Algorithms were evaluated using the volumetric Dice similarity coefficient (VDSC) and surface Dice similarity coefficient (SDSC). RESULTS When comparing VDSC, DeepLabV3+ significantly outperformed U-Net for all structures except the urethra (P < 0.001). Average VDSC was 0.93 ± 0.04 (bladder), 0.83 ± 0.06 (prostate and seminal vesicles [CTV]), 0.74 ± 0.13 (penile bulb), 0.82 ± 0.05 (rectum), 0.69 ± 0.10 (urethra), and 0.81 ± 0.1 (rectal spacer). Average SDSC was 0.92 ± 0.1 (bladder), 0.85 ± 0.11 (prostate and seminal vesicles [CTV]), 0.80 ± 0.22 (penile bulb), 0.87 ± 0.07 (rectum), 0.85 ± 0.25 (urethra), and 0.83 ± 0.26 (rectal spacer). CONCLUSION A deep learning-based model produced contours that show promise for streamlining an MR-only planning workflow in treating prostate cancer.
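The false-color pre-processing can be illustrated as follows: a grayscale slice is normalized and expanded to three channels so that backbones pre-trained on natural RGB images apply directly. The specific window choices here are assumptions for the sketch, not the paper's exact recipe.

```python
import numpy as np

def to_false_color(t2_slice):
    """Convert one grayscale T2W slice into a 3-channel uint8 image so that
    convolutional weights pre-trained on natural images can be reused.
    The three intensity windows below are illustrative assumptions."""
    lo, hi = np.percentile(t2_slice, [1, 99])
    norm = np.clip((t2_slice - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    # three progressively tighter intensity windows as pseudo-RGB channels
    channels = [np.clip(norm * g, 0.0, 1.0) for g in (1.0, 1.5, 2.0)]
    return (np.stack(channels, axis=-1) * 255).astype(np.uint8)
```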
Collapse
Affiliation(s)
- Sharif Elguindi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
| | - Michael J. Zelefsky
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
| | - Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
| | - Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
| | - Joseph O. Deasy
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
| | - Margie A. Hunt
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
| | - Neelam Tyagi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
| |
Collapse
|
82
|
Munir K, Elahi H, Ayub A, Frezza F, Rizzi A. Cancer Diagnosis Using Deep Learning: A Bibliographic Review. Cancers (Basel) 2019; 11:E1235. [PMID: 31450799 PMCID: PMC6770116 DOI: 10.3390/cancers11091235] [Citation(s) in RCA: 137] [Impact Index Per Article: 22.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2019] [Revised: 06/30/2019] [Accepted: 08/14/2019] [Indexed: 01/06/2023] Open
Abstract
In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis followed by the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient for obtaining better performance. Moreover, considering all types of audience, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, motivating better and smarter methods for cancer diagnosis. Artificial intelligence for cancer diagnosis is therefore gaining attention as a way to define better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), the multi-scale convolutional neural network (M-CNN), and the multi-instance learning convolutional neural network (MIL-CNN). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch view of the state-of-the-art achievements.
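As a quick reference for the overlap-based evaluation criteria this review lists, the sketch below computes them from binary masks using the standard textbook definitions (this is not code from the paper).

```python
import numpy as np

def binary_metrics(pred, truth):
    """Standard evaluation metrics for binary masks/labels. Assumes both
    inputs are same-shape numpy arrays with non-degenerate denominators."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = float(np.logical_and(pred, truth).sum())
    tn = float(np.logical_and(~pred, ~truth).sum())
    fp = float(np.logical_and(pred, ~truth).sum())
    fn = float(np.logical_and(~pred, truth).sum())
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```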
Collapse
Affiliation(s)
- Khushboo Munir
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy.
| | - Hassan Elahi
- Department of Mechanical and Aerospace Engineering (DIMA), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
| | - Afsheen Ayub
- Department of Basic and Applied Science for Engineering (SBAI), Sapienza University of Rome, Via Antonio Scarpa 14/16, 00161 Rome, Italy
| | - Fabrizio Frezza
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
| | - Antonello Rizzi
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
| |
Collapse
|
83
|
Qadri SF, Zhao Z, Ai D, Ahmad M, Wang Y. Vertebrae segmentation via stacked sparse autoencoder from computed tomography images. ELEVENTH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2019) 2019:160. [DOI: 10.1117/12.2540176] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/29/2023]
|
84
|
Rubinstein E, Salhov M, Nidam-Leshem M, White V, Golan S, Baniel J, Bernstine H, Groshar D, Averbuch A. Unsupervised tumor detection in Dynamic PET/CT imaging of the prostate. Med Image Anal 2019; 55:27-40. [DOI: 10.1016/j.media.2019.04.001] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2018] [Revised: 03/30/2019] [Accepted: 04/05/2019] [Indexed: 02/07/2023]
|
85
|
Jarrett D, Stride E, Vallis K, Gooding MJ. Applications and limitations of machine learning in radiation oncology. Br J Radiol 2019; 92:20190001. [PMID: 31112393 PMCID: PMC6724618 DOI: 10.1259/bjr.20190001] [Citation(s) in RCA: 86] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
Abstract
Machine learning approaches to problem-solving are growing rapidly within healthcare, and radiation oncology is no exception. With the burgeoning interest in machine learning comes the significant risk of misaligned expectations as to what it can and cannot accomplish. This paper evaluates the role of machine learning and the problems it solves within the context of current clinical challenges in radiation oncology. The role of learning algorithms within the workflow for external beam radiation therapy is surveyed, considering simulation imaging, multimodal fusion, image segmentation, treatment planning, quality assurance, and treatment delivery and adaptation. For each aspect, the clinical challenges faced, the learning algorithms proposed, and the successes and limitations of various approaches are analyzed. It is observed that machine learning has largely thrived on reproducibly mimicking conventional human-driven solutions with greater efficiency and consistency. On the other hand, since algorithms are generally trained using expert opinion as ground truth, machine learning is of limited utility where problems or ground truths are not well defined, or where suitable measures of correctness are not available. As a result, machines may excel at replicating, automating and standardizing human behaviour on manual chores, while the conceptual clinical challenges relating to definition, evaluation, and judgement remain in the realm of human intelligence and insight.
Collapse
Affiliation(s)
- Daniel Jarrett
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, UK; Mirada Medical Ltd, Oxford, UK
| | - Eleanor Stride
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, UK
| | - Katherine Vallis
- Department of Oncology, Oxford Institute for Radiation Oncology, University of Oxford, UK
| | | |
Collapse
|
86
|
Jang HJ, Cho KO. Applications of deep learning for the analysis of medical data. Arch Pharm Res 2019; 42:492-504. [PMID: 31140082 DOI: 10.1007/s12272-019-01162-9] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2018] [Accepted: 05/20/2019] [Indexed: 02/06/2023]
Abstract
Over the past decade, deep learning has demonstrated superior performance in solving many problems in various fields of medicine compared with other machine learning methods. To understand how deep learning has surpassed traditional machine learning techniques, in this review we briefly explore the basic learning algorithms underlying deep learning. In addition, the procedures for building deep learning-based classifiers for seizure electroencephalograms and gastric tissue slides are described as examples to demonstrate the simplicity and effectiveness of deep learning applications. Finally, we review the clinical applications of deep learning in radiology, pathology, and drug discovery, where deep learning has been actively adopted. Given the great advantages of deep learning techniques, deep learning will be increasingly and widely utilized across many areas of medicine in the coming decades.
Collapse
Affiliation(s)
- Hyun-Jong Jang
- Department of Physiology, Department of Biomedicine & Health Sciences, Catholic Neuroscience Institute, College of Medicine, The Catholic University of Korea, Seoul, 06591, South Korea
| | - Kyung-Ok Cho
- Department of Pharmacology, Department of Biomedicine & Health Sciences, Catholic Neuroscience Institute, Institute of Aging and Metabolic Diseases, College of Medicine, The Catholic University of Korea, 222 Banpo-Daero, Seocho-Gu, Seoul, 06591, South Korea.
| |
Collapse
|
87
|
Nie D, Wang L, Gao Y, Lian J, Shen D. STRAINet: Spatially Varying sTochastic Residual AdversarIal Networks for MRI Pelvic Organ Segmentation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2019; 30:1552-1564. [PMID: 30307879 PMCID: PMC6550324 DOI: 10.1109/tnnls.2018.2870182] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Accurate segmentation of pelvic organs is important for prostate radiation therapy. Modern radiation therapy is starting to use magnetic resonance imaging (MRI) as an alternative to computed tomography because of its superior soft-tissue contrast and freedom from radiation exposure. However, segmentation of pelvic organs from MRI is a challenging problem due to inconsistent organ appearance across patients and also large intrapatient anatomical variations across treatment days. To address such challenges, we propose a novel deep network architecture, called "Spatially varying sTochastic Residual AdversarIal Network" (STRAINet), to delineate pelvic organs from MRI in an end-to-end fashion. Compared to traditional fully convolutional networks (FCN), the proposed architecture has two main contributions: 1) inspired by the recent success of residual learning, we propose an evolutionary version of the residual unit, i.e., the stochastic residual unit, and use it in place of the plain convolutional layers in the FCN. We further propose long-range stochastic residual connections to pass features from shallow layers to deep layers; and 2) we integrate three previously proposed network strategies to form a new network for better medical image segmentation: a) we apply dilated convolution in the smallest-resolution feature maps, so that we can gain a larger receptive field without overly losing spatial information; b) we propose a spatially varying convolutional layer that adapts convolutional filters to different regions of interest; and c) an adversarial network is used to further correct the segmented organ structures. Finally, STRAINet is used to iteratively refine the segmentation probability maps in an autocontext manner. Experimental results show that STRAINet achieved state-of-the-art segmentation accuracy. Further analysis also indicates that the proposed network components contribute substantially to the performance.
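One simplified reading of the "stochastic residual unit" is a residual block whose convolutional branch is randomly skipped during training, as in stochastic depth; the sketch below shows that interpretation, with a dilation argument mirroring the dilated-convolution strategy. It is an illustrative assumption, not the authors' exact layer.

```python
import torch
import torch.nn as nn

class StochasticResidualUnit(nn.Module):
    """Residual unit whose branch is dropped at random during training
    (stochastic depth); survival_prob and dilation are assumed settings."""
    def __init__(self, channels, survival_prob=0.8, dilation=1):
        super().__init__()
        self.survival_prob = survival_prob
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # during training, skip the residual branch with prob. 1 - survival_prob
        if self.training and torch.rand(1).item() > self.survival_prob:
            return x
        return torch.relu(x + self.branch(x))
```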
Collapse
Affiliation(s)
- Dong Nie
- Department of Computer Science, Department of Radiology and BRIC, UNC-Chapel Hill
| | - Li Wang
- Department of Radiology and BRIC, UNC-Chapel Hill
| | - Yaozong Gao
- Shanghai United Imaging Intelligence Co., Ltd
| | - Jun Lian
- Department of Radiation Oncology, UNC-Chapel Hill
| | - Dinggang Shen
- Department of Radiology and BRIC, UNC-Chapel Hill, and also with the Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
| |
Collapse
|
88
|
Wang S, He K, Nie D, Zhou S, Gao Y, Shen D. CT male pelvic organ segmentation using fully convolutional networks with boundary sensitive representation. Med Image Anal 2019; 54:168-178. [PMID: 30928830 PMCID: PMC6506162 DOI: 10.1016/j.media.2019.03.003] [Citation(s) in RCA: 63] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2018] [Revised: 03/17/2019] [Accepted: 03/20/2019] [Indexed: 12/27/2022]
Abstract
Accurate segmentation of the prostate and organs at risk (e.g., bladder and rectum) in CT images is a crucial step for radiation therapy in the treatment of prostate cancer. However, it is a very challenging task due to unclear boundaries, large intra- and inter-patient shape variability, and the uncertain presence of bowel gas and fiducial markers. In this paper, we propose a novel automatic segmentation framework using fully convolutional networks with boundary-sensitive representation to address this challenging problem. Our segmentation framework contains three modules. First, an organ localization model is designed to focus on the candidate segmentation region of each organ for better performance. Then, a boundary-sensitive representation model based on multi-task learning is proposed to represent the semantic boundary information in a more robust and accurate manner. Finally, a multi-label cross-entropy loss function incorporating the boundary-sensitive representation is introduced to train a fully convolutional network for the organ segmentation. The proposed method is evaluated on a large and diverse planning CT dataset with 313 images from 313 prostate cancer patients. Experimental results show that our proposed method outperforms the baseline fully convolutional networks, as well as other state-of-the-art methods in CT male pelvic organ segmentation.
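A common way to make a cross-entropy loss "boundary sensitive" is to up-weight pixels near organ boundaries; the sketch below shows that idea as a simplified stand-in for the paper's loss, with an assumed weight value.

```python
import torch
import torch.nn.functional as F

def boundary_weighted_ce(logits, target, boundary_mask, w_boundary=4.0):
    """Multi-class cross-entropy up-weighted near organ boundaries.
    logits: (N, C, H, W); target: (N, H, W) int64 labels;
    boundary_mask: (N, H, W) with 1 on/near boundaries, 0 elsewhere."""
    per_pixel = F.cross_entropy(logits, target, reduction="none")
    weights = 1.0 + (w_boundary - 1.0) * boundary_mask.float()
    return (weights * per_pixel).mean()
```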
Collapse
Affiliation(s)
- Shuai Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
| | - Kelei He
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
| | - Dong Nie
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
| | - Sihang Zhou
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA; School of Computer, National University of Defense Technology, Changsha, China
| | - Yaozong Gao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea.
| |
Collapse
|
89
|
Zhang Z, Sejdić E. Radiological images and machine learning: Trends, perspectives, and prospects. Comput Biol Med 2019; 108:354-370. [PMID: 31054502 PMCID: PMC6531364 DOI: 10.1016/j.compbiomed.2019.02.017] [Citation(s) in RCA: 77] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2018] [Revised: 02/19/2019] [Accepted: 02/19/2019] [Indexed: 01/18/2023]
Abstract
The application of machine learning to radiological images is an increasingly active research area that is expected to grow over the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as X-ray, computed tomography, magnetic resonance imaging and positron emission tomography imaging. In many applications, machine learning-based systems have shown performance comparable to human decision-making. The applications of machine learning are the key ingredients of future clinical decision-making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. We also briefly discuss current challenges and future directions regarding the application of machine learning in radiological imaging. By giving insight into how to take advantage of machine learning-powered applications, we expect that clinicians can prevent and diagnose diseases more accurately and efficiently.
Collapse
Affiliation(s)
- Zhenwei Zhang
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
| | - Ervin Sejdić
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA.
| |
Collapse
|
90
|
Finger-Vein Verification Based on LSTM Recurrent Neural Networks. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9081687] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Finger-vein biometrics has been extensively investigated for personal verification. A challenge is that finger-vein acquisition is affected by many factors, which results in many ambiguous regions in the finger-vein image. Generally, the separability between vein and background is poor in such regions. Despite recent advances in finger-vein pattern segmentation, current solutions still lack the robustness to extract finger-vein features from raw images because they do not take into account the complex spatial dependencies of the vein pattern. This paper proposes a deep learning model to extract vein features by combining the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models. Firstly, we automatically assign labels based on a combination of known state-of-the-art handcrafted finger-vein image segmentation techniques, and generate various sequences for each labeled pixel along different directions. Secondly, several Stacked Convolutional Neural Network and Long Short-Term Memory (SCNN-LSTM) models are independently trained on the resulting sequences. The outputs of the various SCNN-LSTMs form a complementary and over-complete representation and are jointly fed into a Probabilistic Support Vector Machine (P-SVM) to predict the probability of each pixel being foreground (i.e., a vein pixel) given several sequences centered on it. Thirdly, we propose a supervised encoding scheme to extract the binary vein texture. A threshold is automatically computed by taking into account the maximal separation between the inter-class distance and the intra-class distance. In our approach, the CNN learns robust features for vein texture pattern representation and the LSTM stores the complex spatial dependencies of vein patterns, so the pixels in any region of a test image can be classified effectively. In addition, supervised information is employed to encode the vein patterns, so the resulting encoded images contain more discriminating features. Experimental results on one public finger-vein database show that the proposed approach significantly improves finger-vein verification accuracy.
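The per-pixel directional sequences that feed the SCNN-LSTM models can be built as in the toy sketch below; the four directions and the sequence length are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def pixel_sequences(image, r=8):
    """Build, for every pixel, intensity sequences of length 2*r+1 along
    four directions (0, 45, 90, 135 degrees) centered on that pixel.
    `image` is a 2D array; borders are handled by reflection padding."""
    pad = np.pad(image, r, mode="reflect")
    dirs = [(0, 1), (1, 1), (1, 0), (1, -1)]
    h, w = image.shape
    seqs = np.empty((h, w, len(dirs), 2 * r + 1), dtype=image.dtype)
    for d, (dy, dx) in enumerate(dirs):
        for k, t in enumerate(range(-r, r + 1)):
            seqs[:, :, d, k] = pad[r + t * dy: r + t * dy + h,
                                   r + t * dx: r + t * dx + w]
    return seqs
```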
Collapse
|
91
|
Yan K, Wang X, Kim J, Khadra M, Fulham M, Feng D. A propagation-DNN: Deep combination learning of multi-level features for MR prostate segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 170:11-21. [PMID: 30712600 DOI: 10.1016/j.cmpb.2018.12.031] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Revised: 12/13/2018] [Accepted: 12/28/2018] [Indexed: 06/09/2023]
Abstract
BACKGROUND AND OBJECTIVE Prostate segmentation on Magnetic Resonance (MR) imaging is problematic because disease changes the shape and boundaries of the gland, and it can be difficult to separate the prostate from surrounding tissues. We propose an automated model that extracts and combines multi-level features in a deep neural network to segment the prostate on MR images. METHODS Our proposed model, the Propagation Deep Neural Network (P-DNN), incorporates the optimal combination of multi-level feature extraction as a single model. High-level features from the convolved data are extracted by the DNN for prostate localization and shape recognition, while label propagation based on low-level cues is embedded into a deep layer to delineate the prostate boundary. RESULTS A well-recognized benchmarking dataset (50 training data and 30 testing data from patients) was used to evaluate the P-DNN. When compared to existing DNN methods, the P-DNN statistically outperformed the baseline DNN models, with an average improvement in DSC of 3.19%. When compared to state-of-the-art non-DNN prostate segmentation methods, the P-DNN was competitive, achieving 89.9 ± 2.8% DSC and 6.84 ± 2.5 mm HD on the training sets and 84.13 ± 5.18% DSC and 9.74 ± 4.21 mm HD on the testing sets. CONCLUSION Our results show that P-DNN maximizes multi-level feature extraction for prostate segmentation of MR images.
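For readers unfamiliar with the HD figures reported here, the sketch below computes a symmetric Hausdorff distance between two binary masks. It uses all foreground pixels as a surface proxy, a common simplification; the paper's exact surface extraction is not reproduced.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(mask_a, mask_b, spacing=(1.0, 1.0)):
    """Symmetric Hausdorff distance (in mm) between two non-empty 2D binary
    masks, scaling pixel coordinates by the in-plane voxel spacing."""
    pts_a = np.argwhere(mask_a) * np.asarray(spacing)
    pts_b = np.argwhere(mask_b) * np.asarray(spacing)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```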
Collapse
Affiliation(s)
- Ke Yan
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, University of Sydney, Sydney, Australia
| | - Xiuying Wang
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, University of Sydney, Sydney, Australia.
| | - Jinman Kim
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, University of Sydney, Sydney, Australia
| | - Mohamed Khadra
- Department of Urology, Nepean Hospital, Kingswood, Australia
| | - Michael Fulham
- Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, Australia
| | - Dagan Feng
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, University of Sydney, Sydney, Australia
| |
Collapse
|
92
|
Zhu X, Suk HI, Shen D. Group sparse reduced rank regression for neuroimaging genetic study. WORLD WIDE WEB 2019; 22:673-688. [PMID: 31607788 PMCID: PMC6788769 DOI: 10.1007/s11280-018-0637-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/09/2018] [Revised: 07/19/2018] [Accepted: 09/07/2018] [Indexed: 06/10/2023]
Abstract
Neuroimaging genetic studies usually need to deal with the high dimensionality of both brain imaging data and genetic data, often resulting in the curse of dimensionality. In this paper, we propose a group sparse reduced rank regression model that exploits the relations of both the phenotypes and the genotypes for neuroimaging genetic study. Specifically, we design a group sparsity constraint as well as a reduced rank constraint to simultaneously conduct subspace learning and feature selection. The group sparsity constraint conducts feature selection to identify genotypes highly related to neuroimaging data, while the reduced rank constraint considers the relations among neuroimaging data to conduct subspace learning in the feature selection model. Furthermore, an alternating optimization algorithm is proposed to solve the resulting objective function and is proved to achieve fast convergence. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset showed that the proposed method is superior to the alternative methods under comparison at predicting phenotype data from genotype data.
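To make the two constraints concrete, one illustrative formulation factorizes the coefficient matrix to encode the rank constraint and adds a group norm over genotype groups; the sketch below evaluates such an objective. This is an assumed form for illustration; the paper's exact regularizer may differ.

```python
import numpy as np

def gsrrr_objective(X, Y, A, B, lam, groups):
    """Evaluate ||Y - X A B||_F^2 + lam * sum_g ||A[g, :]||_F, where
    W = A @ B (A: features x r, B: r x outputs) encodes the rank-r
    constraint and the group norm encourages selecting whole genotype
    groups. `groups` is an iterable of row-index arrays into A."""
    resid = Y - X @ A @ B
    group_penalty = sum(np.linalg.norm(A[g, :]) for g in groups)
    return float(np.sum(resid ** 2) + lam * group_penalty)
```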
Collapse
Affiliation(s)
- Xiaofeng Zhu
- Guangxi Key Lab of Multi-source Information Mining and Security, Guangxi Normal University, Guilin 541004, Guangxi, People’s Republic of China
- Institute of Natural and Mathematical Sciences, Massey University, Auckland 0745, New Zealand
- BRIC Center of the University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
| | - Dinggang Shen
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
- BRIC Center of the University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| |
Collapse
|
93
|
Wang B, Lei Y, Tian S, Wang T, Liu Y, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Deeply supervised 3D fully convolutional networks with group dilated convolution for automatic MRI prostate segmentation. Med Phys 2019; 46:1707-1718. [PMID: 30702759 DOI: 10.1002/mp.13416] [Citation(s) in RCA: 123] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2018] [Revised: 01/18/2019] [Accepted: 01/24/2019] [Indexed: 12/15/2022] Open
Abstract
PURPOSE Reliable automated segmentation of the prostate is indispensable for image-guided prostate interventions. However, the segmentation task is challenging due to inhomogeneous intensity distributions and variation in prostate anatomy, among other problems. Manual segmentation can be time-consuming and is subject to inter- and intraobserver variation. We developed an automated deep learning-based method to address this technical challenge. METHODS We propose a three-dimensional (3D) fully convolutional network (FCN) with deep supervision and group dilated convolution to segment the prostate on magnetic resonance imaging (MRI). In this method, a deeply supervised mechanism was introduced into the 3D FCN to effectively alleviate the common exploding- or vanishing-gradient problems in training deep models, which forces the update process of the hidden-layer filters to favor highly discriminative features. A group dilated convolution, which aggregates multiscale contextual information for dense prediction, was proposed to enlarge the effective receptive field of the network and thereby improve prediction accuracy at the prostate boundary. In addition, we introduced a combined loss function including cosine and cross-entropy terms, which measures similarity and dissimilarity between segmented and manual contours, to further improve segmentation accuracy. Prostate volumes manually segmented by experienced physicians were used as a gold standard against which our segmentation accuracy was measured. RESULTS The proposed method was evaluated on an internal dataset comprising 40 T2-weighted prostate MR volumes. Our method achieved a Dice similarity coefficient (DSC) of 0.86 ± 0.04, a mean surface distance (MSD) of 1.79 ± 0.46 mm, a 95% Hausdorff distance (95%HD) of 7.98 ± 2.91 mm, and an absolute relative volume difference (aRVD) of 15.65 ± 10.82. A public dataset (PROMISE12) including 50 T2-weighted prostate MR volumes was also employed to evaluate our approach. Our method yielded a DSC of 0.88 ± 0.05, MSD of 1.02 ± 0.35 mm, 95%HD of 9.50 ± 5.11 mm, and aRVD of 8.93 ± 7.56. CONCLUSION We developed a novel deeply supervised deep learning-based approach with group dilated convolution to automatically segment the prostate on MRI, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for image-guided interventions in prostate cancer.
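The combined similarity/dissimilarity loss can be sketched as a weighted sum of a cosine-similarity term and cross-entropy, as below; the equal weighting via alpha is an assumption, not the paper's stated value.

```python
import torch
import torch.nn.functional as F

def combined_cosine_ce_loss(logits, target, alpha=0.5):
    """Cross-entropy (dissimilarity) plus one-minus-cosine (similarity) loss.
    logits: (N, C, H, W) raw scores; target: (N, H, W) int64 labels."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1])
    one_hot = one_hot.movedim(-1, 1).float()  # -> (N, C, H, W)
    cos = F.cosine_similarity(probs.flatten(1), one_hot.flatten(1), dim=1).mean()
    return alpha * ce + (1 - alpha) * (1 - cos)
```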
Collapse
Affiliation(s)
- Bo Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA; School of Physics and Electronic-Electrical Engineering, Ningxia University, Yinchuan, Ningxia, 750021, P.R. China
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| |
Collapse
|
94
|
He K, Cao X, Shi Y, Nie D, Gao Y, Shen D. Pelvic Organ Segmentation Using Distinctive Curve Guided Fully Convolutional Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:585-595. [PMID: 30176583 PMCID: PMC6392049 DOI: 10.1109/tmi.2018.2867837] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Accurate segmentation of pelvic organs (i.e., prostate, bladder, and rectum) from CT images is crucial for effective prostate cancer radiotherapy. However, it is a challenging task due to: 1) low soft-tissue contrast in CT images and 2) large shape and appearance variations of pelvic organs. In this paper, we employ a two-stage deep learning-based method, with a novel distinctive-curve-guided fully convolutional network (FCN), to solve the aforementioned challenges. Specifically, the first stage performs fast and robust organ detection in the raw CT images. It is designed as a coarse segmentation network that provides region proposals for the three pelvic organs. The second stage performs fine segmentation of each organ, based on the region proposal results. To better identify indistinguishable pelvic organ boundaries, a novel morphological representation, namely the distinctive curve, is also introduced to guide precise segmentation. To implement this, in the second stage a multi-task FCN first learns the distinctive curve and the segmentation map separately and then combines the two tasks to produce the final accurate segmentation map. The final segmentation results of all three pelvic organs are generated by a weighted max-voting strategy. We have conducted exhaustive experiments on a large and diverse pelvic CT dataset to evaluate our proposed method. The experimental results demonstrate that our proposed method is accurate and robust for this challenging segmentation task, and it also outperforms the state-of-the-art segmentation methods.
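At a high level, the final weighted max-voting fusion can be read as a weighted sum of candidate probability maps followed by a per-pixel argmax, as in this sketch (the weights are assumed given; this is an interpretation, not the authors' code).

```python
import numpy as np

def weighted_max_voting(prob_maps, weights):
    """Fuse candidate probability maps into one label map by weighted voting.
    prob_maps: (num_candidates, C, H, W); weights: (num_candidates,).
    Returns an (H, W) integer label map."""
    w = np.asarray(weights, dtype=float)[:, None, None, None]
    fused = (prob_maps * w).sum(axis=0)  # weighted vote per class
    return fused.argmax(axis=0)          # winning class per pixel
```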
Collapse
|
95
|
Antonelli M, Cardoso MJ, Johnston EW, Appayya MB, Presles B, Modat M, Punwani S, Ourselin S. GAS: A genetic atlas selection strategy in multi-atlas segmentation framework. Med Image Anal 2019; 52:97-108. [PMID: 30476698 DOI: 10.1016/j.media.2018.11.007] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2017] [Revised: 11/08/2018] [Accepted: 11/15/2018] [Indexed: 11/15/2022]
Abstract
Multi-Atlas based Segmentation (MAS) algorithms have been successfully applied to many medical image segmentation tasks, but their success relies on a large number of atlases and good image registration performance. Choosing well-registered atlases for label fusion is vital for an accurate segmentation. This choice becomes even more crucial when the segmentation involves organs characterized by high anatomical and pathological variability. In this paper, we propose a new genetic atlas selection strategy (GAS) that automatically chooses the best subset of atlases to be used for segmenting the target image, on the basis of both image similarity and segmentation overlap. More precisely, the key idea of GAS is that if two images are similar, the performances of an atlas for segmenting each image are similar. Since the ground truth of each atlas is known, GAS first selects a predefined number of images similar to the target and then, for each one of them, finds a near-optimal subset of atlases by means of a genetic algorithm. All these near-optimal subsets are then combined and used to segment the target image. GAS was tested on single-label and multi-label segmentation problems. In the first case, we considered the segmentation of both the whole prostate and the left ventricle of the heart from magnetic resonance images. Regarding multi-label problems, the zonal segmentation of the prostate into peripheral and transition zones was considered. The results showed that the performance of MAS algorithms statistically improved when GAS was used.
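A genetic search over atlas subsets can be sketched as follows: individuals are binary masks over the atlas pool, scored by a user-supplied fitness (e.g., mean Dice of the fused label on an image similar to the target). This is a toy sketch of the idea only, not the paper's full GAS pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def gas_select(fitness, n_atlases, pop=30, gens=50, p_mut=0.05):
    """Tiny genetic algorithm for atlas-subset selection; hyperparameters
    (population, generations, mutation rate) are illustrative assumptions."""
    population = rng.random((pop, n_atlases)) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[::-1][:pop // 2]]  # elitism
        cuts = rng.integers(1, n_atlases, size=pop // 2)
        children = np.array([
            np.concatenate([parents[i % len(parents)][:c],
                            parents[(i + 1) % len(parents)][c:]])
            for i, c in enumerate(cuts)])                          # crossover
        children ^= rng.random(children.shape) < p_mut            # mutation
        population = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in population])
    return population[scores.argmax()]
```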
Collapse
Affiliation(s)
- Michela Antonelli
- Centre for Medical Image Computing, University College London, U.K.
| | - M Jorge Cardoso
- Dep. of Medical Physics and Biomedical Engineering, University College London, U.K.; School of Biomedical Engineering and Imaging Science, King's College London, U.K.
| | | | | | - Benoit Presles
- Centre for Medical Image Computing, University College London, U.K
| | - Marc Modat
- Dep. of Medical Physics and Biomedical Engineering, University College London, U.K.; School of Biomedical Engineering and Imaging Science, King's College London, U.K.
| | - Shonit Punwani
- Centre for Medical Imaging, University College London, U.K
| | - Sebastien Ourselin
- Dep. of Medical Physics and Biomedical Engineering, University College London, U.K.; School of Biomedical Engineering and Imaging Science, King's College London, U.K.
| |
Collapse
|
96
|
Tan L, Liang A, Li L, Liu W, Kang H, Chen C. Automatic prostate segmentation based on fusion between deep network and variational methods. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2019; 27:821-837. [PMID: 31403960 DOI: 10.3233/xst-190524] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
BACKGROUND Segmentation of the prostate from magnetic resonance images (MRI) is a critical process for guiding prostate puncture and biopsy. Currently, the best results are obtained by Convolutional Neural Networks (CNNs). However, challenges still exist when applying CNNs to segment the prostate, such as data distribution issues caused by inconsistent intensity levels and vague boundaries in MRI. OBJECTIVE To segment the prostate gland from an MRI dataset including different prostate images with limited resolution and quality. METHODS We propose and apply a global histogram matching approach to bring the intensity distribution of the MRI dataset closer to uniformity. To capture the real boundaries and improve segmentation accuracy, we employ a module of variational models to help improve performance. RESULTS Using seven evaluation metrics to quantify the improvements of our proposed fusion approach over the state-of-the-art V-Net model, we observed increases in the Dice Coefficient (11.2%), Jaccard Coefficient (13.7%), Volumetric Similarity (12.3%), Adjusted Rand Index (11.1%), and Area under the ROC Curve (11.6%), and reductions in the Mean Hausdorff Distance (16.1%) and Mahalanobis Distance (2.8%). The 3D reconstruction also validates the advantages of our proposed framework, especially in terms of smoothness, uniformity, and accuracy. In addition, observations from selected examples of 2D visualization show that our segmentation results are closer to the real boundaries of the prostate and better represent the prostate shapes. CONCLUSIONS Our proposed approach achieves significant performance improvements compared with the existing methods based on the original CNN or pure variational models.
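Histogram matching of the kind used for the global pre-processing step maps one image's intensities onto a reference distribution by quantile matching; the sketch below implements the standard routine (in the spirit of the paper's step, not the authors' code).

```python
import numpy as np

def match_histogram(source, reference):
    """Map `source` intensities onto the distribution of `reference` by
    quantile matching. Both inputs are numpy arrays of any (equal-dim) shape."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size      # empirical CDF of source
    r_cdf = np.cumsum(r_counts) / reference.size   # empirical CDF of reference
    matched = np.interp(s_cdf, r_cdf, r_vals)      # quantile mapping
    return matched[s_idx].reshape(source.shape)
```

Matching every scan to a single reference pulls the dataset toward one intensity distribution, which is the "global" aspect the abstract describes.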
Collapse
Affiliation(s)
- Lu Tan
- School of Electrical Engineering, Computing and Mathematical Sciences (Computing Discipline), Curtin University, Bentley, Western Australia, Australia
| | - Antoni Liang
- School of Electrical Engineering, Computing and Mathematical Sciences (Computing Discipline), Curtin University, Bentley, Western Australia, Australia
| | - Ling Li
- School of Electrical Engineering, Computing and Mathematical Sciences (Computing Discipline), Curtin University, Bentley, Western Australia, Australia
| | - Wanquan Liu
- School of Electrical Engineering, Computing and Mathematical Sciences (Computing Discipline), Curtin University, Bentley, Western Australia, Australia
| | - Hanwen Kang
- Department of Mechanical and Aerospace Engineering, Monash University, Clayton, VIC, Australia
| | - Chao Chen
- Department of Mechanical and Aerospace Engineering, Monash University, Clayton, VIC, Australia
| |
Collapse
|
97
|
Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2019; 46:e1-e36. [PMID: 30367497 PMCID: PMC9560030 DOI: 10.1002/mp.13264] [Citation(s) in RCA: 389] [Impact Index Per Article: 64.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2018] [Revised: 09/18/2018] [Accepted: 10/09/2018] [Indexed: 12/15/2022] Open
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and strategies that researchers have taken to address these challenges; and (c) identify some of the promising avenues for the future both in terms of applications as well as technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Collapse
Affiliation(s)
- Berkman Sahiner
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
| | - Aria Pezeshk
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
| | | | - Xiaosong Wang
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
| | - Karen Drukker
- Department of Radiology, University of Chicago, Chicago, IL 60637, USA
| | - Kenny H. Cha
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
| | - Ronald M. Summers
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
| | | |
Collapse
|
98
|
Girum KB, Créhange G, Hussain R, Walker PM, Lalande A. Deep Generative Model-Driven Multimodal Prostate Segmentation in Radiotherapy. ARTIFICIAL INTELLIGENCE IN RADIATION THERAPY 2019. [DOI: 10.1007/978-3-030-32486-5_15] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
99
|
Khosravan N, Celik H, Turkbey B, Jones EC, Wood B, Bagci U. A collaborative computer aided diagnosis (C-CAD) system with eye-tracking, sparse attentional model, and deep learning. Med Image Anal 2019; 51:101-115. [PMID: 30399507 PMCID: PMC6407631 DOI: 10.1016/j.media.2018.10.010] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2017] [Revised: 07/27/2018] [Accepted: 10/26/2018] [Indexed: 12/19/2022]
Abstract
Computer-aided diagnosis (CAD) tools help radiologists to reduce diagnostic errors such as missed tumors and misdiagnoses. Vision researchers have been analyzing the behavior of radiologists during screening to understand how and why they miss tumors or misdiagnose. In this regard, eye-trackers have been instrumental in understanding the visual search processes of radiologists. However, most relevant studies in this area are not compatible with realistic radiology reading rooms. In this study, we aim to develop a paradigm-shifting CAD system, called collaborative CAD (C-CAD), that unifies CAD and eye-tracking systems in realistic radiology room settings. We first developed an eye-tracking interface providing radiologists with a real radiology reading-room experience. Second, we propose a novel algorithm that unifies eye-tracking data and a CAD system. Specifically, we present a new graph-based clustering and sparsification algorithm to transform eye-tracking (gaze) data into a graph model and interpret gaze patterns quantitatively and qualitatively. The proposed C-CAD collaborates with radiologists via eye-tracking technology and helps them to improve their diagnostic decisions. The C-CAD leverages radiologists' search efficiency by processing their gaze patterns. Furthermore, the C-CAD incorporates a deep learning algorithm in a newly designed multi-task learning platform to segment and diagnose suspicious areas simultaneously. The proposed C-CAD system has been tested in a lung cancer screening experiment with multiple radiologists reading low-dose chest CTs. Promising results support the efficiency, accuracy and applicability of the proposed C-CAD system in a real radiology room setting. We have also shown that our framework generalizes to more complex applications such as prostate cancer screening with multi-parametric magnetic resonance imaging (mp-MRI).
Collapse
Affiliation(s)
- Naji Khosravan
- Center for Research in Computer Vision, University of Central Florida, FL, United States
| | - Haydar Celik
- Clinical Center, National Institutes of Health, Bethesda, MD, United States
| | - Baris Turkbey
- Clinical Center, National Institutes of Health, Bethesda, MD, United States
| | - Elizabeth C Jones
- Clinical Center, National Institutes of Health, Bethesda, MD, United States
| | - Bradford Wood
- Clinical Center, National Institutes of Health, Bethesda, MD, United States
| | - Ulas Bagci
- Center for Research in Computer Vision, University of Central Florida, FL, United States.
| |
Collapse
|
100
|
Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2018; 29:102-127. [PMID: 30553609 DOI: 10.1016/j.zemedi.2018.11.002] [Citation(s) in RCA: 771] [Impact Index Per Article: 110.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2018] [Revised: 11/19/2018] [Accepted: 11/21/2018] [Indexed: 02/06/2023]
Abstract
What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has attracted a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we do not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; and (iii) provide a starting point for people interested in experimenting with, and perhaps contributing to, the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.
Collapse
Affiliation(s)
- Alexander Selvikvåg Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Department of Computing, Mathematics and Physics, Western Norway University of Applied Sciences, Norway.
| | - Arvid Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Neuroinformatics and Image Analysis Laboratory, Department of Biomedicine, University of Bergen, Norway; Department of Health and Functioning, Western Norway University of Applied Sciences, Norway.
| |
Collapse
|