1. Küstner T, Vogel J, Hepp T, Forschner A, Pfannenberg C, Schmidt H, Schwenzer NF, Nikolaou K, la Fougère C, Seith F. Development of a Hybrid-Imaging-Based Prognostic Index for Metastasized-Melanoma Patients in Whole-Body 18F-FDG PET/CT and PET/MRI Data. Diagnostics (Basel) 2022; 12:2102. [PMID: 36140504] [PMCID: PMC9498091] [DOI: 10.3390/diagnostics12092102]
Abstract
Despite the tremendous treatment successes in advanced melanoma patients, the rapid development of oncologic treatment options comes with increasingly high costs and can cause severe, life-threatening side effects. Predictive baseline biomarkers are therefore becoming increasingly important for risk stratification and personalized treatment planning. The aim of this pilot study was to develop a prognostic tool for risk stratification of treatment response and mortality in metastasized-melanoma patients before systemic-treatment initiation, based on PET/MRI and PET/CT and including a convolutional neural network (CNN). The evaluation was based on 37 patients (19 female, 62 ± 13 y/o) with unresectable metastasized melanomas who underwent whole-body 18F-FDG PET/MRI and PET/CT scans on the same day before the initiation of therapy with checkpoint inhibitors and/or BRAF/MEK inhibitors. The overall survival (OS), therapy response, metastatically involved organs, number of lesions, total lesion glycolysis, total metabolic tumor volume (TMTV), peak standardized uptake value (SULpeak), diameter (Dmlesion) and mean apparent diffusion coefficient (ADCmean) were assessed. For each marker, a Kaplan-Meier analysis was performed and statistical significance was assessed (Wilcoxon test, paired t-test and Bonferroni correction). Patients were divided into high- and low-risk groups depending on the OS and treatment response. The CNN segmentation and prediction utilized multimodality imaging data for a complementary in-depth risk analysis per patient. The following parameters correlated with longer OS: a TMTV < 50 mL; no metastases in the brain, bone, liver, spleen or pleura; ≤4 affected organ regions; no metastases with a Dmlesion > 37 mm or SULpeak < 1.3; a range of the ADCmean < 600 mm2/s. However, none of the parameters correlated significantly with the stratification of the patients into the high- or low-risk groups. For the CNN, the sensitivity, specificity, PPV and accuracy were 92%, 96%, 92% and 95%, respectively. Imaging biomarkers such as the metastatic involvement of specific organs, a high tumor burden, the presence of at least one large lesion or a high range of intermetastatic diffusivity were negative predictors for the OS, but the identification of high-risk patients was not feasible with the handcrafted parameters. In contrast, the proposed CNN supplied risk stratification with high specificity and sensitivity.
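The study's survival analysis dichotomizes patients on each imaging marker (for example, TMTV < 50 mL) and compares the resulting Kaplan-Meier curves. As a minimal sketch of that step, the snippet below implements the Kaplan-Meier product-limit estimator in plain NumPy; the follow-up times, event indicators and the TMTV threshold groups are invented for illustration and are not the study's data.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimator: S(t) = prod over event times t_i <= t of (1 - d_i / n_i)."""
    order = np.argsort(times)
    times = np.asarray(times, float)[order]
    events = np.asarray(events, int)[order]
    at_risk = len(times)
    surv = 1.0
    steps = []
    for t in np.unique(times):
        deaths = int(np.sum((times == t) & (events == 1)))
        if deaths:
            surv *= 1.0 - deaths / at_risk      # survival drops at each event time
            steps.append((float(t), surv))
        at_risk -= int(np.sum(times == t))      # events and censored cases leave the risk set
    return steps

# Hypothetical follow-up in months for two marker-defined groups (event = death observed)
groups = {
    "TMTV < 50 mL":  ([6, 12, 18, 24, 30, 36], [0, 1, 0, 0, 1, 0]),
    "TMTV >= 50 mL": ([3, 5, 8, 11, 14, 20],   [1, 1, 0, 1, 1, 1]),
}
for label, (t, e) in groups.items():
    print(label, kaplan_meier(t, e))
```

Plotting these step functions per group reproduces the kind of curve comparison the abstract describes; between-group significance would then be tested separately (e.g., with a log-rank test).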
Affiliation(s)
- Thomas Küstner: MIDAS.Lab, Department of Radiology, University Hospital of Tübingen, 72076 Tübingen, Germany
- Jonas Vogel: Nuclear Medicine and Clinical Molecular Imaging, Department of Radiology, University Hospital Tübingen, 72076 Tübingen, Germany
- Tobias Hepp: MIDAS.Lab, Department of Radiology, University Hospital of Tübingen, 72076 Tübingen, Germany
- Andrea Forschner: Department of Dermatology, University Hospital of Tübingen, 72070 Tübingen, Germany
- Christina Pfannenberg: Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tübingen, Germany
- Holger Schmidt: Faculty of Medicine, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany; Siemens Healthineers, 91052 Erlangen, Germany
- Nina F. Schwenzer: Faculty of Medicine, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Konstantin Nikolaou: Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tübingen, Germany; Cluster of Excellence iFIT (EXC 2180) Image-Guided and Functionally Instructed Tumor Therapies, Eberhard Karls University, 72076 Tübingen, Germany; German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Tübingen, 72076 Tübingen, Germany
- Christian la Fougère: Nuclear Medicine and Clinical Molecular Imaging, Department of Radiology, University Hospital Tübingen, 72076 Tübingen, Germany; Cluster of Excellence iFIT (EXC 2180) Image-Guided and Functionally Instructed Tumor Therapies, Eberhard Karls University, 72076 Tübingen, Germany; German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Tübingen, 72076 Tübingen, Germany
- Ferdinand Seith: Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tübingen, Germany
2. Egger J, Gsaxner C, Pepe A, Pomykala KL, Jonske F, Kurz M, Li J, Kleesiek J. Medical deep learning-A systematic meta-review. Comput Methods Programs Biomed 2022; 221:106874. [PMID: 35588660] [DOI: 10.1016/j.cmpb.2022.106874]
Abstract
Deep learning has remarkably impacted several different scientific disciplines over the last few years. For example, in image processing and analysis, deep learning algorithms were able to outperform other cutting-edge methods. Additionally, deep learning has delivered state-of-the-art results in tasks like autonomous driving, outclassing previous attempts. There are even instances where deep learning outperformed humans, for example in object recognition and gaming. Deep learning is also showing vast potential in the medical domain. With the collection of large quantities of patient records and data, and a trend towards personalized treatments, there is a great need for automated and reliable processing and analysis of health information. Patient data are not only collected in clinical centers, like hospitals and private practices, but also by mobile healthcare apps and online websites. The abundance of collected patient data and the recent growth of the deep learning field have resulted in a large increase in research efforts. In Q2/2020, the search engine PubMed already returned over 11,000 results for the search term 'deep learning', and around 90% of these publications are from the last three years. However, even though PubMed represents the largest search engine in the medical field, it does not cover all medicine-related publications. Hence, a complete overview of the field of 'medical deep learning' is almost impossible to obtain, and acquiring a full overview of its sub-fields is becoming increasingly difficult. Nevertheless, several review and survey articles about medical deep learning have been published within the last few years. They focus, in general, on specific medical scenarios, like the analysis of medical images containing specific pathologies. With these surveys as a foundation, the aim of this article is to provide the first high-level, systematic meta-review of medical deep learning surveys.
Affiliation(s)
- Jan Egger: Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany
- Christina Gsaxner: Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Antonio Pepe: Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Kelsey L Pomykala: Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Frederic Jonske: Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Manuel Kurz: Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Jianning Li: Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Jens Kleesiek: Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147 Essen, Germany
3. Nam S, Kim D, Jung W, Zhu Y. Understanding the Research Landscape of Deep Learning in Biomedical Science: Scientometric Analysis. J Med Internet Res 2022; 24:e28114. [PMID: 35451980] [PMCID: PMC9077503] [DOI: 10.2196/28114]
Abstract
BACKGROUND Advances in biomedical research using deep learning techniques have generated a large volume of related literature. However, there is a lack of scientometric studies that provide a bird's-eye view of them. This absence has led to a partial and fragmented understanding of the field and its progress. OBJECTIVE This study aimed to gain a quantitative and qualitative understanding of the scientific domain by analyzing diverse bibliographic entities that represent the research landscape from multiple perspectives and levels of granularity. METHODS We searched and retrieved 978 deep learning studies in biomedicine from the PubMed database. A scientometric analysis was performed by analyzing the metadata, content of influential works, and cited references. RESULTS In the process, we identified the current leading fields, major research topics and techniques, knowledge diffusion, and research collaboration. There was a predominant focus on applying deep learning, especially convolutional neural networks, to radiology and medical imaging, whereas a few studies focused on protein or genome analysis. Radiology and medical imaging also appeared to be the most significant knowledge sources and an important field in knowledge diffusion, followed by computer science and electrical engineering. A coauthorship analysis revealed various collaborations among engineering-oriented and biomedicine-oriented clusters of disciplines. CONCLUSIONS This study investigated the landscape of deep learning research in biomedicine and confirmed its interdisciplinary nature. Although it has been successful, we believe that there is a need for diverse applications in certain areas to further boost the contributions of deep learning in addressing biomedical research problems. We expect the results of this study to help researchers and communities better align their present and future work.
Affiliation(s)
- Seojin Nam: Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
- Donghun Kim: Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
- Woojin Jung: Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
- Yongjun Zhu: Department of Library and Information Science, Yonsei University, Seoul, Republic of Korea
4. Chen Z, Chen X, Wang R. Application of SPECT and PET/CT with computer-aided diagnosis in bone metastasis of prostate cancer: a review. Cancer Imaging 2022; 22:18. [PMID: 35428360] [PMCID: PMC9013072] [DOI: 10.1186/s40644-022-00456-4]
Abstract
Bone metastasis has a significant influence on the prognosis of prostate cancer (PCa) patients. In this review, we discuss the current application of single-photon emission computed tomography (SPECT) and positron emission tomography/computed tomography (PET/CT) computer-aided diagnosis (CAD) systems to the diagnosis of PCa bone metastasis. A literature search of the PubMed database identified articles concentrating on PCa bone metastasis and PET/CT or SPECT CAD systems. We summarize previous studies that compared CAD systems with manual quantitative-marker calculation, for which the coincidence rate was acceptable, and we analyze the quantification methods, advantages and disadvantages of CAD systems. CAD systems can detect abnormal lesions in the 99mTc-MDP SPECT, 18F-FDG PET/CT, 18F-NaF PET/CT and 68Ga-PSMA PET/CT images of PCa patients, either automatically or semi-automatically. CAD systems can also calculate quantitative markers, which quantify the whole-body bone metastasis tumor burden of PCa patients accurately and quickly and give a standardized, objective result. SPECT and PET/CT CAD systems are potential tools for monitoring and quantifying bone metastasis lesions of PCa patients simply and accurately; their future clinical application in the diagnosis of PCa bone metastasis is both necessary and feasible.
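The quantitative markers such systems report are essentially ratios of segmented lesion volume to segmented skeleton (the bone scan index is the best-known example). A minimal sketch of that arithmetic on binary masks follows; the toy volumes and the exact burden definition are illustrative assumptions, not a specific CAD product's implementation.

```python
import numpy as np

def bone_burden_percent(lesion_mask, skeleton_mask):
    """Percentage of segmented skeleton voxels flagged as metastatic."""
    lesion_in_bone = np.logical_and(lesion_mask, skeleton_mask)
    return 100.0 * lesion_in_bone.sum() / skeleton_mask.sum()

# Toy 3D volumes standing in for whole-body skeleton and hotspot segmentations
rng = np.random.default_rng(0)
skeleton = rng.random((64, 64, 64)) < 0.15   # ~15% of voxels are bone
lesions = np.zeros_like(skeleton)
lesions[20:26, 20:26, 20:26] = True          # one synthetic hotspot
print(f"Tumor burden: {bone_burden_percent(lesions, skeleton):.2f}% of skeleton")
```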
Affiliation(s)
- Zhao Chen: Department of Nuclear Medicine, Peking University First Hospital, Xicheng District, Beijing 100034, China
- Xueqi Chen: Department of Nuclear Medicine, Peking University First Hospital, Xicheng District, Beijing 100034, China
- Rongfu Wang: Department of Nuclear Medicine, Peking University First Hospital, Xicheng District, Beijing 100034, China; Department of Nuclear Medicine, Peking University International Hospital, Changping District, Beijing 102206, China
5. Yi Z, Salem F, Menon MC, Keung K, Xi C, Hultin S, Haroon Al Rasheed MR, Li L, Su F, Sun Z, Wei C, Huang W, Fredericks S, Lin Q, Banu K, Wong G, Rogers NM, Farouk S, Cravedi P, Shingde M, Smith RN, Rosales IA, O'Connell PJ, Colvin RB, Murphy B, Zhang W. Deep learning identified pathological abnormalities predictive of graft loss in kidney transplant biopsies. Kidney Int 2022; 101:288-298. [PMID: 34757124] [PMCID: PMC10285669] [DOI: 10.1016/j.kint.2021.09.028]
Abstract
Interstitial fibrosis, tubular atrophy, and inflammation are major contributors to kidney allograft failure. Here we sought an objective, quantitative pathological assessment of these lesions to improve predictive utility and constructed a deep-learning-based pipeline recognizing normal vs. abnormal kidney tissue compartments and mononuclear leukocyte infiltrates. Periodic acid-Schiff-stained slides of transplant biopsies (60 training and 33 testing) were used to quantify pathological lesions specific for interstitium, tubules and mononuclear leukocyte infiltration. The pipeline was applied to the whole slide images from 789 transplant biopsies (478 baseline [pre-implantation] and 311 post-transplant 12-month protocol biopsies) in two independent cohorts (GoCAR: 404 patients, AUSCAD: 212 patients) of transplant recipients to correlate composite lesion features with graft loss. Our model accurately recognized kidney tissue compartments and mononuclear leukocytes. The digital features significantly correlated with revised Banff 2007 scores but were more sensitive to subtle pathological changes below the thresholds in the Banff scores. The Interstitial and Tubular Abnormality Score (ITAS) in baseline samples was highly predictive of one-year graft loss, while a Composite Damage Score in 12-month post-transplant protocol biopsies predicted later graft loss. ITASs and Composite Damage Scores outperformed Banff scores or clinical predictors with superior graft loss prediction accuracy. High/intermediate risk groups stratified by ITASs or Composite Damage Scores also demonstrated significantly higher incidence of estimated glomerular filtration rate decline and subsequent graft damage. Thus, our deep-learning approach accurately detected and quantified pathological lesions from baseline or post-transplant biopsies and demonstrated superior ability for prediction of post-transplant graft loss with potential application as a prevention, risk stratification or monitoring tool.
Affiliation(s)
- Zhengzi Yi: Renal Division, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Fadi Salem: Pathology Division, Department of Molecular and Cell Based Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Madhav C Menon: Renal Division, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA; Nephrology Division, Department of Medicine, Yale School of Medicine, New Haven, Connecticut, USA
- Karen Keung: Centre for Transplant and Renal Research, Westmead Institute for Medical Research, University of Sydney, Sydney, New South Wales, Australia; Department of Nephrology, Prince of Wales Hospital, Sydney, New South Wales, Australia
- Caixia Xi: Renal Division, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Sebastian Hultin: Centre for Transplant and Renal Research, Westmead Institute for Medical Research, University of Sydney, Sydney, New South Wales, Australia
- M Rizwan Haroon Al Rasheed: Pathology Division, Department of Molecular and Cell Based Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Li Li: Pathology Division, Department of Molecular and Cell Based Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Fei Su: Renal Division, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Zeguo Sun: Renal Division, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Chengguo Wei: Renal Division, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Weiqing Huang: Renal Division, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Samuel Fredericks: Renal Division, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Qisheng Lin: Nephrology Division, Department of Medicine, Yale School of Medicine, New Haven, Connecticut, USA
- Khadija Banu: Nephrology Division, Department of Medicine, Yale School of Medicine, New Haven, Connecticut, USA
- Germaine Wong: Centre for Transplant and Renal Research, Westmead Institute for Medical Research, University of Sydney, Sydney, New South Wales, Australia
- Natasha M Rogers: Centre for Transplant and Renal Research, Westmead Institute for Medical Research, University of Sydney, Sydney, New South Wales, Australia
- Samira Farouk: Renal Division, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Paolo Cravedi: Renal Division, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Meena Shingde: Centre for Transplant and Renal Research, Westmead Institute for Medical Research, University of Sydney, Sydney, New South Wales, Australia
- R Neal Smith: Department of Pathology, Massachusetts General Hospital, Boston, Massachusetts, USA; Department of Pathology, Harvard Medical School, Boston, Massachusetts, USA
- Ivy A Rosales: Department of Pathology, Massachusetts General Hospital, Boston, Massachusetts, USA; Department of Pathology, Harvard Medical School, Boston, Massachusetts, USA
- Philip J O'Connell: Centre for Transplant and Renal Research, Westmead Institute for Medical Research, University of Sydney, Sydney, New South Wales, Australia; Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia; Department of Nephrology, Westmead Hospital, Sydney, New South Wales, Australia
- Robert B Colvin: Department of Pathology, Massachusetts General Hospital, Boston, Massachusetts, USA; Department of Pathology, Harvard Medical School, Boston, Massachusetts, USA
- Barbara Murphy: Renal Division, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Weijia Zhang: Renal Division, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
6. Papandrianos N, Papageorgiou E. Automatic Diagnosis of Coronary Artery Disease in SPECT Myocardial Perfusion Imaging Employing Deep Learning. Appl Sci 2021; 11:6362. [DOI: 10.3390/app11146362]
Abstract
Focusing on coronary artery disease (CAD) patients, this paper addresses the automatic diagnosis of ischemia or infarction from single-photon emission computed tomography (SPECT; Siemens Symbia S series) myocardial perfusion imaging (MPI) scans and investigates the capabilities of deep learning and convolutional neural networks (CNNs). Given the wide applicability of deep learning in medical image classification, a robust CNN model, whose architecture was previously determined in nuclear image analysis, is introduced to recognize myocardial perfusion images by extracting insightful image features and using them for classification. In addition, a deep learning classification approach using transfer learning is implemented to classify cardiovascular images from SPECT MPI scans as normal or abnormal (ischemia or infarction). The present work is differentiated from other studies in nuclear cardiology by its use of SPECT MPI images. To address the two-class classification problem of CAD diagnosis with adequate accuracy, simple, fast and efficient CNN architectures were built through a CNN exploration process and then employed to identify the CAD diagnosis category, demonstrating their generalization capabilities. The results revealed that the applied methods are sufficiently accurate and able to differentiate infarction or ischemia from healthy patients (overall classification accuracy = 93.47% ± 2.81%, AUC score = 0.936). To strengthen these findings, the proposed deep learning approaches were compared with other popular state-of-the-art CNN architectures on the same dataset. The prediction results show the efficacy of the new deep learning architecture for CAD diagnosis from SPECT MPI scans over the existing approaches in nuclear medicine.
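The transfer-learning arm described above reuses an ImageNet-pretrained backbone and trains only a new classification head on the SPECT MPI images. A minimal Keras sketch of that pattern follows; the backbone choice (MobileNetV2), input size and training settings are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf

# Frozen ImageNet backbone; only the new classification head is trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. abnormal (ischemia/infarction)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])

# train_ds / val_ds would be tf.data pipelines of preprocessed SPECT MPI images:
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```

Once the head converges, a common refinement is to unfreeze the top backbone layers and fine-tune at a lower learning rate.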
Affiliation(s)
- Nikolaos Papandrianos: Department of Energy Systems, Faculty of Technology, University of Thessaly, Geopolis Campus, Larissa-Trikala Ring Road, 41500 Larissa, Greece
- Elpiniki Papageorgiou: Department of Energy Systems, Faculty of Technology, University of Thessaly, Geopolis Campus, Larissa-Trikala Ring Road, 41500 Larissa, Greece
7. Mazlan AU, Sahabudin NAB, Remli MA, Ismail NSN, Mohamad MS, Warif NBA. Supervised and Unsupervised Machine Learning for Cancer Classification: Recent Development. 2021 IEEE International Conference on Automatic Control & Intelligent Systems (I2CACIS) 2021. [DOI: 10.1109/i2cacis52118.2021.9495888]
8. Pennig L, Shahzad R, Caldeira L, Lennartz S, Thiele F, Goertz L, Zopfs D, Meißner AK, Fürtjes G, Perkuhn M, Kabbasch C, Grau S, Borggrefe J, Laukamp KR. Automated Detection and Segmentation of Brain Metastases in Malignant Melanoma: Evaluation of a Dedicated Deep Learning Model. AJNR Am J Neuroradiol 2021; 42:655-662. [PMID: 33541907] [DOI: 10.3174/ajnr.a6982]
Abstract
BACKGROUND AND PURPOSE Malignant melanoma is an aggressive skin cancer in which brain metastases are common. Our aim was to establish and evaluate a deep learning model for fully automated detection and segmentation of brain metastases in patients with malignant melanoma using clinical routine MR imaging. MATERIALS AND METHODS Sixty-nine patients with melanoma with a total of 135 brain metastases at initial diagnosis and available multiparametric MR imaging datasets (T1-/T2-weighted, T1-weighted gadolinium contrast-enhanced, FLAIR) were included. A previously established deep learning model architecture (3D convolutional neural network; DeepMedic) simultaneously operating on the aforementioned MR images was trained on a cohort of 55 patients with 103 metastases using 5-fold cross-validation. The efficacy of the deep learning model was evaluated on an independent test set consisting of 14 patients with 32 metastases. Voxelwise manual segmentations of metastases on T1-weighted gadolinium contrast-enhanced images, performed by 2 radiologists in consensus, served as the ground truth. RESULTS After training, the deep learning model correctly detected 28 of 32 brain metastases (mean volume, 1.0 [SD, 2.4] cm3) in the test cohort (sensitivity of 88%), with 0.71 false-positive findings per scan. Compared with the ground truth, automated segmentations achieved a median Dice similarity coefficient of 0.75. CONCLUSIONS Deep learning-based automated detection and segmentation of brain metastases in malignant melanoma yields high detection and segmentation accuracy, with fewer than 1 false-positive finding per scan.
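Segmentation quality here is summarized by the Dice similarity coefficient between the automated and the consensus manual voxel masks. For reference, a plain-NumPy sketch of that metric follows; the toy masks are invented for illustration.

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient: 2*|A intersect B| / (|A| + |B|) for boolean masks."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy 3D masks standing in for a metastasis segmentation and its ground truth
truth = np.zeros((32, 32, 32), dtype=bool)
truth[10:20, 10:20, 10:20] = True
pred = np.zeros_like(truth)
pred[12:20, 10:20, 10:20] = True           # slightly undersegmented prediction
print(f"Dice = {dice(pred, truth):.2f}")   # ~0.89 for this overlap
```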
Affiliation(s)
- L Pennig: Institute for Diagnostic and Interventional Radiology (L.P., R.S., L.C., S.L., F.T., D.Z., M.P., C.K., J.B., K.R.L.)
- R Shahzad: Institute for Diagnostic and Interventional Radiology (L.P., R.S., L.C., S.L., F.T., D.Z., M.P., C.K., J.B., K.R.L.); Philips Innovative Technologies (R.S., F.T., M.P.), Aachen, Germany
- L Caldeira: Institute for Diagnostic and Interventional Radiology (L.P., R.S., L.C., S.L., F.T., D.Z., M.P., C.K., J.B., K.R.L.)
- S Lennartz: Institute for Diagnostic and Interventional Radiology (L.P., R.S., L.C., S.L., F.T., D.Z., M.P., C.K., J.B., K.R.L.)
- F Thiele: Institute for Diagnostic and Interventional Radiology (L.P., R.S., L.C., S.L., F.T., D.Z., M.P., C.K., J.B., K.R.L.); Philips Innovative Technologies (R.S., F.T., M.P.), Aachen, Germany
- L Goertz: Center for Neurosurgery (L.G., G.F., S.G.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- D Zopfs: Institute for Diagnostic and Interventional Radiology (L.P., R.S., L.C., S.L., F.T., D.Z., M.P., C.K., J.B., K.R.L.)
- A-K Meißner: Department of Stereotaxy and Functional Neurosurgery (A.-K.M., G.F.), Center for Neurosurgery, University Hospital Cologne, Cologne, Germany
- G Fürtjes: Center for Neurosurgery (L.G., G.F., S.G.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Department of Stereotaxy and Functional Neurosurgery (A.-K.M., G.F.), Center for Neurosurgery, University Hospital Cologne, Cologne, Germany
- M Perkuhn: Institute for Diagnostic and Interventional Radiology (L.P., R.S., L.C., S.L., F.T., D.Z., M.P., C.K., J.B., K.R.L.); Philips Innovative Technologies (R.S., F.T., M.P.), Aachen, Germany
- C Kabbasch: Institute for Diagnostic and Interventional Radiology (L.P., R.S., L.C., S.L., F.T., D.Z., M.P., C.K., J.B., K.R.L.)
- S Grau: Center for Neurosurgery (L.G., G.F., S.G.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- J Borggrefe: Institute for Diagnostic and Interventional Radiology (L.P., R.S., L.C., S.L., F.T., D.Z., M.P., C.K., J.B., K.R.L.)
- K R Laukamp: Institute for Diagnostic and Interventional Radiology (L.P., R.S., L.C., S.L., F.T., D.Z., M.P., C.K., J.B., K.R.L.); Department of Radiology (K.R.L.), University Hospitals Cleveland Medical Center, Cleveland, Ohio; Department of Radiology (K.R.L.), Case Western Reserve University, Cleveland, Ohio
9. Gillies RJ, Schabath MB. Radiomics Improves Cancer Screening and Early Detection. Cancer Epidemiol Biomarkers Prev 2020; 29:2556-2567. [PMID: 32917666] [DOI: 10.1158/1055-9965.epi-20-0075]
Abstract
Imaging is a key technology in the early detection of cancers, including X-ray mammography, low-dose CT for lung cancer, and optical imaging for skin, esophageal, or colorectal cancers. Historically, imaging information in early detection schemas was assessed qualitatively. However, the last decade has seen increased development of computerized tools that convert images into quantitative mineable data (radiomics), and their subsequent analysis with artificial intelligence (AI). These tools are improving the diagnostic accuracy of early lesions to define risk and to classify malignant/aggressive versus benign/indolent disease. The first section of this review briefly describes the various imaging modalities and their use as primary or secondary screens in an early detection pipeline. The second section describes specific use cases to illustrate the breadth of imaging modalities as well as the benefits of quantitative image analytics. These include optical (skin cancer), X-ray CT (pancreatic and lung cancer), X-ray mammography (breast cancer), multiparametric MRI (breast and prostate cancer), PET (pancreatic cancer), and ultrasound elastography (liver cancer). Finally, we discuss the inexorable improvements in radiomics that enable more robust classifier models, as well as the significant limitations to this development, including limited access to well-annotated databases and to biological descriptors of the imaged feature data.
Affiliation(s)
- Robert J Gillies: Department of Cancer Physiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida; Department of Radiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida
- Matthew B Schabath: Department of Cancer Epidemiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida; Department of Thoracic Oncology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida
10. Papandrianos N, Papageorgiou E, Anagnostis A, Papageorgiou K. Bone metastasis classification using whole body images from prostate cancer patients based on convolutional neural networks application. PLoS One 2020; 15:e0237213. [PMID: 32797099] [PMCID: PMC7428190] [DOI: 10.1371/journal.pone.0237213]
Abstract
Bone metastasis is one of the most frequent complications of prostate cancer; scintigraphy imaging is particularly important for its clinical diagnosis. To date, minimal research has been conducted on the application of machine learning, with emphasis on modern, efficient convolutional neural network (CNN) algorithms, to the diagnosis of prostate cancer metastasis from bone scintigraphy images. The outstanding capabilities of deep learning, machine learning's groundbreaking technological advancement, have not yet been fully investigated for computer-aided diagnosis systems in medical image analysis, such as the problem of bone metastasis classification in whole-body scans. In particular, CNNs are gaining great attention due to their ability to recognize complex visual patterns, in the same way as human perception operates. Considering these enhancements in the field of deep learning, a set of simple, fast and accurate CNN architectures designed for the classification of metastatic prostate cancer in bones is explored. This research study has a twofold goal: to create and to demonstrate a set of simple but robust CNN models for the automatic classification of whole-body scans into two categories, malignant (bone metastasis) or healthy, using solely the scans at the input level. Through a meticulous exploration of CNN hyper-parameter selection and fine-tuning, the best architecture is selected with respect to classification accuracy. Thus, a CNN model with improved classification capabilities for bone metastasis diagnosis is produced, using bone scans from prostate cancer patients. The achieved classification testing accuracy is 97.38%, and the average sensitivity is approximately 95.8%. Finally, the best-performing CNN method is compared with other popular and well-known CNN architectures used in medical imaging, like VGG16, ResNet50, GoogleNet and MobileNet. The classification results show that the proposed CNN-based approach outperforms these popular CNN methods in nuclear medicine for metastatic prostate cancer diagnosis in bones.
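The "simple but robust" models described above are small convolution/pooling stacks trained from scratch on the whole-body scans rather than large pretrained networks. A hedged Keras sketch of such a baseline follows; the filter counts, input size and binary head are illustrative assumptions, not the paper's tuned hyper-parameters.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_bone_scan_cnn(input_shape=(256, 128, 1)):
    """Small from-scratch CNN: three conv/pool blocks plus a dense head."""
    return tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # malignant (metastasis) vs. healthy
    ])

model = build_bone_scan_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()  # small stack: cheap to train, easy to sweep hyper-parameters
```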
Affiliation(s)
- Elpiniki Papageorgiou: Faculty of Technology, Dept. of Energy Systems, University of Thessaly, Geopolis Campus, Larisa, Greece; Institute for Bio-economy and Agri-technology, Center for Research and Technology Hellas, Greece
- Athanasios Anagnostis: Institute for Bio-economy and Agri-technology, Center for Research and Technology Hellas, Greece; Department of Computer Science and Telecommunications, University of Thessaly, Lamia, Greece
11. Meijering E. A bird's-eye view of deep learning in bioimage analysis. Comput Struct Biotechnol J 2020; 18:2312-2325. [PMID: 32994890] [PMCID: PMC7494605] [DOI: 10.1016/j.csbj.2020.08.003]
Abstract
Deep learning of artificial neural networks has become the de facto standard approach to solving data analysis problems in virtually all fields of science and engineering. Also in biology and medicine, deep learning technologies are fundamentally transforming how we acquire, process, analyze, and interpret data, with potentially far-reaching consequences for healthcare. In this mini-review, we take a bird's-eye view of the past, present, and future developments of deep learning, starting from science at large, through biomedical imaging, to bioimage analysis in particular.
Affiliation(s)
- Erik Meijering: School of Computer Science and Engineering & Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
12. Papandrianos N, Papageorgiou E, Anagnostis A, Papageorgiou K. Efficient Bone Metastasis Diagnosis in Bone Scintigraphy Using a Fast Convolutional Neural Network Architecture. Diagnostics (Basel) 2020; 10:532. [PMID: 32751433] [PMCID: PMC7459937] [DOI: 10.3390/diagnostics10080532]
Abstract
(1) Background: Bone metastasis is among the diseases that frequently appear in breast, lung and prostate cancer; the most popular imaging method for screening of metastasis is bone scintigraphy, which presents very high sensitivity (95%). In the context of image recognition, this work investigates convolutional neural networks (CNNs), an efficient type of deep neural network, to address the diagnosis problem of bone metastasis in prostate cancer patients; (2) Methods: As a deep learning model, a CNN is able to extract features of an image and use them to classify it, and it is widely applied in medical image classification. This study is devoted to developing a robust CNN model that efficiently and quickly classifies bone scintigraphy images of patients suffering from prostate cancer, by determining whether or not they have developed metastasis. The retrospective study included 778 sequential male patients who underwent whole-body bone scans. A nuclear medicine physician classified all the cases into three categories, (a) benign, (b) malignant and (c) degenerative, which were used as the gold standard; (3) Results: An efficient and fast CNN architecture was built based on a CNN exploration process, using whole-body scintigraphy images for bone metastasis diagnosis, and achieved high prediction accuracy. The results showed that the method is sufficiently precise in differentiating a bone metastasis case from either degenerative changes or normal tissue (overall classification accuracy = 91.61% ± 2.46%). The accuracy of identifying normal, malignant and degenerative cases in prostate patients was 91.3%, 94.7% and 88.6%, respectively. To strengthen the outcomes of this study, the authors further compared the best-performing CNN method with other popular CNN architectures for medical imaging, like ResNet50, VGG16, GoogleNet and MobileNet, as reported in the literature; and (4) Conclusions: The remarkable outcome of this study is the method's ability to provide an easier and more precise interpretation of whole-body images, with effects on diagnostic accuracy and on decisions about the treatment to be applied.
Affiliation(s)
- Elpiniki Papageorgiou: Department of Energy Systems, Faculty of Technology, University of Thessaly, Geopolis Campus, Larissa-Trikala Ring Road, 41500 Larissa, Greece
- Athanasios Anagnostis: Center for Research and Technology—Hellas (CERTH), Institute for Bio-Economy and Agri-Technology (iBO), 57001 Thessaloniki, Greece; Department of Computer Science and Telecommunications, University of Thessaly, 35131 Lamia, Greece
13. Pennig L, Hoyer UCI, Goertz L, Shahzad R, Persigehl T, Thiele F, Perkuhn M, Ruge MI, Kabbasch C, Borggrefe J, Caldeira L, Laukamp KR. Primary Central Nervous System Lymphoma: Clinical Evaluation of Automated Segmentation on Multiparametric MRI Using Deep Learning. J Magn Reson Imaging 2020; 53:259-268. [DOI: 10.1002/jmri.27288]
Affiliation(s)
- Lenhard Pennig: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Ulrike Cornelia Isabel Hoyer: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Lukas Goertz: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Department of General Neurosurgery, Center for Neurosurgery, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Rahil Shahzad: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Philips GmbH Innovative Technologies, Aachen, Germany
- Thorsten Persigehl: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Frank Thiele: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Philips GmbH Innovative Technologies, Aachen, Germany
- Michael Perkuhn: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Philips GmbH Innovative Technologies, Aachen, Germany
- Maximilian I. Ruge: Department of Stereotaxy and Functional Neurosurgery, Center for Neurosurgery, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Christoph Kabbasch: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Jan Borggrefe: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Liliana Caldeira: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Kai Roman Laukamp: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA; Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA
14. Laukamp KR, Pennig L, Thiele F, Reimer R, Görtz L, Shakirin G, Zopfs D, Timmer M, Perkuhn M, Borggrefe J. Automated Meningioma Segmentation in Multiparametric MRI: Comparable Effectiveness of a Deep Learning Model and Manual Segmentation. Clin Neuroradiol 2020; 31:357-366. [PMID: 32060575] [DOI: 10.1007/s00062-020-00884-4]
Abstract
PURPOSE Volumetric assessment of meningiomas represents a valuable tool for treatment planning and evaluation of tumor growth as it enables a more precise assessment of tumor size than conventional diameter methods. This study established a dedicated meningioma deep learning model based on routine magnetic resonance imaging (MRI) data and evaluated its performance for automated tumor segmentation. METHODS The MRI datasets included T1-weighted/T2-weighted, T1-weighted contrast-enhanced (T1CE) and FLAIR images of 126 patients with intracranial meningiomas (grade I: 97, grade II: 29). For automated segmentation, an established deep learning model architecture (3D deep convolutional neural network, DeepMedic, BioMedIA) operating on all four MR sequences was used. Segmentation included the following two components: (i) contrast-enhancing tumor volume in T1CE and (ii) total lesion volume (union of lesion volume in T1CE and FLAIR, including solid tumor parts and surrounding edema). Preprocessing of imaging data included registration, skull stripping, resampling, and normalization. After training of the deep learning model using manual segmentations by 2 independent readers from 70 patients (training group), the algorithm was evaluated on 56 patients (validation group) by comparing automated to ground-truth manual segmentations, which were performed by 2 experienced readers in consensus. RESULTS Of the 56 meningiomas in the validation group, 55 were detected by the deep learning model. In these patients, the comparison of the deep learning model and manual segmentations revealed average Dice coefficients of 0.91 ± 0.08 for contrast-enhancing tumor volume and 0.82 ± 0.12 for total lesion volume. In the training group, interreader variabilities of the 2 manual readers were 0.92 ± 0.07 for contrast-enhancing tumor and 0.88 ± 0.05 for total lesion volume. CONCLUSION Deep learning-based automated segmentation yielded high segmentation accuracy, comparable to manual interreader variability.
Affiliation(s)
- Kai Roman Laukamp: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937 Cologne, Germany; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA; Department of Radiology, Case Western Reserve University, Cleveland, OH, USA
- Lenhard Pennig: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937 Cologne, Germany
- Frank Thiele: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937 Cologne, Germany; Philips GmbH Innovative Technologies, Aachen, Germany
- Robert Reimer: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937 Cologne, Germany
- Lukas Görtz: Center for Neurosurgery, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Georgy Shakirin: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937 Cologne, Germany; Philips GmbH Innovative Technologies, Aachen, Germany
- David Zopfs: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937 Cologne, Germany
- Marco Timmer: Center for Neurosurgery, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Michael Perkuhn: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937 Cologne, Germany; Philips GmbH Innovative Technologies, Aachen, Germany
- Jan Borggrefe: Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937 Cologne, Germany
15. Fernandez-Maloigne C, Guillevin R. L'intelligence artificielle au service de l'imagerie et de la santé des femmes [Artificial intelligence in the service of women's imaging and health]. Imagerie de la Femme 2019. [DOI: 10.1016/j.femme.2019.09.001]
16. Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09788-3]
17. Landini L. The Future of Medical Imaging. Curr Pharm Des 2019; 24:5487-5488. [DOI: 10.2174/138161282446190426115124]
Affiliation(s)
- Luigi Landini: Department of Information Engineering, University of Pisa, 56126 Pisa, Italy; Fondazione G. Monasterio, CNR-Regione Toscana, Via Moruzzi 1, 56124 Pisa, Italy
18. William W, Ware A, Basaza-Ejiri AH, Obungoloch J. Cervical cancer classification from Pap-smears using an enhanced fuzzy C-means algorithm. Inform Med Unlocked 2019. [DOI: 10.1016/j.imu.2019.02.001]
19. Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2018; 29:102-127. [PMID: 30553609] [DOI: 10.1016/j.zemedi.2018.11.002]
Abstract
What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, and this potential is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; and (iii) provide a starting point for people interested in experimenting with, and perhaps contributing to, the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.
Affiliation(s)
- Alexander Selvikvåg Lundervold: Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Department of Computing, Mathematics and Physics, Western Norway University of Applied Sciences, Norway
- Arvid Lundervold: Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Neuroinformatics and Image Analysis Laboratory, Department of Biomedicine, University of Bergen, Norway; Department of Health and Functioning, Western Norway University of Applied Sciences, Norway
20. Hiraiwa T, Ariji Y, Fukuda M, Kise Y, Nakata K, Katsumata A, Fujita H, Ariji E. A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography. Dentomaxillofac Radiol 2018; 48:20180218. [PMID: 30379570] [DOI: 10.1259/dmfr.20180218]
Abstract
OBJECTIVES: The distal root of the mandibular first molar occasionally has an extra root, which can directly affect the outcome of endodontic therapy. In this study, we examined the diagnostic performance of a deep learning system for classification of the root morphology of mandibular first molars on panoramic radiographs. Dental cone-beam CT (CBCT) was used as the gold standard. METHODS: CBCT images and panoramic radiographs of 760 mandibular first molars from 400 patients who had not undergone root canal treatment were analyzed. Distal roots were examined on CBCT images to determine the presence of a single or extra root. Image patches of the roots were segmented from panoramic radiographs and applied to a deep learning system, and its diagnostic performance in the classification of root morphology was examined. RESULTS: Extra roots were observed in 21.4% of distal roots on CBCT images. The deep learning system had a diagnostic accuracy of 86.9% for determining whether distal roots were single or had extra roots. CONCLUSIONS: The deep learning system showed high accuracy in the differential diagnosis of a single versus an extra root in the distal roots of mandibular first molars.
Affiliation(s)
- Teruhiko Hiraiwa: Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Yoshiko Ariji: Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Motoki Fukuda: Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Yoshitaka Kise: Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Kazuhiko Nakata: Department of Endodontics, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Akitoshi Katsumata: Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan
- Hiroshi Fujita: Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, Gifu, Japan
- Eiichiro Ariji: Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
21. Ariji Y, Fukuda M, Kise Y, Nozawa M, Yanashita Y, Fujita H, Katsumata A, Ariji E. Contrast-enhanced computed tomography image assessment of cervical lymph node metastasis in patients with oral cancer by using a deep learning system of artificial intelligence. Oral Surg Oral Med Oral Pathol Oral Radiol 2018; 127:458-463. [PMID: 30497907] [DOI: 10.1016/j.oooo.2018.10.002]
Abstract
OBJECTIVE Although the deep learning system has been applied to interpretation of medical images, its application to the diagnosis of cervical lymph nodes in patients with oral cancer has not yet been reported. The purpose of this study was to evaluate the performance of deep learning image classification for diagnosis of lymph node metastasis. STUDY DESIGN The imaging data used for evaluation consisted of computed tomography (CT) images of 127 histologically proven positive cervical lymph nodes and 314 histologically proven negative lymph nodes from 45 patients with oral squamous cell carcinoma. The performance of a deep learning image classification system for the diagnosis of lymph node metastasis on CT images was compared with the diagnostic interpretations of 2 experienced radiologists by using the Mann-Whitney U test and χ2 analysis. RESULTS The performance of the deep learning image classification system resulted in accuracy of 78.2%, sensitivity of 75.4%, specificity of 81.0%, positive predictive value of 79.9%, negative predictive value of 77.1%, and area under the receiver operating characteristic curve of 0.80. These values were not significantly different from those found by the radiologists. CONCLUSIONS The deep learning system yielded diagnostic results similar to those of the radiologists, which suggests that this system may be valuable for diagnostic support.
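All of the reported measures derive from the 2 × 2 confusion matrix of the system's node-level calls against histology. A small sketch of that arithmetic follows; the counts are invented for illustration and are not the study's 127 positive / 314 negative node data.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary diagnostic-test metrics from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Invented example counts for a node-level classifier
for name, value in diagnostic_metrics(tp=90, fp=20, tn=160, fn=30).items():
    print(f"{name}: {value:.1%}")
```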
Affiliation(s)
- Yoshiko Ariji: Associate Professor, Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Motoki Fukuda: Instructor, Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Yoshitaka Kise: Assistant Professor, Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Michihito Nozawa: Instructor, Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Yudai Yanashita: Postgraduate Student, Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, Gifu, Japan
- Hiroshi Fujita: Professor, Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, Gifu, Japan
- Akitoshi Katsumata: Professor, Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan
- Eiichiro Ariji: Associate Professor, Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
22. Fully automated detection and segmentation of meningiomas using deep learning on routine multiparametric MRI. Eur Radiol 2018; 29:124-132. [PMID: 29943184] [PMCID: PMC6291436] [DOI: 10.1007/s00330-018-5595-8]
Abstract
Objectives Magnetic resonance imaging (MRI) is the method of choice for imaging meningiomas. Volumetric assessment of meningiomas is highly relevant for therapy planning and monitoring. We used a multiparametric deep-learning model (DLM) on routine MRI data, including images from diverse referring institutions, to investigate DLM performance in the automated detection and segmentation of meningiomas in comparison to manual segmentations. Methods We included 56 of 136 consecutive preoperative MRI datasets [T1/T2-weighted, T1-weighted contrast-enhanced (T1CE), FLAIR] of meningiomas that were treated surgically at the University Hospital Cologne and graded histologically as tumour grade I (n = 38) or grade II (n = 18). The DLM was trained on an independent dataset of 249 glioma cases and segmented different tumour classes as defined in the brain tumour image segmentation benchmark (BRATS benchmark). The DLM was based on the DeepMedic architecture. Results were compared to manual segmentations by two radiologists in a consensus reading in FLAIR and T1CE. Results The DLM detected meningiomas in 55 of 56 cases. Furthermore, automated segmentations correlated strongly with manual segmentations: average Dice coefficients were 0.81 ± 0.10 (range, 0.46-0.93) for the total tumour volume (union of tumour volume in FLAIR and T1CE) and 0.78 ± 0.19 (range, 0.27-0.95) for contrast-enhancing tumour volume in T1CE. Conclusions The DLM yielded accurate automated detection and segmentation of meningioma tissue despite diverse scanner data and may thereby improve and facilitate therapy planning as well as monitoring of this highly frequent tumour entity. Key Points: • Deep learning allows for accurate meningioma detection and segmentation. • Deep learning helps clinicians to assess patients with meningiomas. • Meningioma monitoring and treatment planning can be improved.
23. Quantitative Phase Imaging for Label-Free Analysis of Cancer Cells—Focus on Digital Holographic Microscopy. Appl Sci (Basel) 2018. [DOI: 10.3390/app8071027]