51
Allen B, Dreyer K, Stibolt R, Agarwal S, Coombs L, Treml C, Elkholy M, Brink L, Wald C. Evaluation and Real-World Performance Monitoring of Artificial Intelligence Models in Clinical Practice: Try It, Buy It, Check It. J Am Coll Radiol 2021; 18:1489-1496. [PMID: 34599876] [DOI: 10.1016/j.jacr.2021.08.022]
Abstract
The pace of regulatory clearance of artificial intelligence (AI) algorithms for radiology continues to accelerate, and numerous algorithms are becoming available for use in clinical practice. End users of AI in radiology should be aware that AI algorithms may not work as expected when used beyond the institutions in which they were trained, and model performance may degrade over time. In this article, we discuss why regulatory clearance alone may not be enough to ensure AI will be safe and effective in all radiological practices, and we review strategies and available resources for evaluating AI models before clinical use and for monitoring their performance to ensure efficacy and patient safety.
Affiliation(s)
- Bibb Allen
- Chief Medical Officer ACR Data Science Institute; and Department of Radiology, Grandview Medical Center, Birmingham, Alabama.
- Keith Dreyer
- Chief Science Officer ACR Data Science Institute; and Massachusetts General Hospital, Boston, Massachusetts
- Robert Stibolt
- Diagnostic Radiology, Brookwood Baptist Health, Birmingham, Alabama
- Chris Treml
- ACR Data Science Institute, Reston, Virginia
- Laura Brink
- ACR Data Science Institute, Reston, Virginia
52
Tang MCS, Teoh SS, Ibrahim H, Embong Z. Neovascularization Detection and Localization in Fundus Images Using Deep Learning. Sensors (Basel) 2021; 21:5327. [PMID: 34450766] [PMCID: PMC8399593] [DOI: 10.3390/s21165327]
Abstract
Proliferative Diabetic Retinopathy (PDR) is a severe retinal disease that threatens diabetic patients. It is characterized by neovascularization in the retina and the optic disk. Its clinical features include intense retinal neovascularization and fibrous spread, which lead to visual distortion if left uncontrolled. Different image processing techniques have been proposed to detect and diagnose neovascularization from fundus images. Recently, deep learning methods have become popular for neovascularization detection owing to advances in artificial intelligence for biomedical image processing. This paper presents a semantic segmentation convolutional neural network architecture for neovascularization detection. First, image pre-processing steps were applied to enhance the fundus images. Then, the images were divided into small patches, forming a training set, a validation set, and a testing set. A semantic segmentation convolutional neural network was designed and trained to detect the neovascularization regions on the images. Finally, the network was tested using the testing set for performance evaluation. The proposed model is entirely automated in detecting and localizing neovascularization lesions, which is not possible with previously published methods. Evaluation results showed that the model achieved accuracy, sensitivity, specificity, precision, Jaccard similarity, and Dice similarity of 0.9948, 0.8772, 0.9976, 0.8696, 0.7643, and 0.8466, respectively. We demonstrated that this model could outperform other convolutional neural network models in neovascularization detection.
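All of the figures reported in this abstract (accuracy, sensitivity, specificity, precision, Jaccard similarity, Dice similarity) are standard pixel-wise measures derived from the confusion counts of a predicted mask against a ground-truth mask. A minimal NumPy sketch (not the authors' code; the arrays are purely illustrative):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute pixel-wise metrics for a binary segmentation mask."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)       # true positives
    tn = np.sum(~pred & ~truth)     # true negatives
    fp = np.sum(pred & ~truth)      # false positives
    fn = np.sum(~pred & truth)      # false negatives
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
        "jaccard":     tp / (tp + fp + fn),
        "dice":        2 * tp / (2 * tp + fp + fn),
    }

# Toy 2x2 example
pred  = np.array([[1, 0], [1, 1]])
truth = np.array([[1, 0], [0, 1]])
m = segmentation_metrics(pred, truth)
```

Note that Jaccard and Dice are monotonically related (Dice = 2J / (1 + J)), so the two scores always rank models in the same order.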
Affiliation(s)
- Michael Chi Seng Tang
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal 14300, Malaysia
- Soo Siang Teoh
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal 14300, Malaysia
- Haidi Ibrahim
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal 14300, Malaysia
- Zunaina Embong
- Department of Ophthalmology, School of Medical Sciences, Health Campus, Universiti Sains Malaysia, Kubang Kerian 16150, Malaysia
53
Zhang G, Yang Z, Huo B, Chai S, Jiang S. Multiorgan segmentation from partially labeled datasets with conditional nnU-Net. Comput Biol Med 2021; 136:104658. [PMID: 34311262] [DOI: 10.1016/j.compbiomed.2021.104658]
Abstract
Accurate and robust multiorgan abdominal CT segmentation plays a significant role in numerous clinical applications, such as therapy treatment planning and treatment delivery. Almost all existing segmentation networks rely on fully annotated data with strong supervision. However, producing fully annotated multiorgan CT data is both laborious and time-consuming. In comparison, massive partially labeled datasets are usually easily accessible. In this paper, we propose conditional nnU-Net trained on the union of partially labeled datasets for multiorgan segmentation. The deep model employs the state-of-the-art nnU-Net as the backbone and introduces a conditioning strategy by feeding auxiliary information into the decoder architecture as an additional input layer. This model leverages the prior conditional information to identify the organ class at the pixel-wise level and encourages recovery of the organs' spatial information. Furthermore, we adopt a deep supervision mechanism to refine the outputs at different scales and apply the combination of Dice loss and Focal loss to optimize the training model. Our proposed method is evaluated on seven publicly available datasets of the liver, pancreas, spleen and kidney, on which promising segmentation performance has been achieved. The proposed conditional nnU-Net breaks down the barriers between nonoverlapping labeled datasets and further alleviates the problem of data hunger in multiorgan segmentation.
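The Dice-plus-Focal objective mentioned above is a common combination for class-imbalanced segmentation: the Dice term optimizes overlap directly, while the focal term down-weights easy pixels. A hedged NumPy sketch of the two terms (not the authors' nnU-Net implementation; the weights and gamma are illustrative defaults):

```python
import numpy as np

def dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss on foreground probabilities (0 = perfect overlap)."""
    inter = np.sum(prob * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(prob) + np.sum(target) + eps)

def focal_loss(prob, target, gamma=2.0, eps=1e-6):
    """Binary focal loss: (1 - p_t)^gamma down-weights well-classified pixels."""
    p_t = np.where(target == 1, prob, 1.0 - prob)  # probability of the true class
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps)))

def combined_loss(prob, target, w_dice=1.0, w_focal=1.0):
    """Weighted sum of Dice and focal terms, as used for training objectives."""
    return w_dice * dice_loss(prob, target) + w_focal * focal_loss(prob, target)

prob   = np.array([0.9, 0.2, 0.8, 0.1])  # predicted foreground probabilities
target = np.array([1.0, 0.0, 1.0, 0.0])  # ground-truth labels
loss = combined_loss(prob, target)
```

In a real pipeline both terms would be computed on the network's softmax output per class and summed over organs; the relative weights are a tunable design choice.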
Affiliation(s)
- Guobin Zhang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Zhiyong Yang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Bin Huo
- Department of Oncology, Tianjin Medical University Second Hospital, Tianjin, 300211, China
- Shude Chai
- Department of Oncology, Tianjin Medical University Second Hospital, Tianjin, 300211, China
- Shan Jiang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
54
Fehr J, Konigorski S, Olivier S, Gunda R, Surujdeen A, Gareta D, Smit T, Baisley K, Moodley S, Moosa Y, Hanekom W, Koole O, Ndung'u T, Pillay D, Grant AD, Siedner MJ, Lippert C, Wong EB. Computer-aided interpretation of chest radiography reveals the spectrum of tuberculosis in rural South Africa. NPJ Digit Med 2021; 4:106. [PMID: 34215836] [PMCID: PMC8253848] [DOI: 10.1038/s41746-021-00471-y]
Abstract
Computer-aided digital chest radiograph interpretation (CAD) can facilitate high-throughput screening for tuberculosis (TB), but its use in population-based active case-finding programs has been limited. In an HIV-endemic area in rural South Africa, we used a CAD algorithm (CAD4TBv5) to interpret digital chest x-rays (CXR) as part of a mobile health screening effort. Participants with TB symptoms or CAD4TBv5 score above the triaging threshold were referred for microbiological sputum assessment. During an initial pilot phase, a low CAD4TBv5 triaging threshold of 25 was selected to maximize TB case finding. We report the performance of CAD4TBv5 in screening 9,914 participants, 99 (1.0%) of whom were found to have microbiologically proven TB. CAD4TBv5 was able to identify TB cases at the same sensitivity but lower specificity as a blinded radiologist, whereas the next generation of the algorithm (CAD4TBv6) achieved comparable sensitivity and specificity to the radiologist. The CXRs of people with microbiologically confirmed TB spanned a range of lung field abnormality, including 19 (19.2%) cases deemed normal by the radiologist. HIV serostatus did not impact CAD4TB's performance. Notably, 78.8% of the TB cases identified during this population-based survey were asymptomatic and therefore triaged for sputum collection on the basis of CAD4TBv5 score alone. While CAD4TBv6 has the potential to replace radiologists for triaging CXRs in TB prevalence surveys, population-specific piloting is necessary to set the appropriate triaging thresholds. Further work on image analysis strategies is needed to identify radiologically subtle active TB.
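The triaging rule described here (refer for sputum testing if symptomatic or if the CAD score meets the threshold of 25) and the resulting sensitivity/specificity against microbiological confirmation can be sketched as follows; the threshold comes from the abstract, but the data arrays are invented for illustration and are not study values:

```python
import numpy as np

def triage(scores, symptomatic, threshold=25):
    """Refer a participant if symptomatic OR the CAD score meets the threshold."""
    return symptomatic | (scores >= threshold)

def sens_spec(referred, has_tb):
    """Sensitivity/specificity of the referral decision vs. confirmed TB."""
    tp = np.sum(referred & has_tb)
    fn = np.sum(~referred & has_tb)
    tn = np.sum(~referred & ~has_tb)
    fp = np.sum(referred & ~has_tb)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative participants: CAD scores, symptom status, confirmed TB status
scores      = np.array([10, 30, 80, 22, 60])
symptomatic = np.array([False, False, True, False, False])
has_tb      = np.array([False, True, True, True, False])
referred = triage(scores, symptomatic)
sens, spec = sens_spec(referred, has_tb)
```

Lowering the threshold moves the operating point toward higher sensitivity and lower specificity, which is exactly the trade-off the pilot phase tuned.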
Affiliation(s)
- Jana Fehr
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- Digital Health & Machine Learning, Hasso Plattner Institute for Digital Engineering, Berlin, Germany
- Stefan Konigorski
- Digital Health & Machine Learning, Hasso Plattner Institute for Digital Engineering, Berlin, Germany
- Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Stephen Olivier
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- Resign Gunda
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- School of Nursing and Public Health, College of Health Sciences, University of KwaZulu-Natal, Durban, South Africa
- Division of Infection and Immunity, University College London, London, UK
- Dickman Gareta
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- Theresa Smit
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- Kathy Baisley
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- London School of Hygiene & Tropical Medicine, London, UK
- Sashen Moodley
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- Yumna Moosa
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- Willem Hanekom
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- Division of Infection and Immunity, University College London, London, UK
- Olivier Koole
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- London School of Hygiene & Tropical Medicine, London, UK
- Thumbi Ndung'u
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- Division of Infection and Immunity, University College London, London, UK
- HIV Pathogenesis Programme, The Doris Duke Medical Research Institute, University of KwaZulu-Natal, Durban, South Africa
- Ragon Institute of MGH, MIT and Harvard University, Cambridge, MA, USA
- Max Planck Institute for Infection Biology, Berlin, Germany
- Deenan Pillay
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- Division of Infection and Immunity, University College London, London, UK
- Alison D Grant
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- School of Nursing and Public Health, College of Health Sciences, University of KwaZulu-Natal, Durban, South Africa
- London School of Hygiene & Tropical Medicine, London, UK
- School of Clinical Medicine, College of Health Sciences, University of KwaZulu-Natal, Durban, South Africa
- Mark J Siedner
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- School of Clinical Medicine, College of Health Sciences, University of KwaZulu-Natal, Durban, South Africa
- Harvard Medical School, Boston, MA, USA
- Division of Infectious Diseases, Massachusetts General Hospital, Boston, MA, USA
- Christoph Lippert
- Digital Health & Machine Learning, Hasso Plattner Institute for Digital Engineering, Berlin, Germany
- Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Emily B Wong
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- Harvard Medical School, Boston, MA, USA
- Division of Infectious Diseases, Massachusetts General Hospital, Boston, MA, USA
- Division of Infectious Diseases, University of Alabama at Birmingham, Birmingham, AL, USA
55
Fast fully automatic detection, classification and 3D reconstruction of pulmonary nodules in CT images by local image feature analysis. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102790]
56
Nomura Y, Hanaoka S, Nakao T, Hayashi N, Yoshikawa T, Miki S, Watadani T, Abe O. Performance changes due to differences in training data for cerebral aneurysm detection in head MR angiography images. Jpn J Radiol 2021; 39:1039-1048. [PMID: 34125368] [DOI: 10.1007/s11604-021-01153-1]
Abstract
PURPOSE The performance of computer-aided detection (CAD) software depends on the quality and quantity of the dataset used for machine learning. If the data characteristics in development and practical use are different, the performance of CAD software degrades. In this study, we investigated changes in detection performance due to differences in training data for cerebral aneurysm detection software in head magnetic resonance angiography images. MATERIALS AND METHODS We utilized three types of CAD software for cerebral aneurysm detection in MRA images, which were based on 3D local intensity structure analysis, graph-based features, and convolutional neural network. For each type of CAD software, we compared three types of training pattern, which were two types of training using single-site data and one type of training using multisite data. We also carried out internal and external evaluations. RESULTS In training using single-site data, the performance of CAD software largely and unpredictably fluctuated when the training dataset was changed. Training using multisite data did not show the lowest performance among the three training patterns for any CAD software and dataset. CONCLUSION The training of cerebral aneurysm detection software using data collected from multiple sites is desirable to ensure the stable performance of the software.
Affiliation(s)
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takeyuki Watadani
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
57
Peters AA, Decasper A, Munz J, Klaus J, Loebelenz LI, Hoffner MKM, Hourscht C, Heverhagen JT, Christe A, Ebner L. Performance of an AI based CAD system in solid lung nodule detection on chest phantom radiographs compared to radiology residents and fellow radiologists. J Thorac Dis 2021; 13:2728-2737. [PMID: 34164165] [PMCID: PMC8182550] [DOI: 10.21037/jtd-20-3522]
Abstract
Background Despite the decreasing relevance of chest radiography in lung cancer screening, it is still frequently used to assess for lung nodules. The aim of the current study was to determine the accuracy of a commercial AI based CAD system for the detection of artificial lung nodules on chest radiograph phantoms and to compare its performance to that of radiologists in training. Methods Sixty-one anthropomorphic lung phantoms were equipped with 140 randomly deployed artificial lung nodules (5, 8, 10, 12 mm). A random generator chose nodule size and distribution before a two-plane chest X-ray (CXR) of each phantom was performed. Seven blinded radiologists in training (2 fellows, 5 residents) with 2 to 5 years of experience in chest imaging read the CXRs independently on a PACS workstation. Results of the software were recorded separately. The McNemar test was used to compare each radiologist's results to the AI computer-aided-diagnosis (CAD) software in per-nodule and per-phantom approaches, and Fleiss' kappa was applied for inter-rater and intra-observer agreement. Results Five out of seven readers showed a significantly higher accuracy than the AI algorithm. The pooled accuracies of the radiologists in the nodule-based and phantom-based approaches were 0.59 and 0.82, respectively, whereas the AI-CAD showed accuracies of 0.47 and 0.67, respectively. The radiologists' average sensitivity for 10 and 12 mm nodules was 0.80, dropping to 0.66 for 8 mm (P=0.04) and 0.14 for 5 mm nodules (P<0.001). The radiologists and the algorithm both demonstrated significantly higher sensitivity for peripheral than for central nodules (0.66 vs. 0.48; P=0.004 and 0.64 vs. 0.094; P=0.025, respectively). Inter-rater agreement was moderate among the radiologists and between radiologists and the AI-CAD software (K’=0.58±0.13 and 0.51±0.1). Intra-observer agreement was calculated for two readers and was almost perfect for the phantom-based approach (K’=0.85±0.05; K’=0.80±0.02) and substantial to almost perfect for the nodule-based approach (K’=0.83±0.02; K’=0.78±0.02). Conclusions The AI based CAD system as a primary reader performs worse than radiologists in lung nodule detection on chest phantoms. Chest radiography has reasonable accuracy in lung nodule detection when read by a radiologist alone and may be further improved by an AI based CAD system as a second reader.
Affiliation(s)
- Alan A Peters
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Amanda Decasper
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Jaro Munz
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Jeremias Klaus
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Laura I Loebelenz
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Maximilian Korbinian Michael Hoffner
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Cynthia Hourscht
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Johannes T Heverhagen
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department of BioMedical Research, Experimental Radiology, University of Bern, Bern, Switzerland; Department of Radiology, The Ohio State University, Columbus, OH, USA
- Andreas Christe
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Lukas Ebner
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
58
Gaur L, Bhatia U, Jhanjhi NZ, Muhammad G, Masud M. Medical image-based detection of COVID-19 using Deep Convolution Neural Networks. Multimedia Systems 2021; 29:1729-1738. [PMID: 33935377] [PMCID: PMC8079233] [DOI: 10.1007/s00530-021-00794-6]
Abstract
The demand for automatic detection of the Novel Coronavirus, or COVID-19, is increasing across the globe. The exponential rise in cases burdens healthcare facilities, and a vast amount of multimedia healthcare data is being explored to find a solution. This study presents a practical solution for detecting COVID-19 from chest X-rays while distinguishing it from normal cases and those affected by viral pneumonia, via Deep Convolution Neural Networks (CNN). In this study, three pre-trained CNN models (EfficientNetB0, VGG16, and InceptionV3) are evaluated through transfer learning. The rationale for selecting these specific models is their balance of accuracy and efficiency, with fewer parameters suitable for mobile applications. The dataset used for the study is publicly available and compiled from different sources. This study uses deep learning techniques and performance metrics (accuracy, recall, specificity, precision, and F1 scores). The results show that the proposed approach produced a high-quality model with an overall accuracy of 92.93% and a COVID-19 sensitivity of 94.79%. The work indicates a definite possibility of implementing computer vision design to enable effective detection and screening measures.
Affiliation(s)
- Loveleen Gaur
- Amity International Business School, Amity University, Noida, India
- Ujwal Bhatia
- Amity International Business School, Amity University, Noida, India
- N. Z. Jhanjhi
- School of Computer Science and Engineering SCE, Taylor’s University, Subang Jaya, Malaysia
- Ghulam Muhammad
- Research Chair of Pervasive and Mobile Computing, King Saud University, Riyadh 11543, Saudi Arabia
- Computer Engineering Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Mehedi Masud
- Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
59
Kim YJ, Yoo EY, Kim KG. Deep learning based pectoral muscle segmentation on Mammographic Image Analysis Society (MIAS) mammograms. Precision and Future Medicine 2021. [DOI: 10.23838/pfm.2020.00170]
60
Radiologic Assessment of Osteosarcoma Lung Metastases: State of the Art and Recent Advances. Cells 2021; 10:553. [PMID: 33806513] [PMCID: PMC7999261] [DOI: 10.3390/cells10030553]
Abstract
The lung is the most frequent site of osteosarcoma (OS) metastases, which are a critical point in defining a patient’s prognosis. Chest computed tomography (CT) represents the gold standard for the detection of lung metastases, even though its reported sensitivity varies widely in the literature since lung localizations are often atypical. ESMO guidelines represent one of the major references for the follow-up program of OS patients. The development of new reconstruction techniques, such as the iterative method and deep learning-based image reconstruction (DLIR), has led to a significant reduction of the radiation dose with low-dose CT. The improvement of these techniques is of great importance considering the young age of onset of the disease and the strict chest surveillance during follow-up programs. The use of 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT is still controversial, while volume doubling time (VDT) and computer-aided diagnosis (CAD) systems are recent diagnostic tools that could support radiologists in lung nodule evaluation. Their use, well established for other malignancies, needs to be further evaluated with a focus on OS patients.
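Volume doubling time, mentioned above as a nodule-evaluation tool, is conventionally computed from two volume measurements separated by a time interval t, assuming exponential growth: VDT = t · ln 2 / ln(V2/V1). A small sketch with illustrative numbers (not values from the review):

```python
import math

def doubling_time(v1, v2, days):
    """Volume doubling time under exponential growth: t * ln(2) / ln(V2/V1)."""
    return days * math.log(2) / math.log(v2 / v1)

# A nodule growing from 100 to 200 mm^3 in 90 days doubles exactly once
vdt = doubling_time(100.0, 200.0, 90.0)
```

Shorter doubling times generally raise suspicion of malignancy; a nodule that quadruples over the same interval has half the VDT.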
61
Chan HP, Hadjiiski LM, Samala RK. Computer-aided diagnosis in the era of deep learning. Med Phys 2021; 47:e218-e227. [PMID: 32418340] [DOI: 10.1002/mp.13764]
Abstract
Computer-aided diagnosis (CAD) has been a major field of research for the past few decades. CAD uses machine learning methods to analyze imaging and/or nonimaging patient data and makes an assessment of the patient's condition, which can then be used to assist clinicians in their decision-making process. The recent success of deep learning technology in machine learning spurs new research and development efforts to improve CAD performance and to develop CAD for many other complex clinical tasks. In this paper, we discuss the potential and challenges in developing CAD tools using deep learning technology or artificial intelligence (AI) in general, the pitfalls and lessons learned from CAD in screening mammography, and the considerations needed for future implementation of CAD or AI in clinical use. It is hoped that past experience and the deep learning technology will lead to successful advancement and lasting growth in this new era of CAD, thereby enabling CAD to deliver intelligent aids to improve health care.
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
- Lubomir M Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
- Ravi K Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
62
Shi G, Xiao L, Chen Y, Zhou SK. Marginal loss and exclusion loss for partially supervised multi-organ segmentation. Med Image Anal 2021; 70:101979. [PMID: 33636451] [DOI: 10.1016/j.media.2021.101979]
Abstract
Annotating multiple organs in medical images is both costly and time-consuming; therefore, existing multi-organ datasets with labels are often low in sample size and mostly partially labeled, that is, a dataset has a few organs labeled but not all organs. In this paper, we investigate how to learn a single multi-organ segmentation network from a union of such datasets. To this end, we propose two types of novel loss function, particularly designed for this scenario: (i) marginal loss and (ii) exclusion loss. Because the background label for a partially labeled image is, in fact, a 'merged' label of all unlabelled organs and 'true' background (in the sense of full labels), the probability of this 'merged' background label is a marginal probability, summing the relevant probabilities before merging. This marginal probability can be plugged into any existing loss function (such as cross entropy loss, Dice loss, etc.) to form a marginal loss. Leveraging the fact that the organs are non-overlapping, we propose the exclusion loss to gauge the dissimilarity between labeled organs and the estimated segmentation of unlabelled organs. Experiments on a union of five benchmark datasets in multi-organ segmentation of liver, spleen, left and right kidneys, and pancreas demonstrate that using our newly proposed loss functions brings a conspicuous performance improvement for state-of-the-art methods without introducing any extra computation.
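The marginal-loss idea can be illustrated per pixel: because the "background" of a partially labeled image is really background plus all unlabeled organs, its probability is the sum of those class probabilities, and that merged probability feeds the negative log-likelihood. A NumPy sketch of the described principle (not the authors' implementation; class indices are illustrative):

```python
import numpy as np

def marginal_cross_entropy(probs, label, labeled_classes, eps=1e-9):
    """
    probs: (C,) softmax output over all C classes (index 0 = true background).
    label: ground-truth class under the *partial* labeling, where 0 means the
           'merged' background (true background + every unlabeled organ).
    labeled_classes: set of organ indices annotated in this dataset.
    """
    num_classes = probs.shape[0]
    unlabeled = [c for c in range(1, num_classes) if c not in labeled_classes]
    if label == 0:
        # Marginal probability: sum background and all unlabeled-organ probs
        p = probs[0] + probs[unlabeled].sum()
    else:
        p = probs[label]
    return -np.log(p + eps)

# 4 classes: background, liver, spleen, pancreas; only liver labeled here
probs = np.array([0.5, 0.1, 0.3, 0.1])
loss_bg = marginal_cross_entropy(probs, 0, {1})     # merged bg: 0.5 + 0.3 + 0.1
loss_liver = marginal_cross_entropy(probs, 1, {1})  # ordinary term: p = 0.1
```

The same merging can be plugged into Dice loss, as the abstract notes; the exclusion loss would additionally penalize overlap between the labeled organ and estimated unlabeled organs.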
Affiliation(s)
- Gonglei Shi
- Medical Imaging, Robotics, Analytic Computing Laboratory & Engineering (MIRACLE), Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; School of Computer Science and Engineering, Southeast University, Nanjing, 210000, China
- Li Xiao
- Medical Imaging, Robotics, Analytic Computing Laboratory & Engineering (MIRACLE), Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China
- Yang Chen
- School of Computer Science and Engineering, Southeast University, Nanjing, 210000, China
- S Kevin Zhou
- Medical Imaging, Robotics, Analytic Computing Laboratory & Engineering (MIRACLE), Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; School of Biomedical Engineering & Suzhou Institute For Advanced Research, University of Science and Technology, Suzhou, 215123, China
63
Liver segmentation in abdominal CT images via auto-context neural network and self-supervised contour attention. Artif Intell Med 2021; 113:102023. [PMID: 33685586] [DOI: 10.1016/j.artmed.2021.102023]
Abstract
OBJECTIVE Accurate image segmentation of the liver is a challenging problem owing to its large shape variability and unclear boundaries. Although applications of fully convolutional neural networks (CNNs) have shown groundbreaking results, limited studies have focused on generalization performance. In this study, we introduce a CNN for liver segmentation on abdominal computed tomography (CT) images that focuses on both generalization performance and accuracy. METHODS To improve generalization performance, we initially propose an auto-context algorithm in a single CNN. The proposed auto-context neural network exploits an effective high-level residual estimation to obtain the shape prior. Identical dual paths are effectively trained to represent mutually complementary features for an accurate posterior analysis of the liver. Further, we extend our network by employing a self-supervised contour scheme. We trained sparse contour features by penalizing the ground-truth contour to focus more contour attention on the failures. RESULTS We used 180 abdominal CT images for training and validation. Two-fold cross-validation is presented for comparison with state-of-the-art neural networks. The experimental results show that the proposed network achieves better accuracy than the state-of-the-art networks, reducing the Hausdorff distance by 10.31%. Novel multiple N-fold cross-validations are conducted to show the generalization performance of the proposed network. CONCLUSION AND SIGNIFICANCE The proposed method minimized the error between training and test images more than any other modern neural network. Moreover, the contour scheme was successfully employed in the network by introducing a self-supervising metric.
Collapse
|
64
|
Barua I, Mori Y, Bretthauer M. Colorectal polyp characterization with endocytoscopy: Ready for widespread implementation with artificial intelligence? Best Pract Res Clin Gastroenterol 2020; 52-53:101721. [PMID: 34172248 DOI: 10.1016/j.bpg.2020.101721] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Revised: 12/07/2020] [Accepted: 12/15/2020] [Indexed: 01/31/2023]
Abstract
Endocytoscopy provides in-vivo visualization of nuclei and micro-vessels at the cellular level in real time, facilitating so-called "optical biopsy" or "virtual histology" of colorectal polyps/neoplasms. This functionality is enabled by the 520-fold magnification power of endocytoscopy, and recent breakthroughs in artificial intelligence (AI) have greatly advanced endocytoscopic imaging; interpretation of images is now fully supported by an AI tool that outputs predictions of polyp histopathology during colonoscopy. The advantage of using AI during optical biopsy may be appreciated especially by non-expert endoscopists who wish to increase their performance. This paper provides an overview of the latest evidence on colorectal polyp characterization with endocytoscopy combined with AI and identifies the barriers to its widespread implementation.
Collapse
Affiliation(s)
- Ishita Barua
- Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, and Department of Transplantation Medicine Oslo University Hospital, Oslo, Norway
| | - Yuichi Mori
- Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, and Department of Transplantation Medicine Oslo University Hospital, Oslo, Norway; Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan.
| | - Michael Bretthauer
- Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, and Department of Transplantation Medicine Oslo University Hospital, Oslo, Norway
| |
Collapse
|
65
|
Disease Classification of Macular Optical Coherence Tomography Scans Using Deep Learning Software. Retina 2020; 40:1549-1557. [DOI: 10.1097/iae.0000000000002640] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
|
66
|
Kimura Y, Watanabe A, Yamada T, Watanabe S, Nagaoka T, Nemoto M, Miyazaki K, Hanaoka K, Kaida H, Ishii K. AI approach of cycle-consistent generative adversarial networks to synthesize PET images to train computer-aided diagnosis algorithm for dementia. Ann Nucl Med 2020; 34:512-515. [PMID: 32314148 DOI: 10.1007/s12149-020-01468-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Accepted: 04/08/2020] [Indexed: 11/29/2022]
Abstract
OBJECTIVE An artificial intelligence (AI)-based algorithm typically requires a considerable amount of training data; however, few training images are available for dementia with Lewy bodies and frontotemporal lobar degeneration. Therefore, this study presents the potential of cycle-consistent generative adversarial networks (CycleGAN) to obtain a sufficient number of training images for AI-based computer-aided diagnosis (CAD) algorithms for diagnosing dementia. METHODS We trained CycleGAN using 43 amyloid-negative and 45 amyloid-positive images in a slice-by-slice manner. RESULTS CycleGAN synthesized reasonable amyloid-positive images, and the continuity of slices was preserved. DISCUSSION Our results show that CycleGAN has the potential to generate a sufficient number of training images for CAD of dementia.
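CycleGAN's training objective couples two generators with a cycle-consistency term: an amyloid-negative slice translated to positive and back should reconstruct the original. A toy numpy sketch of that L1 cycle loss, with hypothetical stand-in functions in place of the trained generator networks:

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 loss between x and its round-trip reconstruction g_ba(g_ab(x))."""
    return float(np.mean(np.abs(g_ba(g_ab(x)) - x)))

# Toy "generators": placeholders for the trained CycleGAN networks, which
# in practice are convolutional image-to-image models.
g_ab = lambda x: x + 0.5   # negative -> synthetic positive
g_ba = lambda x: x - 0.5   # positive -> reconstructed negative

x = np.zeros((4, 4))       # toy PET image slice
print(cycle_consistency_loss(x, g_ab, g_ba))  # 0.0 — a perfect round trip
```

During training this loss is minimized alongside the adversarial losses, which is what keeps the synthesized amyloid-positive images anatomically faithful to their negative sources.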
Collapse
Affiliation(s)
- Yuichi Kimura
- Graduate School of Biology-Oriented Science and Technology, Kindai University, Wakayama, Japan.
- Faculty of Biology-Oriented Science and Technology, Kindai University, Wakayama, Japan.
| | - Aya Watanabe
- Graduate School of Biology-Oriented Science and Technology, Kindai University, Wakayama, Japan
| | - Takahiro Yamada
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, Osaka, Japan
| | - Shogo Watanabe
- Department of Human Health Sciences, Graduate School of Medicine, Kyoto University, Kyoto, Japan
| | - Takashi Nagaoka
- Graduate School of Biology-Oriented Science and Technology, Kindai University, Wakayama, Japan
- Faculty of Biology-Oriented Science and Technology, Kindai University, Wakayama, Japan
| | - Mitsutaka Nemoto
- Graduate School of Biology-Oriented Science and Technology, Kindai University, Wakayama, Japan
| | - Koichi Miyazaki
- Department of Radiology, Faculty of Medicine, Kindai University, Osaka, Japan
| | - Kohei Hanaoka
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, Osaka, Japan
| | - Hayato Kaida
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, Osaka, Japan
- Department of Radiology, Faculty of Medicine, Kindai University, Osaka, Japan
| | - Kazunari Ishii
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, Osaka, Japan
- Department of Radiology, Faculty of Medicine, Kindai University, Osaka, Japan
| |
Collapse
|
67
|
Monshi MMA, Poon J, Chung V. Deep learning in generating radiology reports: A survey. Artif Intell Med 2020; 106:101878. [PMID: 32425358 PMCID: PMC7227610 DOI: 10.1016/j.artmed.2020.101878] [Citation(s) in RCA: 62] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2019] [Revised: 04/30/2020] [Accepted: 05/10/2020] [Indexed: 12/27/2022]
Abstract
Substantial progress has been made towards implementing automated radiology reporting models based on deep learning (DL), driven by the introduction of large medical text/image datasets. Generating coherent radiology paragraphs that go beyond traditional medical image annotation, or single-sentence description, has been the subject of recent academic attention. This presents a more practical and challenging application and moves towards bridging visual medical features and radiologist text. So far, the most common approach has been to utilize publicly available datasets and develop DL models that integrate convolutional neural networks (CNNs) for image analysis alongside recurrent neural networks (RNNs) for natural language processing (NLP) and natural language generation (NLG). This is an area of research that we anticipate will grow in the near future. We focus our investigation on the following critical challenges: understanding radiology text/image structures and datasets, applying DL algorithms (mainly CNNs and RNNs), generating radiology text, and improving existing DL-based models and evaluation metrics. Lastly, we include a critical discussion and future research recommendations. This survey will be useful for researchers interested in DL, particularly those interested in applying DL to radiology reporting.
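The CNN-plus-RNN pipeline surveyed above can be reduced to a toy sketch: an image feature vector (standing in for the CNN encoder's output) initializes a recurrent state that greedily emits report tokens. All weights, the vocabulary, and the dimensions below are random placeholders, not a trained model; the point is only the encoder-to-decoder data flow:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["<s>", "opacity", "in", "left", "lung", "</s>"]  # toy report vocabulary
H, E = 8, 8                                               # hidden / embedding sizes

W_img = rng.normal(size=(H, 16))          # projects the CNN feature to the RNN state
W_emb = rng.normal(size=(len(VOCAB), E))  # token embeddings
W_h = rng.normal(size=(H, H + E))         # recurrent weights
W_out = rng.normal(size=(len(VOCAB), H))  # hidden state -> vocabulary logits

def generate(img_feat, max_len=10):
    """Greedily decode a toy 'report' conditioned on an image feature vector."""
    h = np.tanh(W_img @ img_feat)         # "CNN" feature initializes the RNN
    tok, out = 0, []                      # start from the <s> token
    for _ in range(max_len):
        h = np.tanh(W_h @ np.concatenate([h, W_emb[tok]]))
        tok = int(np.argmax(W_out @ h))   # greedy decoding step
        if VOCAB[tok] == "</s>":
            break
        out.append(VOCAB[tok])
    return out

print(generate(rng.normal(size=16)))
```

Real systems replace the random weights with a trained CNN encoder and an LSTM/GRU (or, more recently, transformer) decoder, and replace greedy decoding with beam search.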
Collapse
Affiliation(s)
- Maram Mahmoud A Monshi
- School of Computer Science, University of Sydney, Sydney, Australia; Department of Information Technology, Taif University, Taif, Saudi Arabia.
| | - Josiah Poon
- School of Computer Science, University of Sydney, Sydney, Australia
| | - Vera Chung
- School of Computer Science, University of Sydney, Sydney, Australia
| |
Collapse
|
68
|
Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. Eur Radiol 2020; 30:5525-5532. [PMID: 32458173 PMCID: PMC7476917 DOI: 10.1007/s00330-020-06946-y] [Citation(s) in RCA: 124] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2019] [Revised: 04/03/2020] [Accepted: 05/08/2020] [Indexed: 12/22/2022]
Abstract
Objective The objective was to identify barriers and facilitators to the implementation of artificial intelligence (AI) applications in clinical radiology in The Netherlands. Materials and methods Using an embedded multiple case study, an exploratory, qualitative research design was followed. Data collection consisted of 24 semi-structured interviews from seven Dutch hospitals. The analysis of barriers and facilitators was guided by the recently published Non-adoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework for new medical technologies in healthcare organizations. Results Among the most important facilitating factors for implementation were the following: (i) pressure for cost containment in the Dutch healthcare system, (ii) high expectations of AI’s potential added value, (iii) presence of hospital-wide innovation strategies, and (iv) presence of a “local champion.” Among the most prominent hindering factors were the following: (i) inconsistent technical performance of AI applications, (ii) unstructured implementation processes, (iii) uncertain added value for clinical practice of AI applications, and (iv) large variance in acceptance and trust of direct (the radiologists) and indirect (the referring clinicians) adopters. Conclusion In order for AI applications to contribute to the improvement of the quality and efficiency of clinical radiology, implementation processes need to be carried out in a structured manner, thereby providing evidence on the clinical added value of AI applications. Key Points • Successful implementation of AI in radiology requires collaboration between radiologists and referring clinicians. • Implementation of AI in radiology is facilitated by the presence of a local champion. • Evidence on the clinical added value of AI in radiology is needed for successful implementation. 
Electronic supplementary material The online version of this article (10.1007/s00330-020-06946-y) contains supplementary material, which is available to authorized users.
Collapse
|
69
|
Vadmal V, Junno G, Badve C, Huang W, Waite KA, Barnholtz-Sloan JS. MRI image analysis methods and applications: an algorithmic perspective using brain tumors as an exemplar. Neurooncol Adv 2020; 2:vdaa049. [PMID: 32642702 PMCID: PMC7236385 DOI: 10.1093/noajnl/vdaa049] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022] Open
Abstract
The use of magnetic resonance imaging (MRI) in healthcare and the emergence of radiology as a practice are both relatively new compared with the classical specialties in medicine. Having its naissance in the 1970s and later adoption in the 1980s, the use of MRI has grown exponentially, consequently engendering exciting new areas of research. One such development is the use of computational techniques to analyze MRI images much like the way a radiologist would. With the advent of affordable, powerful computing hardware and parallel developments in computer vision, MRI image analysis has also witnessed unprecedented growth. Due to the interdisciplinary and complex nature of this subfield, it is important to survey the current landscape, examine the current approaches for analysis, and identify the trends moving forward.
Collapse
Affiliation(s)
- Vachan Vadmal
- Department of Population Health and Quantitative Sciences, Case Western Reserve University School of Medicine, Cleveland, Ohio
| | - Grant Junno
- Department of Population Health and Quantitative Sciences, Case Western Reserve University School of Medicine, Cleveland, Ohio
| | - Chaitra Badve
- Department of Radiology, University Hospitals Health System (UHHS), Cleveland, Ohio
| | - William Huang
- Department of Population Health and Quantitative Sciences, Case Western Reserve University School of Medicine, Cleveland, Ohio
| | - Kristin A Waite
- Department of Population Health and Quantitative Sciences, Case Western Reserve University School of Medicine, Cleveland, Ohio.,Cleveland Center for Health Outcomes Research (CCHOR), Cleveland, Ohio.,Cleveland Institute for Computational Biology, Cleveland, Ohio
| | - Jill S Barnholtz-Sloan
- Department of Population Health and Quantitative Sciences, Case Western Reserve University School of Medicine, Cleveland, Ohio.,Cleveland Center for Health Outcomes Research (CCHOR), Cleveland, Ohio.,Research Health Analytics and Informatics, UHHS, Cleveland, Ohio.,Case Comprehensive Cancer Center, Cleveland, Ohio.,Cleveland Institute for Computational Biology, Cleveland, Ohio
| |
Collapse
|
70
|
Chae KJ, Jin GY, Ko SB, Wang Y, Zhang H, Choi EJ, Choi H. Deep Learning for the Classification of Small (≤2 cm) Pulmonary Nodules on CT Imaging: A Preliminary Study. Acad Radiol 2020; 27:e55-e63. [PMID: 31780395 DOI: 10.1016/j.acra.2019.05.018] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2019] [Revised: 05/23/2019] [Accepted: 05/25/2019] [Indexed: 12/31/2022]
Abstract
RATIONALE AND OBJECTIVES We aimed to present a deep learning-based malignancy prediction model (CT-lungNET) that is simpler and faster to use in the diagnosis of small (≤2 cm) pulmonary nodules on nonenhanced chest CT, and to preliminarily evaluate its performance and usefulness for human reviewers. MATERIALS AND METHODS A total of 173 whole nonenhanced chest CT images containing 208 pulmonary nodules (94 malignant and 11 benign nodules) ranging in size from 5 mm to 20 mm were collected. Pathologically confirmed nodules, or nodules that remained unchanged for more than 1 year, were included, and 30 benign and 30 malignant nodules were randomly assigned to the test set. We designed CT-lungNET with three convolutional layers followed by two fully connected layers and compared its diagnostic performance and processing time with those of AlexNet using the area under the receiver operating characteristic curve (AUROC). An observer performance test involving eight human reviewers from four groups (medical students, physicians, radiology residents, and thoracic radiologists) was conducted in two sessions (test 1 and test 2, the latter with reference to CT-lungNET's malignancy prediction rate), with pairwise comparison receiver operating characteristic analysis. RESULTS CT-lungNET showed an improved AUROC (0.85; 95% confidence interval: 0.74-0.93) compared to that of AlexNet (0.82; 95% confidence interval: 0.71-0.91). The processing speed per image slice for CT-lungNET was about 10 times faster than that for AlexNet (0.90 vs. 8.79 seconds). During the observer performance test, the classification performance of nonradiologists improved with the aid of CT-lungNET (mean AUC improvement: 0.13; range: 0.03-0.19), but not significantly so in the radiologists group (mean AUC improvement: 0.02; range: -0.02 to 0.07).
CONCLUSION CT-lungNET provided better classification results with a significantly shorter processing time than AlexNet in the diagnosis of small pulmonary nodules on nonenhanced chest CT. In this preliminary observer performance test, CT-lungNET may have a role as a second reader for less experienced reviewers, enhancing performance in the diagnosis of early lung cancer.
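The AUROC values compared above have a direct probabilistic reading: the chance that a randomly chosen malignant nodule receives a higher malignancy score than a randomly chosen benign one (the normalized Mann-Whitney U statistic). A small sketch on toy scores:

```python
import numpy as np

def auroc(scores_pos, scores_neg):
    """AUROC as the probability that a positive case outscores a negative one,
    computed over all positive/negative pairs; ties count as 0.5."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return float(wins / (pos.size * neg.size))

# Toy malignancy scores for 4 malignant and 4 benign test nodules.
print(auroc([0.9, 0.8, 0.7, 0.4], [0.5, 0.3, 0.2, 0.1]))  # 0.9375
```

On this reading, the reported 0.85 means CT-lungNET ranks a malignant nodule above a benign one about 85% of the time on the test set.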
Collapse
Affiliation(s)
- Kum J Chae
- Department of Radiology, Research Institute of Clinical Medicine of Chonbuk National University, Biomedical Research Institute of Chonbuk National University Hospital, 634-18 Keumam-Dong, Jeonju, Jeonbuk 561-712, South Korea
| | - Gong Y Jin
- Department of Radiology, Research Institute of Clinical Medicine of Chonbuk National University, Biomedical Research Institute of Chonbuk National University Hospital, 634-18 Keumam-Dong, Jeonju, Jeonbuk 561-712, South Korea.
| | - Seok B Ko
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada
| | - Yi Wang
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada
| | - Hao Zhang
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada
| | - Eun J Choi
- Department of Radiology, Research Institute of Clinical Medicine of Chonbuk National University, Biomedical Research Institute of Chonbuk National University Hospital, 634-18 Keumam-Dong, Jeonju, Jeonbuk 561-712, South Korea
| | - Hyemi Choi
- Department of Statistics and Institute of Applied Statistics, Chonbuk National University, Jeonju, South Korea
| |
Collapse
|
71
|
Nomura Y, Miki S, Hayashi N, Hanaoka S, Sato I, Yoshikawa T, Masutani Y, Abe O. Novel platform for development, training, and validation of computer-assisted detection/diagnosis software. Int J Comput Assist Radiol Surg 2020; 15:661-672. [PMID: 32157503 PMCID: PMC7142060 DOI: 10.1007/s11548-020-02132-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Accepted: 02/27/2020] [Indexed: 11/28/2022]
Abstract
PURPOSE To build a novel, open-source, purely web-based platform system to address problems in the development and clinical use of computer-assisted detection/diagnosis (CAD) software. The new platform system will replace the existing system for the development and validation of CAD software, Clinical Infrastructure for Radiologic Computation of United Solutions (CIRCUS). METHODS In our new system, the two top-level applications visible to users are the web-based image database (CIRCUS DB; database) and the Docker plug-in-based CAD execution platform (CIRCUS CS; clinical server). These applications are built on top of a shared application programming interface server, a three-dimensional image viewer component, and an image repository. RESULTS We successfully installed our new system into a Linux server at two clinical sites. A total of 1954 cases were registered in CIRCUS DB. We have been utilizing CIRCUS CS with four Docker-based CAD plug-ins. CONCLUSIONS We have successfully built a new version of the CIRCUS system. Our platform was successfully implemented at two clinical sites, and we plan to publish it as an open-source software project.
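CIRCUS CS dispatches CAD jobs to Docker plug-ins. The sketch below shows how such a plug-in registry might assemble a docker-run command line for one job; the class names, image tag, and mount path are hypothetical illustrations, not the CIRCUS API:

```python
from dataclasses import dataclass

@dataclass
class CadPlugin:
    """Hypothetical descriptor for a Docker-packaged CAD plug-in."""
    name: str
    docker_image: str
    modality: str

class PluginRegistry:
    def __init__(self):
        self._plugins = {}

    def register(self, plugin: CadPlugin):
        self._plugins[plugin.name] = plugin

    def command_for(self, name: str, series_dir: str) -> list:
        """Build the docker-run command line for one CAD job: mount the image
        series into the container and run the plug-in image."""
        p = self._plugins[name]
        return ["docker", "run", "--rm", "-v", f"{series_dir}:/data", p.docker_image]

reg = PluginRegistry()
reg.register(CadPlugin("lung-nodule", "circus/lung-nodule-cad:1.0", "CT"))
print(reg.command_for("lung-nodule", "/cases/001"))
```

Packaging each CAD algorithm as a container in this way is what lets the platform add or update algorithms without touching the server itself.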
Collapse
Affiliation(s)
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
| | - Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
| | - Issei Sato
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Department of Complexity Science and Engineering, Graduate School of Frontier Sciences, The University of Tokyo, Tokyo, Japan
- Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan
| | - Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Yoshitaka Masutani
- Graduate School of Information Sciences, Hiroshima City University, Hiroshima, Japan
| | - Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
| |
Collapse
|
72
|
Capobianco E, Dominietto M. From Medical Imaging to Radiomics: Role of Data Science for Advancing Precision Health. J Pers Med 2020; 10:jpm10010015. [PMID: 32121633 PMCID: PMC7151556 DOI: 10.3390/jpm10010015] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Accepted: 02/17/2020] [Indexed: 12/17/2022] Open
Abstract
Treating disease according to precision health requires the individualization of therapeutic solutions as a cardinal step that is part of a process that typically depends on multiple factors. The starting point is the collection and assembly of data over time to assess the patient’s health status and monitor response to therapy. Radiomics is a very important component of this process. Its main goal is implementing a protocol to quantify the image informative contents by first mining and then extracting the most representative features. Further analysis aims to detect potential disease phenotypes through signs and marks of heterogeneity. As multimodal images hinge on various data sources, and these can be integrated with treatment plans and follow-up information, radiomics is naturally centered on dynamically monitoring disease progression and/or the health trajectory of patients. However, radiomics creates critical needs too. A concise list includes: (a) successful harmonization of intra/inter-modality radiomic measurements to facilitate the association with other data domains (genetic, clinical, lifestyle aspects, etc.); (b) ability of data science to revise model strategies and analytics tools to tackle multiple data types and structures (electronic medical records, personal histories, hospitalization data, genomics from various specimens, imaging, etc.) and to offer data-agnostic solutions for patient outcomes prediction; and (c) model validation with independent datasets to ensure generalization of results, clinical value of new risk stratifications, and support to clinical decisions for highly individualized patient management.
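The feature-extraction step at the core of radiomics, quantifying an image's informative content, can be illustrated with a few first-order features computed over an intensity region of interest. The feature subset and binning below are illustrative choices, not a standard:

```python
import numpy as np

def first_order_features(roi: np.ndarray, bins: int = 8) -> dict:
    """A small, illustrative subset of first-order radiomic features
    computed from the intensities of a region of interest."""
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                    # drop empty bins for the log
    return {
        "mean": float(roi.mean()),                  # average intensity
        "std": float(roi.std()),                    # intensity spread
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram heterogeneity
    }

roi = np.array([[0, 0, 1, 1], [2, 2, 3, 3]], dtype=float)
feats = first_order_features(roi, bins=4)
print(feats)
```

Full radiomics pipelines add shape and texture (e.g., co-occurrence matrix) features on top of first-order statistics; the harmonization need listed in (a) arises because all of these are sensitive to scanner and acquisition differences.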
Collapse
Affiliation(s)
- Enrico Capobianco
- Center for Computational Science, University of Miami, FL 33146, USA
- Correspondence:
| | | |
Collapse
|
73
|
Coccia M. Deep learning technology for improving cancer care in society: New directions in cancer imaging driven by artificial intelligence. TECHNOLOGY IN SOCIETY 2020; 60:101198. [DOI: 10.1016/j.techsoc.2019.101198] [Citation(s) in RCA: 64] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/30/2023]
|
74
|
Zhu L, Gao G, Liu Y, Han C, Liu J, Zhang X, Wang X. Feasibility of integrating computer-aided diagnosis with structured reports of prostate multiparametric MRI. Clin Imaging 2019; 60:123-130. [PMID: 31874336 DOI: 10.1016/j.clinimag.2019.12.010] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2019] [Revised: 12/02/2019] [Accepted: 12/11/2019] [Indexed: 01/05/2023]
Abstract
OBJECTIVES To evaluate the feasibility of integrating computer-aided diagnosis (CAD) with structured reports of prostate multiparametric MRI (mpMRI). METHODS This retrospective study enrolled 153 patients who underwent prostate mpMRI for the purpose of targeted biopsy; patients were divided into a group with clinically significant prostate cancer (csPCa, Gleason score ≥ 3 + 4, n = 89) and a group with non-csPCa (n = 64). Ten inexperienced radiologists retrospectively evaluated these cases (single reader per case) twice using structured reports, and they were blinded to the pathologic results. Initially, the readers interpreted mpMRI without CAD. Six weeks later, they evaluated the same cases again with CAD assistance. At each time of image interpretation, lesions detected by the readers were marked on the prostate vector map in structured reports, and a PI-RADS score was given to each lesion. Diagnostic efficacy and reading time were evaluated for the two reading sessions. RESULTS With the assistance of CAD, the overall diagnostic efficacy was improved, i.e., the AUC increased from 0.83 to 0.89 (p = 0.018). Specifically, per-patient sensitivity (84.3% vs. 93.3%) and per-lesion sensitivity (76.7% vs. 88.8%) were significantly improved (all p < 0.05). Per-patient specificity with CAD (65.6%) was higher than that without CAD (56.3%), but statistical significance was not reached (p = 0.238). The reading time for each case decreased from 10.9 min to 7.8 min (p < 0.001). CONCLUSIONS It is feasible to integrate CAD with structured reports of prostate mpMRI. This reading paradigm can improve the diagnostic sensitivity of csPCa detection and reduce reading time among inexperienced radiologists.
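The reported per-patient figures follow directly from confusion-matrix counts implied by the cohort sizes (89 csPCa, 64 non-csPCa): a sensitivity of 93.3% corresponds to 83 of 89 patients detected, and a specificity of 65.6% to 42 of 64 correctly ruled out. A two-line sketch (counts back-calculated from the reported rates, not taken from the paper):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Per-patient sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts implied by the CAD-assisted reading: 83/89 csPCa detected,
# 42/64 non-csPCa correctly ruled out.
sens, spec = sensitivity_specificity(tp=83, fn=6, tn=42, fp=22)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")  # 93.3%, 65.6%
```

Working back to the counts like this also makes the non-significant specificity comparison plausible: with only 64 non-csPCa patients, a 56.3% vs. 65.6% difference rests on just six patients.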
Collapse
Affiliation(s)
- Lina Zhu
- Department of Radiology, Peking University First Hospital, No. 8 Xishiku Street, Xicheng District, Beijing 100034, China
| | - Ge Gao
- Department of Radiology, Peking University First Hospital, No. 8 Xishiku Street, Xicheng District, Beijing 100034, China
| | - Yi Liu
- Department of Radiology, Peking University First Hospital, No. 8 Xishiku Street, Xicheng District, Beijing 100034, China
| | - Chao Han
- Department of Radiology, Peking University First Hospital, No. 8 Xishiku Street, Xicheng District, Beijing 100034, China
| | - Jing Liu
- Department of Radiology, Peking University First Hospital, No. 8 Xishiku Street, Xicheng District, Beijing 100034, China
| | - Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, No. 8 Xishiku Street, Xicheng District, Beijing 100034, China
| | - Xiaoying Wang
- Department of Radiology, Peking University First Hospital, No. 8 Xishiku Street, Xicheng District, Beijing 100034, China.
| |
Collapse
|
75
|
Gutiérrez-Martínez J, Pineda C, Sandoval H, Bernal-González A. Computer-aided diagnosis in rheumatic diseases using ultrasound: an overview. Clin Rheumatol 2019; 39:993-1005. [DOI: 10.1007/s10067-019-04791-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Revised: 08/07/2019] [Accepted: 09/21/2019] [Indexed: 12/12/2022]
|
76
|
Dalal V, Carmicheal J, Dhaliwal A, Jain M, Kaur S, Batra SK. Radiomics in stratification of pancreatic cystic lesions: Machine learning in action. Cancer Lett 2019; 469:228-237. [PMID: 31629933 DOI: 10.1016/j.canlet.2019.10.023] [Citation(s) in RCA: 65] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2019] [Revised: 10/03/2019] [Accepted: 10/15/2019] [Indexed: 12/15/2022]
Abstract
Pancreatic cystic lesions (PCLs) are well-known precursors of pancreatic cancer. Their diagnosis can be challenging as their behavior varies from benign to malignant disease. Precise and timely management of malignant pancreatic cysts might prevent transformation to pancreatic cancer. However, the current consensus guidelines, which rely on standard imaging features to predict cyst malignancy potential, are conflicting and unclear. This has led to an increased interest in radiomics, a high-throughput extraction of comprehensible data from standard of care images. Radiomics can be used as a diagnostic and prognostic tool in personalized medicine. It utilizes quantitative image analysis to extract features in conjunction with machine learning and artificial intelligence (AI) methods like support vector machines, random forest, and convolutional neural network for feature selection and classification. Selected features can then serve as imaging biomarkers to predict high-risk PCLs. Radiomics studies conducted heretofore on PCLs have shown promising results. This cost-effective approach would help us to differentiate benign PCLs from malignant ones and potentially guide clinical decision-making leading to better utilization of healthcare resources. In this review, we discuss the process of radiomics, its myriad applications such as diagnosis, prognosis, and prediction of therapy response. We also discuss the outcomes of studies involving radiomic analysis of PCLs and pancreatic cancer, and challenges associated with this novel field along with possible solutions. Although these studies highlight the potential benefit of radiomics in the prevention and optimal treatment of pancreatic cancer, further studies are warranted before incorporating radiomics into the clinical decision support system.
Collapse
Affiliation(s)
- Vipin Dalal
- Department of Biochemistry and Molecular Biology, University of Nebraska Medical Center, Omaha, NE, USA
| | - Joseph Carmicheal
- Department of Biochemistry and Molecular Biology, University of Nebraska Medical Center, Omaha, NE, USA
| | - Amaninder Dhaliwal
- Department of Gastroenterology and Hepatology, University of Nebraska Medical Center, Omaha, NE, USA
| | - Maneesh Jain
- Department of Biochemistry and Molecular Biology, University of Nebraska Medical Center, Omaha, NE, USA; Eppley Institute for Research in Cancer and Allied Diseases, University of Nebraska Medical Center, Omaha, NE, USA; The Fred and Pamela Buffet Cancer Center, University of Nebraska Medical Center, Omaha, NE, USA
| | - Sukhwinder Kaur
- Department of Biochemistry and Molecular Biology, University of Nebraska Medical Center, Omaha, NE, USA
| | - Surinder K Batra
- Department of Biochemistry and Molecular Biology, University of Nebraska Medical Center, Omaha, NE, USA; Eppley Institute for Research in Cancer and Allied Diseases, University of Nebraska Medical Center, Omaha, NE, USA; The Fred and Pamela Buffet Cancer Center, University of Nebraska Medical Center, Omaha, NE, USA.
| |
Collapse
|
77
|
Meakin JR, Ames RM, Jeynes JCG, Welsman J, Gundry M, Knapp K, Everson R. The feasibility of using citizens to segment anatomy from medical images: Accuracy and motivation. PLoS One 2019; 14:e0222523. [PMID: 31600225 PMCID: PMC6786545 DOI: 10.1371/journal.pone.0222523] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Accepted: 09/02/2019] [Indexed: 11/18/2022] Open
Abstract
The development of automatic methods for segmenting anatomy from medical images is an important goal for many medical and healthcare research areas. Datasets that can be used to train and test computer algorithms, however, are often small due to the difficulties in obtaining experts to segment enough examples. Citizen science provides a potential solution to this problem but the feasibility of using the public to identify and segment anatomy in a medical image has not been investigated. Our study therefore aimed to explore the feasibility, in terms of performance and motivation, of using citizens for such purposes. Public involvement was woven into the study design and evaluation. Twenty-nine citizens were recruited and, after brief training, asked to segment the spine from a dataset of 150 magnetic resonance images. Participants segmented as many images as they could within three one-hour sessions. Their accuracy was evaluated by comparing them, as individuals and as a combined consensus, to the segmentations of three experts. Questionnaires and a focus group were used to determine the citizens' motivation for taking part and their experience of the study. Citizen segmentation accuracy, in terms of agreement with the expert consensus segmentation, varied considerably between individual citizens. The citizen consensus, however, was close to the expert consensus, indicating that when pooled, citizens may be able to replace or supplement experts for generating large image datasets. Personal interest and a desire to help were the two most common reasons for taking part in the study.
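Pooling citizens into a "combined consensus", as described above, is typically a per-pixel majority vote over binary masks, scored against experts with an overlap metric such as the Dice coefficient. A toy sketch (the study's actual pooling and scoring details may differ):

```python
import numpy as np

def consensus(masks: np.ndarray) -> np.ndarray:
    """Per-pixel majority-vote consensus of binary segmentations
    (readers x H x W)."""
    return (masks.mean(axis=0) >= 0.5).astype(np.uint8)

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Three toy citizen masks; two agree on the full square, one misses a corner.
m = np.array([
    [[1, 1], [1, 1]],
    [[1, 1], [1, 1]],
    [[1, 1], [1, 0]],
])
expert = np.array([[1, 1], [1, 1]])
print(dice(consensus(m), expert))  # 1.0 — the vote corrects the outlier
```

This is the mechanism behind the study's key finding: individual errors that are uncorrelated across citizens tend to be voted away, pulling the pooled mask toward the expert consensus.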
Affiliation(s)
- Judith R. Meakin: Biomedical Physics Group, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, United Kingdom
- Ryan M. Ames: Biosciences, College of Life and Environmental Sciences, University of Exeter, Exeter, United Kingdom
- J. Charles G. Jeynes: Centre for Biomedical Modelling and Analysis, University of Exeter, Exeter, United Kingdom
- Jo Welsman: Centre for Biomedical Modelling and Analysis, University of Exeter, Exeter, United Kingdom
- Michael Gundry: Medical Imaging, University of Exeter Medical School, University of Exeter, Exeter, United Kingdom
- Karen Knapp: Medical Imaging, University of Exeter Medical School, University of Exeter, Exeter, United Kingdom
- Richard Everson: Computer Science, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, United Kingdom
|
78
|
Yanase J, Triantaphyllou E. The seven key challenges for the future of computer-aided diagnosis in medicine. Int J Med Inform 2019; 129:413-422. [DOI: 10.1016/j.ijmedinf.2019.06.017]
|
79
|
Akselrod-Ballin A, Chorev M, Shoshan Y, Spiro A, Hazan A, Melamed R, Barkan E, Herzel E, Naor S, Karavani E, Koren G, Goldschmidt Y, Shalev V, Rosen-Zvi M, Guindy M. Predicting Breast Cancer by Applying Deep Learning to Linked Health Records and Mammograms. Radiology 2019; 292:331-342. [DOI: 10.1148/radiol.2019182622]
Affiliation(s)
- From the Department of Healthcare Informatics, IBM Research, IBM R&D Labs, University of Haifa Campus, Mount Carmel, Haifa 3498825, Israel (A.A.B., M.C., Y.S., A.S., A.H., R.M., E.B., S.N., E.K., Y.G., M.R.Z.); MaccabiTech, MKM, Maccabi Healthcare Services, Tel Aviv, Israel (E.H., G.K., V.S.); and Department of Imaging, Assuta Medical Centers, Tel Aviv, Israel (M.G.)
|
80
|
Zhao Y, Li H, Wan S, Sekuboyina A, Hu X, Tetteh G, Piraud M, Menze B. Knowledge-Aided Convolutional Neural Network for Small Organ Segmentation. IEEE J Biomed Health Inform 2019; 23:1363-1373. [DOI: 10.1109/jbhi.2019.2891526]
|
81
|
Optimizing Neuro-Oncology Imaging: A Review of Deep Learning Approaches for Glioma Imaging. Cancers (Basel) 2019; 11:829. [PMID: 31207930] [PMCID: PMC6627902] [DOI: 10.3390/cancers11060829]
Abstract
Radiographic assessment with magnetic resonance imaging (MRI) is widely used to characterize gliomas, which represent 80% of all primary malignant brain tumors. Unfortunately, glioma biology is marked by heterogeneous angiogenesis, cellular proliferation, cellular invasion, and apoptosis. This translates into varying degrees of enhancement, edema, and necrosis, making reliable imaging assessment challenging. Deep learning, a subset of machine learning within artificial intelligence, has gained traction as an effective method for solving image-based problems, including those in medical imaging. This review summarizes current deep learning applications in glioma detection and outcome prediction, focusing on (1) pre- and post-operative tumor segmentation, (2) genetic characterization of tissue, and (3) prognostication. We demonstrate that deep learning methods for segmenting, characterizing, grading, and predicting survival in gliomas are promising opportunities that may enhance both research and clinical activities.
|
82
|
Ather S, Kadir T, Gleeson F. Artificial intelligence and radiomics in pulmonary nodule management: current status and future applications. Clin Radiol 2019; 75:13-19. [PMID: 31202567] [DOI: 10.1016/j.crad.2019.04.017]
Abstract
Artificial intelligence (AI) has been present in some guise within the field of radiology for over 50 years. The first studies investigating computer-aided diagnosis in thoracic radiology date back to the 1960s, and in the subsequent years the main application of these techniques has been the detection and classification of pulmonary nodules. There have also been other, less intensely researched applications, such as the diagnosis of interstitial lung disease and chronic obstructive pulmonary disease, and the detection of pulmonary emboli. Despite extensive literature on the use of convolutional neural networks in thoracic imaging over the last few decades, we have yet to see these systems in use in clinical practice. This article reviews current state-of-the-art applications of AI in the detection, classification, and follow-up of pulmonary nodules and how deep-learning techniques might influence these going forward. Finally, we postulate the impact of these advancements on the role of radiologists and the importance of radiologists in the development and evaluation of these techniques.
Affiliation(s)
- S Ather: Department of Radiology, Churchill Hospital, Oxford, UK
- T Kadir: Optellum Ltd, Oxford Centre of Innovation, Oxford, UK
- F Gleeson: National Consortium of Intelligent Medical Imaging, UK; Department of Oncology, University of Oxford, UK
|
83
|
Freiman M, Manjeshwar R, Goshen L. Unsupervised abnormality detection through mixed structure regularization (MSR) in deep sparse autoencoders. Med Phys 2019; 46:2223-2231. [PMID: 30821364] [DOI: 10.1002/mp.13464]
Abstract
PURPOSE The purpose of this study is to introduce and evaluate the mixed structure regularization (MSR) approach for a deep sparse autoencoder aimed at unsupervised abnormality detection in medical images. Unsupervised abnormality detection based on identifying outliers using deep sparse autoencoders is a very appealing approach for computer-aided detection systems as it requires only healthy data for training rather than expert annotated abnormality. However, regularization is required to avoid overfitting of the network to the training data. METHODS We used coronary computed tomography angiography (CCTA) datasets of 90 subjects with expert annotated centerlines. We segmented coronary lumen and wall using an automatic algorithm with manual corrections where required. We defined normal coronary cross section as cross sections with a ratio between lumen and wall areas larger than 0.8. We divided the datasets into training, validation, and testing groups in a tenfold cross-validation scheme. We trained a deep sparse overcomplete autoencoder model for normality modeling with random structure and noise augmentation. We assessed the performance of our deep sparse autoencoder with MSR without denoising (SAE-MSR) and with denoising (SDAE-MSR) in comparison to deep sparse autoencoder (SAE), and deep sparse denoising autoencoder (SDAE) models in the task of detecting coronary artery disease from CCTA data on the test group. RESULTS The SDAE-MSR achieved the best aggregated area under the curve (AUC) with a 20% improvement and the best aggregated Average Precision (AP) with a 30% improvement upon the SAE and SDAE (AUC: 0.78 to 0.94, AP: 0.66 to 0.86) in distinguishing between coronary cross sections with mild stenosis (stenosis grade < 0.3) and coronary cross sections with severe stenosis (stenosis grade > 0.7). The improvements were statistically significant (Mann-Whitney U-test, P < 0.001). 
Similarly, the SDAE-MSR achieved the best aggregated AUC (AP) with an 18% (18%) improvement upon the SAE and SDAE (AUC: 0.71 to 0.84; AP: 0.68 to 0.80). The improvements were statistically significant (Mann-Whitney U-test, P < 0.05). CONCLUSION Deep sparse autoencoders with MSR, in addition to an explicit sparsity regularization term and stochastic corruption of the input data with Gaussian noise, have the potential to improve unsupervised abnormality detection with deep learning compared to common deep autoencoders.
Affiliation(s)
- Moti Freiman: CT BU, Global Advanced Technology, Philips Healthcare, Advanced Technologies Center, Building No. 34, P.O. Box 325, Haifa, 3100202, Israel
- Ravindra Manjeshwar: CT BU, Global Advanced Technology, Philips Healthcare, 100 Park Ave, Highland Hills, OH, 44122, USA
- Liran Goshen: CT BU, Global Advanced Technology, Philips Healthcare, Advanced Technologies Center, Building No. 34, P.O. Box 325, Haifa, 3100202, Israel
|
84
|
Abbasian Ardakani A, Bitarafan-Rajabi A, Mohammadzadeh A, Mohammadi A, Riazi R, Abolghasemi J, Homayoun Jafari A, Bagher Shiran M. A Hybrid Multilayer Filtering Approach for Thyroid Nodule Segmentation on Ultrasound Images. J Ultrasound Med 2019; 38:629-640. [PMID: 30171626] [DOI: 10.1002/jum.14731]
Abstract
OBJECTIVES Speckle noise is the main factor that degrades ultrasound image contrast and causes segmentation failure. Determining an effective filter can reduce speckle noise and improve segmentation performance. The aim of this study was to identify a useful filter to improve segmentation outcomes. METHODS Twelve filters, including median, hybrid median (Hmed), Fourier Butterworth, Fourier ideal, wavelet (Wlet), homomorphic Fourier Butterworth, homomorphic Fourier ideal, homomorphic wavelet (Hmp_Wlet), Frost, anisotropic diffusion, probabilistic patch-based (PPB), and homogeneous area filters, were used to find the best filter(s) for thyroid nodule segmentation. A receiver operating characteristic (ROC) analysis was used to evaluate each filter in the nodule segmentation process. Accordingly, 10 morphologic parameters were measured from segmented regions to find the parameters that best predict segmentation performance. RESULTS The best segmentation performance was reached by using 4 hybrid filters that mainly contain contrast-limited adaptive histogram equalization, Wlet, Hmed, Hmp_Wlet, and PPB filters. The area under the ROC curve for these filters ranged from 0.900 to 0.943, compared with 0.685 for the original image. Of the 10 morphologic parameters, the area, convex area, equivalent diameter, solidity, and extent can evaluate segmentation performance. CONCLUSIONS Hybrid filters that contain contrast-limited adaptive histogram equalization, Wlet, Hmed, Hmp_Wlet, and PPB filters have a high potential to provide good conditions for thyroid nodule segmentation in ultrasound images. In addition to an ROC analysis, morphometry of a segmented region can be used to evaluate segmentation performance.
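The area under the ROC curve used here to rank filters is, for a two-class problem, equivalent to the normalized Mann-Whitney U statistic: the probability that a randomly chosen positive sample scores higher than a randomly chosen negative one. A minimal pure-Python sketch of that computation (illustrative only; the study used standard ROC analysis software):

```python
def roc_auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs in which the positive scores higher
    than the negative (ties count as half a win)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Perfectly separated scores yield an AUC of 1.0, and indistinguishable scores yield 0.5, matching the interpretation of the 0.685 (unfiltered) versus 0.900-0.943 (hybrid-filtered) values reported above.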
Affiliation(s)
- Ali Abbasian Ardakani: Department of Medical Physics, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Ahmad Bitarafan-Rajabi: Department of Medical Physics, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran; Department of Nuclear Medicine, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Ali Mohammadzadeh: Department of Radiology, Tehran University of Medical Sciences, Tehran, Iran
- Afshin Mohammadi: Department of Radiology, Faculty of Medicine, Imam Khomeini Hospital, Urmia University of Medical Sciences, Urmia, Iran
- Reza Riazi: Department of Medical Physics, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Jamileh Abolghasemi: Rajaei Cardiovascular, Medical, and Research Center, Tehran University of Medical Sciences, Tehran, Iran; Department of Biostatistics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
- Amir Homayoun Jafari: Department of Medical Physics & Biomedical Engineering, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Mohammad Bagher Shiran: Department of Medical Physics, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
|
85
|
Bi WL, Hosny A, Schabath MB, Giger ML, Birkbak NJ, Mehrtash A, Allison T, Arnaout O, Abbosh C, Dunn IF, Mak RH, Tamimi RM, Tempany CM, Swanton C, Hoffmann U, Schwartz LH, Gillies RJ, Huang RY, Aerts HJWL. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA Cancer J Clin 2019; 69:127-157. [PMID: 30720861] [PMCID: PMC6403009] [DOI: 10.3322/caac.21552]
Abstract
Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been vigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.
Affiliation(s)
- Wenya Linda Bi: Assistant Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Ahmed Hosny: Research Scientist, Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Matthew B. Schabath: Associate Member, Department of Cancer Epidemiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
- Maryellen L. Giger: Professor of Radiology, Department of Radiology, University of Chicago, Chicago, IL
- Nicolai J. Birkbak: Research Associate, The Francis Crick Institute, London, United Kingdom; University College London Cancer Institute, London, United Kingdom
- Alireza Mehrtash: Research Assistant, Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA; Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Tavis Allison: Research Assistant, Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY; Department of Radiology, New York Presbyterian Hospital, New York, NY
- Omar Arnaout: Assistant Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Christopher Abbosh: Research Fellow, The Francis Crick Institute, London, United Kingdom; University College London Cancer Institute, London, United Kingdom
- Ian F. Dunn: Associate Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Raymond H. Mak: Associate Professor, Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Rulla M. Tamimi: Associate Professor, Department of Medicine, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Clare M. Tempany: Professor of Radiology, Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Charles Swanton: Professor, The Francis Crick Institute, London, United Kingdom; University College London Cancer Institute, London, United Kingdom
- Udo Hoffmann: Professor of Radiology, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Lawrence H. Schwartz: Professor of Radiology, Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY; Chair, Department of Radiology, New York Presbyterian Hospital, New York, NY
- Robert J. Gillies: Professor of Radiology, Department of Cancer Physiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
- Raymond Y. Huang: Assistant Professor, Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Hugo J. W. L. Aerts: Associate Professor, Departments of Radiation Oncology and Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA; Professor in AI in Medicine, Radiology and Nuclear Medicine, GROW, Maastricht University Medical Centre (MUMC+), Maastricht, The Netherlands
|
86
|
Abbasian Ardakani A, Bitarafan-Rajabi A, Mohammadi A, Hekmat S, Tahmasebi A, Shiran MB, Mohammadzadeh A. CAD system based on B-mode and color Doppler sonographic features may predict if a thyroid nodule is hot or cold. Eur Radiol 2019; 29:4258-4265. [PMID: 30627819] [DOI: 10.1007/s00330-018-5908-y]
Abstract
OBJECTIVES The aim of this study was to evaluate if the analysis of sonographic parameters could predict if a thyroid nodule was hot or cold. METHODS Overall, 102 thyroid nodules, including 51 hyperfunctioning (hot) and 51 hypofunctioning (cold) nodules, were evaluated in this study. Twelve sonographic features (i.e., seven B-mode and five Doppler features) were extracted for each nodule type. The isthmus thickness, nodule volume, echogenicity, margin, internal component, microcalcification, and halo sign features were obtained in the B-mode, while the vascularity pattern, resistive index (RI), peak systolic velocity, end diastolic velocity, and peak systolic/end diastolic velocity ratio (SDR) were determined, based on Doppler ultrasounds. All significant features were incorporated in the computer-aided diagnosis (CAD) system to classify hot and cold nodules. RESULTS Among all sonographic features, only isthmus thickness, nodule volume, echogenicity, RI, and SDR were significantly different between hot and cold nodules. Based on these features in the training dataset, the CAD system could classify hot and cold nodules with an area under the curve (AUC) of 0.898. Also, in the test dataset, hot and cold nodules were classified with an AUC of 0.833. CONCLUSIONS 2D sonographic features could differentiate hot and cold thyroid nodules. The CAD system showed a great potential to achieve it automatically. KEY POINTS • Cold nodules represent higher volume (p = 0.005), isthmus thickness (p = 0.035), RI (p = 0.020), and SDR (p = 0.044) and appear hypoechogenic (p = 0.010) in US. • Nodule volume with an AUC of 0.685 and resistive index with an AUC of 0.628 showed the highest classification potential among all B-mode and Doppler features respectively. • The proposed CAD system could distinguish hot nodules from cold ones with an AUC of 0.833 (sensitivity 90.00%, specificity 70.00%, accuracy 80.00%, PPV 87.50%, and NPV 75.00%).
Affiliation(s)
- Ali Abbasian Ardakani: ENT and Head & Neck Research Center and Department, Hazrat Rasoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran; Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Ahmad Bitarafan-Rajabi: Cardiovascular Intervention Research Center, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran; Echocardiography Research Center, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Afshin Mohammadi: Department of Radiology, Faculty of Medicine, Urmia University of Medical Science, Urmia, Iran
- Sepideh Hekmat: Department of Nuclear Medicine, School of Medicine, Hasheminejad Hospital, Iran University of Medical Sciences, Tehran, Iran
- Aylin Tahmasebi: Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Mohammad Bagher Shiran: Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Ali Mohammadzadeh: Department of Radiology, Rajaie Cardiovascular, Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
|
87
|
Shariaty F, Mousavi M. Application of CAD systems for the automatic detection of lung nodules. Inform Med Unlocked 2019. [DOI: 10.1016/j.imu.2019.100173]
|
88
|
Automated Segmentation Methods of Drusen to Diagnose Age-Related Macular Degeneration Screening in Retinal Images. Comput Math Methods Med 2018; 2018:6084798. [PMID: 29721037] [PMCID: PMC5867666] [DOI: 10.1155/2018/6084798]
Abstract
Existing drusen measurement is difficult to use in the clinic because it requires considerable time and effort for visual inspection. To resolve this problem, we propose an automatic drusen detection method to aid the clinical diagnosis of age-related macular degeneration. First, we converted the fundus image to the green channel and extracted the ROI of the macular area based on the optic disk. Next, we detected the candidate group using the difference image of the median filter within the ROI. We also segmented vessels and removed them from the image. Finally, we detected the drusen through Renyi's entropy threshold algorithm. We performed comparisons and statistical analysis between the manual and automatic detection results for 30 cases in order to verify validity. As a result, the average sensitivity was 93.37% (80.95%-100%) and the average DSC was 0.73 (0.3-0.98). In addition, the value of the ICC was 0.984 (CI: 0.967-0.993, p < 0.01), showing the high reliability of the proposed automatic method. We expect that automatic drusen detection will help clinicians improve diagnostic performance in the detection of drusen on fundus images.
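The final step above, Renyi's entropy thresholding, picks the gray level that maximizes the summed order-α Renyi entropies of the background and foreground histogram distributions. A minimal sketch under illustrative assumptions (α = 2, an 8-bit histogram, and hypothetical function names; not the paper's implementation):

```python
import numpy as np

def renyi_threshold(image, alpha=2.0, bins=256):
    """Renyi entropy thresholding (alpha != 1): return the gray level t
    that maximizes the sum of the Renyi entropies of the normalized
    background (<= t) and foreground (> t) histogram distributions.
    Assumes integer intensities in [0, bins)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    cdf = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = cdf[t], 1.0 - cdf[t]
        if w0 <= 0 or w1 <= 0:       # one class empty: skip
            continue
        p0 = p[: t + 1] / w0          # background distribution
        p1 = p[t + 1 :] / w1          # foreground distribution
        h0 = np.log((p0 ** alpha).sum()) / (1.0 - alpha)
        h1 = np.log((p1 ** alpha).sum()) / (1.0 - alpha)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

On a bimodal image the selected threshold falls between the two intensity modes, so `image > t` isolates the bright candidate regions (drusen, in this application).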
|
89
|
Abd. Rahni AA, Fazwan Mohamed Fuzaie M, Al Irr OI. Automated Bed Detection and Removal from Abdominal CT Images for Automatic Segmentation Applications. 2018 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES) 2018. [DOI: 10.1109/iecbes.2018.8626638]
|
90
|
Zhang L, Xiang D, Jin C, Shi F, Yu K, Chen X. OIPAV: an Integrated Software System for Ophthalmic Image Processing, Analysis, and Visualization. J Digit Imaging 2018; 32:183-197. [PMID: 30187316] [DOI: 10.1007/s10278-017-0047-6]
Abstract
Ophthalmic medical images, such as optical coherence tomography (OCT) images and color fundus photographs, provide valuable information for the clinical diagnosis and treatment of ophthalmic diseases. In this paper, we introduce a software system oriented to ophthalmic image processing, analysis, and visualization (OIPAV) to assist users. OIPAV is a cross-platform system built on a set of powerful and widely used toolkit libraries. Based on a plugin mechanism, the system has an extensible framework. It provides rich functionality, including data I/O, image processing, interaction, ophthalmic disease detection, data analysis, and visualization. With OIPAV, users can easily access ophthalmic image data produced by different imaging devices, streamline ophthalmic image processing workflows, and improve quantitative evaluations. With satisfying function scalability and expandability, the software is applicable to both ophthalmic researchers and clinicians.
Collapse
Affiliation(s)
- Lichun Zhang
- School of Electronics and Information Engineering, Soochow University, No.1 Shizi Street, Suzhou, Jiangsu Province, 215006, China
| | - Dehui Xiang
- School of Electronics and Information Engineering, Soochow University, No.1 Shizi Street, Suzhou, Jiangsu Province, 215006, China
| | - Chao Jin
- School of Electronics and Information Engineering, Soochow University, No.1 Shizi Street, Suzhou, Jiangsu Province, 215006, China
| | - Fei Shi
- School of Electronics and Information Engineering, Soochow University, No.1 Shizi Street, Suzhou, Jiangsu Province, 215006, China
| | - Kai Yu
- School of Electronics and Information Engineering, Soochow University, No.1 Shizi Street, Suzhou, Jiangsu Province, 215006, China
| | - Xinjian Chen
- School of Electronics and Information Engineering, Soochow University, No.1 Shizi Street, Suzhou, Jiangsu Province, 215006, China.
| |
Collapse
|
91
|
Capellini K, Vignali E, Costa E, Gasparotti E, Biancolini ME, Landini L, Positano V, Celi S. Computational Fluid Dynamic Study for aTAA Hemodynamics: An Integrated Image-Based and Radial Basis Functions Mesh Morphing Approach. J Biomech Eng 2018; 140:2694848. [DOI: 10.1115/1.4040940] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2018] [Indexed: 12/31/2022]
Abstract
We present a novel framework for the fluid dynamics analysis of healthy subjects and patients affected by ascending thoracic aorta aneurysm (aTAA). Our aim is to characterize the effect of a bulge on the hemodynamic environment at different enlargements. Three-dimensional (3D) surface models were generated from healthy subjects and from patients with aTAA selected for surgical repair. A representative shape model was identified for both the healthy and pathological groups. A morphing technique based on radial basis functions (RBF) was applied to mold the representative healthy shape into the representative shape of the aTAA dataset, enabling parametric simulation of aTAA formation. Computational fluid dynamics (CFD) simulations were performed with a finite volume solver using mean boundary conditions obtained from three-dimensional phase-contrast MRI (PC-MRI) acquisitions. Blood flow helicity and flow descriptors were assessed for all the investigated models. The feasibility of the proposed integrated approach, coupling an RBF morphing technique with CFD simulation for aTAA, was demonstrated. Significant hemodynamic changes appear at 60% of the bulge progression. An impingement of the flow toward the bulge was observed by analyzing the normalized flow eccentricity (NFE) index.
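The core of RBF mesh morphing is interpolating displacements prescribed at a few control points smoothly through space; the interpolant reproduces the prescribed displacements exactly at the control points. A toy Gaussian-kernel version (not the solver used in the paper; all names and the kernel width `eps` are illustrative):

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_morph(ctrl_pts, ctrl_disp, query_pts, eps=1.0):
    """Gaussian-RBF interpolation of the displacements prescribed at the
    control points, evaluated at the query points."""
    n = len(ctrl_pts)
    phi = lambda a, b: math.exp(-eps * sum((u - v) ** 2 for u, v in zip(a, b)))
    # One n x n interpolation system per coordinate: A w_k = d_k.
    A = [[phi(ctrl_pts[i], ctrl_pts[j]) for j in range(n)] for i in range(n)]
    dims = len(ctrl_disp[0])
    weights = [gauss_solve(A, [ctrl_disp[i][k] for i in range(n)])
               for k in range(dims)]
    morphed = []
    for q in query_pts:
        basis = [phi(q, c) for c in ctrl_pts]
        morphed.append(tuple(coord + sum(w[j] * basis[j] for j in range(n))
                             for coord, w in zip(q, weights)))
    return morphed
```

Evaluated at a control point, the morph returns the control point shifted by exactly its prescribed displacement; in between, the kernel blends the displacements smoothly, which is why the technique suits parametric shape interpolation between a healthy and an aneurysmal geometry.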
Collapse
Affiliation(s)
- Katia Capellini
- BioCardioLab, Fondazione CNR-Regione Toscana “G. Monasterio,” Ospedale del Cuore, Via Aurelia Sud, Massa 54100, Italy e-mail:
| | - Emanuele Vignali
- BioCardioLab, Fondazione CNR-Regione Toscana “G. Monasterio,” Ospedale del Cuore, Via Aurelia Sud, Massa 54100, Italy
| | - Emiliano Costa
- RINA Consulting S.p.A., Viale Cesare Pavese, 305, Roma 00144, Italy
| | - Emanuele Gasparotti
- BioCardioLab, Fondazione CNR-Regione Toscana “G. Monasterio,” Ospedale del Cuore, Via Aurelia Sud, Massa 54100, Italy
| | - Marco Evangelos Biancolini
- Department of Enterprise Engineering, University of Rome Tor Vergata, Via del Politecnico 1, Roma 00133, Italy
| | - Luigi Landini
- Department of Information Engineering, University of Pisa, Via Girolamo Caruso, 16, Pisa 56122, Italy
| | - Vincenzo Positano
- BioCardioLab, Fondazione CNR-Regione Toscana “G. Monasterio,” Ospedale del Cuore, Via Aurelia Sud, Massa 54100, Italy
| | - Simona Celi
- BioCardioLab, Fondazione CNR-Regione Toscana “G. Monasterio,” Ospedale del Cuore, Via Aurelia Sud, Massa 54100, Italy
| |
Collapse
|
92
|
Gibson E, Giganti F, Hu Y, Bonmati E, Bandula S, Gurusamy K, Davidson B, Pereira SP, Clarkson MJ, Barratt DC. Automatic Multi-Organ Segmentation on Abdominal CT With Dense V-Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1822-1834. [PMID: 29994628 PMCID: PMC6076994 DOI: 10.1109/tmi.2018.2806309] [Citation(s) in RCA: 315] [Impact Index Per Article: 45.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.
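The Dice scores used for the comparison above are straightforward to compute; a minimal version for flat binary masks:

```python
def dice_score(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as flat
    sequences of 0/1 labels: 2*|A intersect B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0  # two empty masks agree
```

A Dice of 0.78 versus 0.71 for the pancreas, as reported, means the predicted and reference masks overlap in roughly 78% of their combined (averaged) extent.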
Collapse
|
93
|
Kim YJ, Lee SH, Lim KY, Kim KG. Development and Validation of Segmentation Method for Lung Cancer Volumetry on Chest CT. J Digit Imaging 2018; 31:505-512. [PMID: 29380154 PMCID: PMC6113144 DOI: 10.1007/s10278-018-0051-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
Abstract
The Response Evaluation Criteria In Solid Tumors (RECIST) are used to evaluate treatment response in lung cancer, whereby the size of a lesion is measured in one dimension (diameter). Volumetric evaluation is desirable for estimating the size of a lesion accurately, but several constraints and limitations apply to calculating volume in clinical trials. In this study, we developed a method to detect lesions automatically, with minimal intervention by the user, and calculate their volume. Our proposed method, called the spherical region-growing method (SPRG), uses segmentation that starts from a seed point set by the user. SPRG is a modification of an existing region-growing method that is based on a sphere instead of pixels. The SPRG method detects lesions while preventing leakage into neighboring tissues, because the sphere is grown, i.e., neighboring voxels are added, only when all the voxels meet the required conditions. In this study, two radiologists segmented lung tumors using a manual method and the proposed method, and the results of both methods were compared. The proposed method showed a high sensitivity of 81.68-84.81% and a high Dice similarity coefficient (DSC) of 0.86-0.88 compared with the manual method. In addition, the SPRG intraclass correlation coefficient (ICC) was 0.998 (CI 0.997-0.999, p < 0.01), showing that the SPRG method is highly reliable. If our proposed method is used for segmentation and volumetric measurement of lesions, objective and accurate results and shorter data analysis times are possible.
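The sphere-based growth rule — enlarge the radius only if every voxel in the next shell satisfies the intensity condition and lies inside the volume — can be sketched as follows (a simplified toy, not the authors' SPRG implementation):

```python
def sphere_grow(vol, seed, lo, hi):
    """Grow a sphere centred on `seed` one radius step at a time; stop when
    any voxel in the next shell falls outside [lo, hi] or outside the volume.
    `vol` is a nested list vol[z][y][x]. Returns the final radius and the
    set of voxel coordinates inside the grown sphere."""
    z0, y0, x0 = seed
    r = 0
    while True:
        nxt = r + 1
        ok = True
        for z in range(z0 - nxt, z0 + nxt + 1):
            for y in range(y0 - nxt, y0 + nxt + 1):
                for x in range(x0 - nxt, x0 + nxt + 1):
                    d2 = (z - z0) ** 2 + (y - y0) ** 2 + (x - x0) ** 2
                    if r * r < d2 <= nxt * nxt:  # voxels in the new shell only
                        inside = (0 <= z < len(vol) and 0 <= y < len(vol[0])
                                  and 0 <= x < len(vol[0][0]))
                        if not inside or not (lo <= vol[z][y][x] <= hi):
                            ok = False
        if not ok:
            break
        r = nxt
    voxels = {(z, y, x)
              for z in range(len(vol)) for y in range(len(vol[0]))
              for x in range(len(vol[0][0]))
              if (z - z0) ** 2 + (y - y0) ** 2 + (x - x0) ** 2 <= r * r}
    return r, voxels
```

Because the whole shell must pass before the radius grows, a narrow bright bridge into a neighboring structure stops the growth instead of letting the region leak through it, which is the leakage-prevention property the abstract describes.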
Collapse
Affiliation(s)
- Young Jae Kim
- Department of Biomedical Engineering, Gachon University College of Medicine, 21, Namdong-daero 774 beon-gil, Namdong-gu, Incheon, 21565, Republic of Korea
- Department of Plazma Bio Display, Kwangwoon University, 20 Kwangwoon-ro, Nowon-gu, Seoul, 01897, Republic of Korea
| | - Seung Hyun Lee
- Department of Plazma Bio Display, Kwangwoon University, 20 Kwangwoon-ro, Nowon-gu, Seoul, 01897, Republic of Korea
| | - Kun Young Lim
- Department of Diagnostic Radiology, Center for Lung Cancer, National Cancer Center, 323 Ilsan-ro, Ilsandong-gu, Goyang, 10408, Gyeonggi-do, Republic of Korea
| | - Kwang Gi Kim
- Department of Biomedical Engineering, Gachon University College of Medicine, 21, Namdong-daero 774 beon-gil, Namdong-gu, Incheon, 21565, Republic of Korea.
| |
Collapse
|
94
|
Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer 2018; 18:500-510.
Abstract
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.
Collapse
Affiliation(s)
- Ahmed Hosny
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
| | - Chintan Parmar
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
| | - John Quackenbush
- Department of Biostatistics & Computational Biology, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Cancer Biology, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Lawrence H Schwartz
- Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY, USA
- Department of Radiology, New York Presbyterian Hospital, New York, NY, USA
| | - Hugo J W L Aerts
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA.
- Department of Radiology, Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
| |
Collapse
|
95
|
Kim J, Hong J, Park H. Prospects of deep learning for medical imaging. PRECISION AND FUTURE MEDICINE 2018; 2:37-52. [DOI: 10.23838/pfm.2018.00030] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2018] [Accepted: 04/14/2018] [Indexed: 08/29/2023] Open
|
96
|
Rastogi A, Maheshwari S, Shinagare AB, Baheti AD. Computed Tomography Advances in Oncoimaging. Semin Roentgenol 2018; 53:147-156. [PMID: 29861006 DOI: 10.1053/j.ro.2018.02.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Affiliation(s)
- Ashita Rastogi
- Department of Radiodiagnosis, Tata Memorial Centre, Mumbai, India
| | - Sharad Maheshwari
- Department of Radiology, Kokilaben Dhirubhai Ambani Hospital, Mumbai, India
| | - Atul B Shinagare
- Department of Radiology, Harvard Medical School, Dana-Farber Cancer Institute, Boston, MA
| | - Akshay D Baheti
- Department of Radiodiagnosis, Tata Memorial Centre, Mumbai, India.
| |
Collapse
|
97
|
Dimitriadis SI, Liparas D. How random is the random forest? Random forest algorithm on the service of structural imaging biomarkers for Alzheimer's disease: from Alzheimer's disease neuroimaging initiative (ADNI) database. Neural Regen Res 2018; 13:962-970. [PMID: 29926817 PMCID: PMC6022472 DOI: 10.4103/1673-5374.233433] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/10/2018] [Indexed: 11/08/2022] Open
Abstract
Neuroinformatics is a fascinating research field that applies computational models and analytical tools to high-dimensional experimental neuroscience data for a better understanding of how the brain functions or dysfunctions in brain diseases. Neuroinformaticians work at the intersection of neuroscience and informatics, supporting the integration of the various sub-disciplines (behavioural neuroscience, genetics, cognitive psychology, etc.) working on brain research. Neuroinformaticians provide the pathway of information exchange between informaticians and clinicians for a better understanding of the outcome of computational models and the clinical interpretation of the analysis. Machine learning is one of the most significant computational developments of the last decade, giving neuroinformaticians, and ultimately radiologists and clinicians, tools for automatic and early diagnosis and prognosis of brain disease. The random forest (RF) algorithm has been successfully applied to high-dimensional neuroimaging data for feature reduction and for classifying the clinical label of a subject using single or multi-modal neuroimaging datasets. Our aim was to review the studies in which RF was applied to predict Alzheimer's disease (AD) and conversion from mild cognitive impairment (MCI), and to assess its robustness to overfitting and outliers and its handling of non-linear data. Finally, we describe our RF-based model that earned first place in an international challenge for automated prediction of MCI from MRI data.
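The RF recipe the review covers — bootstrap sampling plus random feature selection, combined by majority vote — can be caricatured with one-feature decision stumps. This is a toy sketch to show the two ingredients, not a substitute for a real RF library such as scikit-learn:

```python
import random

def train_forest(X, y, n_trees=25, seed=0):
    """Toy 'random forest' of one-feature stumps: each stump is fitted on a
    bootstrap sample of (X, y) using one randomly chosen feature."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]   # bootstrap sample
        f = rng.randrange(len(X[0]))               # random feature
        v0 = [X[i][f] for i in idx if y[i] == 0]
        v1 = [X[i][f] for i in idx if y[i] == 1]
        if not v0 or not v1:                       # degenerate sample: skip
            continue
        m0, m1 = sum(v0) / len(v0), sum(v1) / len(v1)
        # Stump: threshold midway between class means; `flip` records which
        # side of the threshold belongs to class 1.
        forest.append((f, (m0 + m1) / 2, m1 < m0))
    return forest

def predict(forest, x):
    """Majority vote of the stumps."""
    votes = sum((x[f] <= thr) if flip else (x[f] > thr)
                for f, thr, flip in forest)
    return 1 if 2 * votes > len(forest) else 0
```

Because each tree sees a perturbed sample and a restricted feature view, the individual stumps are weak and diverse; it is the vote across them that yields the robustness to overfitting and outliers discussed in the review.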
Collapse
Affiliation(s)
- Stavros I. Dimitriadis
- Division of Psychological Medicine and Clinical Neurosciences, School of Medicine, Cardiff University, Cardiff, UK
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, UK
- School of Psychology, Cardiff University, Cardiff, UK
- Neuroinformatics Group, Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, UK
- Neuroscience and Mental Health Research Institute, Cardiff University, Cardiff, UK
- MRC Centre for Neuropsychiatric Genetics and Genomics, School of Medicine, Cardiff University, Cardiff, UK
| | - Dimitris Liparas
- High Performance Computing Center Stuttgart (HLRS), University of Stuttgart, Stuttgart, Germany
- Department of Informatics, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | | |
Collapse
|
98
|
Wang C, Elazab A, Jia F, Wu J, Hu Q. Automated chest screening based on a hybrid model of transfer learning and convolutional sparse denoising autoencoder. Biomed Eng Online 2018; 17:63. [PMID: 29792208 PMCID: PMC5966927 DOI: 10.1186/s12938-018-0496-2] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2018] [Accepted: 05/09/2018] [Indexed: 01/23/2023] Open
Abstract
Objective: To investigate the effect of a computer-aided triage system for health-checkup screening of lung lesions, which involves tens of thousands of chest X-rays (CXRs) requiring diagnosis; highly accurate automated diagnosis can reduce the radiologist's workload in scrutinizing these images. Method: We present a deep learning model to efficiently identify abnormal and normal cases during mass chest screening and to obtain a probability confidence for each CXR. In addition, a convolutional sparse denoising autoencoder is designed to compute the reconstruction error. We employ four publicly available radiology datasets of CXRs, analyze their reports, and use their images to mine the correct disease level of the CXRs submitted to the triage system. Our approach votes across multiple classifiers for the final decision on which of three levels (normal, abnormal, or uncertain) each CXR falls into. Results: We address grade diagnosis for physical examination and propose several new metric indices. Combining the classifiers' predictions using the area under a receiver operating characteristic curve, we observe that the final decision depends on both the reconstruction-error threshold and the probability value. Our method achieves promising precision of 98.7% and 94.3% on the normal and abnormal cases, respectively. Conclusion: The proposed framework classifies the disease level with high accuracy, potentially saving radiologists time and effort and allowing them to focus on higher-risk CXRs.
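The three-way decision described — combining a classifier probability with the autoencoder's reconstruction error — might look like the following (all thresholds are invented for illustration, not the paper's operating points):

```python
def triage(prob_abnormal, recon_error, p_lo=0.3, p_hi=0.7, err_thr=0.5):
    """Three-way triage: confident probabilities decide normal/abnormal;
    borderline probabilities, or cases the autoencoder reconstructs poorly
    (likely out-of-distribution), are routed to a radiologist as 'uncertain'."""
    if recon_error > err_thr:     # reconstruction error gates the decision
        return "uncertain"
    if prob_abnormal >= p_hi:
        return "abnormal"
    if prob_abnormal <= p_lo:
        return "normal"
    return "uncertain"
```

Only the two confident bins are handled automatically; everything else stays in the human queue, which is how such a system trades a small "uncertain" fraction for high precision on the automated bins.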
Collapse
Affiliation(s)
- Changmiao Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Boulevard, Shenzhen, 518055, China
- University of Chinese Academy of Sciences, 52 Sanlihe Road, Beijing, 100864, China
| | - Ahmed Elazab
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Department of Computer Science, Misr Higher Institute for Commerce and Computers, Mansoura, 35516, Egypt
| | - Fucang Jia
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Boulevard, Shenzhen, 518055, China
| | - Jianhuang Wu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Boulevard, Shenzhen, 518055, China
| | - Qingmao Hu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Boulevard, Shenzhen, 518055, China
- Key Laboratory of Human-Machine Intelligence Synergy Systems, 1068 Xueyuan Boulevard, Shenzhen, 518055, China
| |
Collapse
|
99
|
Zia ur Rehman M, Javaid M, Shah SIA, Gilani SO, Jamil M, Butt SI. An appraisal of nodules detection techniques for lung cancer in CT images. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2017.11.017] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
100
|
Kiessling F. The changing face of cancer diagnosis: From computational image analysis to systems biology. Eur Radiol 2018; 28:3160-3164. [PMID: 29488085 DOI: 10.1007/s00330-018-5347-9] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2018] [Revised: 01/18/2018] [Accepted: 01/19/2018] [Indexed: 12/19/2022]
Abstract
KEY POINTS: • Radiomics and radiogenomics will merge radiology, nuclear medicine, pathology and laboratory medicine. • Automation of image data analysis will change the daily routine work. • Image-guided therapy and handling complex radiogenomic data will play a major role.
Collapse
Affiliation(s)
- Fabian Kiessling
- Department of Experimental Molecular Imaging, Medical Faculty, Institute for Experimental Molecular Imaging, RWTH Aachen University, Pauwelsstraße 30, 52074, Aachen, Germany.
| |
Collapse
|