151. Siddique S, Chow JC. Artificial intelligence in radiotherapy. Rep Pract Oncol Radiother 2020; 25:656-666. [PMID: 32617080] [PMCID: PMC7321818] [DOI: 10.1016/j.rpor.2020.03.015]
Abstract
Artificial intelligence (AI) has been implemented widely in the medical field in recent years. This paper first reviews the background of AI and radiotherapy. It then explores the basic concepts of the AI algorithms and machine learning methods available today, such as neural networks, and how they are being implemented in radiotherapy and diagnostic processes such as medical imaging, treatment planning, patient simulation, quality assurance, and radiation dose delivery. It also explores ongoing research on AI methods to be implemented in radiotherapy in the future. The review shows very promising progress and a promising future for AI in various areas of radiotherapy. However, owing to concerns such as the availability and security of big data, and the further work needed to polish and test AI algorithms, we may not be ready to rely primarily on AI in radiotherapy at the moment.
Affiliation(s)
- Sarkar Siddique
- Department of Physics, Ryerson University, Toronto, ON M5B 2K3, Canada
- James C.L. Chow
- Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, ON M5G 1X6, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
152. Wataya T, Nakanishi K, Suzuki Y, Kido S, Tomiyama N. Introduction to deep learning: minimum essence required to launch a research. Jpn J Radiol 2020; 38:907-921. [DOI: 10.1007/s11604-020-00998-2]
153. Waheed A, Goyal M, Gupta D, Khanna A, Al-Turjman F, Pinheiro PR. CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved Covid-19 Detection. IEEE Access 2020; 8:91916-91923. [PMID: 34192100] [PMCID: PMC8043420] [DOI: 10.1109/access.2020.2994762]
Abstract
Coronavirus disease (COVID-19) is a viral disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The spread of COVID-19 has had a detrimental effect on the global economy and health. Chest X-ray imaging of infected patients is a crucial step in the battle against COVID-19, and early results suggest that abnormalities exist in the chest X-rays of patients suggestive of COVID-19. This has led to the introduction of a variety of deep learning systems, and studies have shown that the accuracy of COVID-19 patient detection through the use of chest X-rays is strongly promising. Deep learning networks like convolutional neural networks (CNNs) need a substantial amount of training data, and because the outbreak is recent, it is difficult to gather a significant number of radiographic images in such a short time. Therefore, in this research, we present a method to generate synthetic chest X-ray (CXR) images by developing an Auxiliary Classifier Generative Adversarial Network (ACGAN)-based model called CovidGAN. In addition, we demonstrate that the synthetic images produced by CovidGAN can be utilized to enhance the performance of a CNN for COVID-19 detection. Classification using the CNN alone yielded 85% accuracy; adding the synthetic images produced by CovidGAN increased the accuracy to 95%. We hope this method will speed up COVID-19 detection and lead to more robust radiology systems.
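To make the ACGAN idea above concrete, here is a minimal PyTorch sketch of a label-conditioned generator and a discriminator with separate real/fake and class heads. The 64x64 grayscale resolution, two-class setup, and layer sizes are illustrative assumptions, not the CovidGAN authors' exact architecture.

```python
# Minimal ACGAN-style sketch (assumed sizes; not the exact CovidGAN model).
import torch
import torch.nn as nn

NUM_CLASSES, LATENT_DIM = 2, 100  # assumed: normal vs. COVID-19 CXR classes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, LATENT_DIM)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(True),
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),  # 64x64 grayscale output
        )

    def forward(self, z, labels):
        # Condition the latent noise on the class label (auxiliary-classifier GAN).
        x = (z * self.embed(labels)).view(-1, LATENT_DIM, 1, 1)
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Flatten(),
        )
        self.adv = nn.Linear(128 * 8 * 8, 1)            # real vs. synthetic head
        self.aux = nn.Linear(128 * 8 * 8, NUM_CLASSES)  # auxiliary class head

    def forward(self, img):
        h = self.features(img)
        return self.adv(h), self.aux(h)

# Once trained, class-conditional samples are appended to the real training set:
g = Generator()
z = torch.randn(16, LATENT_DIM)
labels = torch.randint(0, NUM_CLASSES, (16,))
synthetic_cxr = g(z, labels)  # (16, 1, 64, 64)
```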
Affiliation(s)
- Abdul Waheed
- Maharaja Agrasen Institute of Technology, New Delhi 110086, India
- Muskan Goyal
- Maharaja Agrasen Institute of Technology, New Delhi 110086, India
- Deepak Gupta
- Maharaja Agrasen Institute of Technology, New Delhi 110086, India
- Ashish Khanna
- Maharaja Agrasen Institute of Technology, New Delhi 110086, India
- Fadi Al-Turjman
- Artificial Intelligence Department, Research Center for AI and IoT, Near East University, 99138 Mersin, Turkey
154. Das N, Hussain E, Mahanta LB. Automated classification of cells into multiple classes in epithelial tissue of oral squamous cell carcinoma using transfer learning and convolutional neural network. Neural Netw 2020; 128:47-60. [PMID: 32416467] [DOI: 10.1016/j.neunet.2020.05.003]
Abstract
The analysis of the tissue of a tumor in the oral cavity is essential for the pathologist to ascertain its grading. Recent studies using biopsy images reveal computer-aided diagnosis for oral sub-mucous fibrosis (OSF) carried out using machine learning algorithms, but no research has yet been outlined for multi-class grading of oral squamous cell carcinoma (OSCC). Pertinently, with the advent of deep learning in digital imaging and computational aid in diagnosis, multi-class classification of OSCC biopsy images can help in timely and effective prognosis and multi-modal treatment protocols for oral cancer patients, reducing the operational workload of pathologists while enhancing management of the disease. With this motivation, this study attempts to classify OSCC into its four classes as per Broder's system of histological grading. The study is conducted on oral biopsy images applying two methods: (i) transfer learning using pre-trained deep convolutional neural networks (CNNs), wherein four candidate pre-trained models, namely AlexNet, VGG-16, VGG-19, and ResNet-50, were evaluated to find the most suitable model for the classification problem; and (ii) a proposed CNN model. Although the highest transfer-learning classification accuracy of 92.15% is achieved by the ResNet-50 model, the experimental findings highlight that the proposed CNN model outperformed the transfer learning approaches, with an accuracy of 97.5%. It can be concluded that the proposed CNN-based multi-class grading method could be used for the diagnosis of patients with OSCC.
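As a hedged sketch of the transfer-learning route described above, the snippet below loads a pre-trained ResNet-50 from torchvision, freezes its feature extractor, and replaces the final layer with a four-way head for the Broder's grades. The input size, frozen-backbone strategy, and optimizer settings are assumptions for illustration, not the paper's exact configuration.

```python
# Transfer-learning sketch (assumed preprocessing and hyperparameters).
import torch
import torch.nn as nn
from torchvision import models

NUM_GRADES = 4  # the four Broder's histological grades

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                 # freeze the pre-trained features
model.fc = nn.Linear(model.fc.in_features, NUM_GRADES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # images: (B, 3, 224, 224) biopsy-image patches; labels: (B,) grade indices
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```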
Affiliation(s)
- Navarun Das
- National Institute of Technology, Silchar, Assam, India
- Elima Hussain
- Central Computational and Numerical Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, Assam, India
- Lipi B Mahanta
- Central Computational and Numerical Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, Assam, India
155. Sun L, Ma W, Ding X, Huang Y, Liang D, Paisley J. A 3D Spatially Weighted Network for Segmentation of Brain Tissue From MRI. IEEE Trans Med Imaging 2020; 39:898-909. [PMID: 31449009] [DOI: 10.1109/tmi.2019.2937271]
Abstract
The segmentation of brain tissue in MRI is valuable for extracting brain structure to aid diagnosis, treatment, and tracking of the progression of different neurologic diseases. Medical image data are volumetric, and some neural network models for medical image segmentation have addressed this using a 3D convolutional architecture. However, this volumetric spatial information has not been fully exploited to enhance the representative ability of deep networks, and these networks have not fully addressed the practical issues facing the analysis of multimodal MRI data. In this paper, we propose a spatially weighted 3D network (SW-3D-UNet) for brain tissue segmentation of single-modality MRI and extend it to multimodality MRI data. We validate our model on the MRBrainS13 and MALC12 datasets. At the time of writing, the model ranked first on the leaderboard of the MRBrainS13 challenge.
156. Tennakoon R, Bortsova G, Orting S, Gostar AK, Wille MMW, Saghir Z, Hoseinnezhad R, de Bruijne M, Bab-Hadiashar A. Classification of Volumetric Images Using Multi-Instance Learning and Extreme Value Theorem. IEEE Trans Med Imaging 2020; 39:854-865. [PMID: 31425069] [DOI: 10.1109/tmi.2019.2936244]
Abstract
Volumetric imaging is an essential diagnostic tool for medical practitioners. The use of popular techniques such as convolutional neural networks (CNNs) for the analysis of volumetric images is constrained by the availability of detailed (locally annotated) training data and GPU memory. In this paper, the volumetric image classification problem is posed as a multi-instance classification problem, and a novel method is proposed to adaptively select positive instances from positive bags during the training phase. The method uses extreme value theory to model the feature distribution of images without a pathology and uses it to identify positive instances of an imaged pathology. Experimental results on three separate image classification tasks (classifying retinal OCT images according to the presence or absence of fluid build-ups, emphysema detection in pulmonary 3D-CT images, and detection of cancerous regions in 2D histopathology images) show that the proposed method produces classifiers with performance similar to fully supervised methods and achieves state-of-the-art performance in all examined test cases.
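The following sketch illustrates, under stated assumptions, the extreme-value idea described above: fit a generalized extreme value distribution to per-image scores from pathology-free bags, then treat instances in positive bags whose scores exceed a high quantile as the adaptively selected positives. The score source and the 0.99 quantile are placeholders, not the paper's exact formulation.

```python
# EVT-based positive-instance selection sketch (assumed scores and cutoff).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
neg_max_scores = rng.gumbel(0.3, 0.05, size=500)   # per-image max scores, negatives
shape, loc, scale = genextreme.fit(neg_max_scores)  # model the "no pathology" extremes

def positive_instances(bag_scores, q=0.99):
    """Select instances whose score is improbably large under the normal model."""
    cutoff = genextreme.ppf(q, shape, loc=loc, scale=scale)
    return np.where(bag_scores > cutoff)[0]

bag = rng.normal(0.35, 0.05, size=200)
bag[[10, 42]] = 0.9                      # two planted pathological instances
print(positive_instances(bag))           # -> indices of adaptively selected positives
```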
157. Shimizu H, Nakayama KI. Artificial intelligence in oncology. Cancer Sci 2020; 111:1452-1460. [DOI: 10.1111/cas.14377]
Abstract
Artificial intelligence (AI) has contributed substantially to the resolution of a variety of biomedical problems, including cancer, over the past decade. Deep learning, a subfield of AI that is highly flexible and supports automatic feature extraction, is increasingly being applied in various areas of both basic and clinical cancer research. In this review, we describe numerous recent examples of the application of AI in oncology, including cases in which deep learning has efficiently solved problems that were previously thought to be unsolvable, and we address obstacles that must be overcome before such application can become more widespread. We also highlight resources and datasets that can help harness the power of AI for cancer research. The development of innovative approaches to and applications of AI will yield important insights in oncology in the coming decade.
Affiliation(s)
- Hideyuki Shimizu
- Department of Molecular and Cellular Biology, Medical Institute of Bioregulation, Kyushu University, Fukuoka, Japan
- Keiichi I. Nakayama
- Department of Molecular and Cellular Biology, Medical Institute of Bioregulation, Kyushu University, Fukuoka, Japan
158. Calculating the target exposure index using a deep convolutional neural network and a rule base. Phys Med 2020; 71:108-114. [PMID: 32114324] [DOI: 10.1016/j.ejmp.2020.02.012]
Abstract
PURPOSE: The objective of this study is to determine the quality of chest X-ray images using a deep convolutional neural network (DCNN) and a rule base, without performing any visual assessment. A method is proposed for determining the minimum diagnosable exposure index (EI) and the target exposure index (EIt).
METHODS: The proposed method involves transfer learning to assess the lung fields, mediastinum, and spine using GoogLeNet, a type of DCNN that has been trained using conventional images. Three detectors were created, and the image quality of local regions was rated. Subsequently, the results were used to determine the overall quality of chest X-ray images using a rule-based technique that was in turn based on expert assessment. The minimum EI required for diagnosis was calculated based on the distribution of the EI values, which were classified as either suitable or non-suitable and then used to ascertain the EIt.
RESULTS: The accuracy rate using the DCNN and the rule base was 81%. The minimum EI required for diagnosis was 230, and the EIt was 288.
CONCLUSION: The results indicated that the proposed method using the DCNN and the rule base could discriminate different image qualities without any visual assessment; moreover, it could determine both the minimum EI required for diagnosis and the EIt.
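A toy sketch of the two-stage decision described above: three region-level quality verdicts (lung fields, mediastinum, spine) are combined by a hand-written rule base, and the minimum diagnosable EI is taken as the smallest EI among images judged suitable. The rules and values below are invented placeholders, not the paper's expert-derived rule base.

```python
# Rule-base sketch (assumed rules; the paper's rule base is expert-derived).
def overall_quality(lung_ok: bool, mediastinum_ok: bool, spine_ok: bool) -> str:
    # Assumed expert-style rules: lung fields dominate the decision;
    # mediastinum and spine can compensate for each other.
    if lung_ok and (mediastinum_ok or spine_ok):
        return "suitable"
    return "non-suitable"

def minimum_diagnosable_ei(records):
    """records: iterable of (exposure_index, verdict) pairs; returns the
    smallest EI among images the rule base judged suitable."""
    suitable = [ei for ei, verdict in records if verdict == "suitable"]
    return min(suitable) if suitable else None

print(overall_quality(True, False, True))                                   # suitable
print(minimum_diagnosable_ei([(230, "suitable"), (180, "non-suitable")]))   # 230
```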
159. Lu L, Wang D, Wang L, E L, Guo P, Li Z, Xiang J, Yang H, Li H, Yin S, Schwartz LH, Xie C, Zhao B. A quantitative imaging biomarker for predicting disease-free-survival-associated histologic subgroups in lung adenocarcinoma. Eur Radiol 2020; 30:3614-3623. [PMID: 32086583] [DOI: 10.1007/s00330-020-06663-6]
Abstract
OBJECTIVES: Classification of histologic subgroups has significant prognostic value for lung adenocarcinoma patients who undergo surgical resection. However, clinical histopathology assessment is generally performed on only a small portion of the overall tumor from biopsy or surgery. Our objective is to identify a noninvasive quantitative imaging biomarker (QIB) for the classification of histologic subgroups in lung adenocarcinoma patients.
METHODS: We retrospectively collected and reviewed 1313 CT scans of patients with resected lung adenocarcinomas from two geographically distant institutions who were seen between January 2014 and October 2017. Three study cohorts, the training, internal validation, and external validation cohorts, were created, within which lung adenocarcinomas were divided into two disease-free-survival (DFS)-associated histologic subgroups, the mid/poor and good DFS groups. A comprehensive machine learning- and deep learning-based analytical system was adopted to identify reproducible QIBs and help to understand their significance.
RESULTS: Intensity-Skewness, a QIB quantifying tumor density distribution, was identified as the optimal biomarker for predicting histologic subgroups. Intensity-Skewness achieved high AUCs (95% CI) of 0.849 (0.813, 0.881), 0.820 (0.781, 0.856), and 0.863 (0.827, 0.895) on the training, internal validation, and external validation cohorts, respectively. A criterion of Intensity-Skewness ≤ 1.5, which indicated high tumor density, showed a high specificity of 96% (sensitivity 46%) and 99% (sensitivity 53%) for predicting the mid/poor DFS group in the training and external validation cohorts, respectively.
CONCLUSIONS: A QIB derived from routinely acquired CT was able to predict lung adenocarcinoma histologic subgroups, providing a noninvasive method that could potentially benefit personalized treatment decision-making for lung cancer patients.
KEY POINTS:
• A noninvasive imaging biomarker, Intensity-Skewness, which describes the distortion of the pixel-intensity distribution within lesions on CT images, was identified as a biomarker to predict disease-free-survival-associated histologic subgroups in lung adenocarcinoma.
• An Intensity-Skewness of ≤ 1.5 has high specificity in predicting the mid/poor disease-free-survival histologic patient group in both the training cohort and the external validation cohort.
• Intensity-Skewness is a feature that can be automatically computed with high reproducibility and robustness.
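As a minimal sketch of the Intensity-Skewness biomarker, assuming a CT volume and a tumor mask are already available, the snippet below computes the skewness of the in-mask intensity distribution with SciPy and applies the decision rule reported in the abstract. The toy volume is a synthetic placeholder.

```python
# Intensity-Skewness sketch (mask/volume loading assumed; toy data below).
import numpy as np
from scipy.stats import skew

def intensity_skewness(ct_volume: np.ndarray, tumor_mask: np.ndarray) -> float:
    """Skewness of voxel intensities (HU) within the segmented tumor."""
    voxels = ct_volume[tumor_mask > 0].astype(float)
    return float(skew(voxels))

def predict_dfs_group(s: float) -> str:
    # Criterion from the abstract: Intensity-Skewness <= 1.5 (denser tumors)
    # flags the mid/poor disease-free-survival group with high specificity.
    return "mid/poor DFS" if s <= 1.5 else "good DFS"

rng = np.random.default_rng(0)
ct = rng.normal(-600, 200, size=(64, 64, 64))   # toy CT volume in HU
mask = np.zeros_like(ct)
mask[24:40, 24:40, 24:40] = 1                   # toy tumor mask
print(predict_dfs_group(intensity_skewness(ct, mask)))
```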
Affiliation(s)
- Lin Lu
- Department of Radiology, Columbia University Medical Center, 710 West 168th Street, B26, New York, NY, 10032, USA
- Deling Wang
- Department of Radiology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
- Lili Wang
- Department of Molecular Pathology, the Affiliated Hospital of Qingdao University, Qingdao University, Wutaishan Road 1677, Qingdao, 266000, Shandong, People's Republic of China
- Linning E
- Department of Radiology, Shanxi BETHUNE Hospital, 99 Longcheng Street, Taiyuan, 030032, Shanxi, People's Republic of China
- Pingzhen Guo
- Department of Radiology, Columbia University Medical Center, 710 West 168th Street, B26, New York, NY, 10032, USA
- Zhiming Li
- Department of Radiology, the Affiliated Hospital of Qingdao University, Qingdao University, Wutaishan Road 1677, Qingdao, 266000, Shandong, People's Republic of China
- Jin Xiang
- Department of Pathology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
- Hao Yang
- Department of Radiology, Columbia University Medical Center, 710 West 168th Street, B26, New York, NY, 10032, USA
- Hui Li
- Department of Radiology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
- Shaohan Yin
- Department of Radiology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
- Lawrence H Schwartz
- Department of Radiology, Columbia University Medical Center, 710 West 168th Street, B26, New York, NY, 10032, USA
- Chuanmiao Xie
- Department of Radiology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, People's Republic of China
- Binsheng Zhao
- Department of Radiology, Columbia University Medical Center, 710 West 168th Street, B26, New York, NY, 10032, USA
160. Montagnon E, Cerny M, Cadrin-Chênevert A, Hamilton V, Derennes T, Ilinca A, Vandenbroucke-Menu F, Turcotte S, Kadoury S, Tang A. Deep learning workflow in radiology: a primer. Insights Imaging 2020; 11:22. [PMID: 32040647] [PMCID: PMC7010882] [DOI: 10.1186/s13244-019-0832-5]
Abstract
Interest in deep learning in radiology has increased tremendously in the past decade due to the high achievable performance for various computer vision tasks such as detection, segmentation, classification, monitoring, and prediction. This article provides step-by-step practical guidance for conducting a project that involves deep learning in radiology, from defining specifications to deployment and scaling. Specifically, the objectives of this article are to provide an overview of clinical use cases of deep learning, describe the composition of a multi-disciplinary team, and summarize current approaches to patient, data, model, and hardware selection. Key ideas are illustrated by examples from a prototypical project on imaging of colorectal liver metastasis, covering the workflow for liver lesion detection, segmentation, classification, monitoring, and prediction of tumor recurrence and patient survival. Challenges are discussed, including ethical considerations, cohorting, data collection, anonymization, and availability of expert annotations. The practical guidance may be adapted to any project that requires automated medical image analysis.
Affiliation(s)
- Emmanuel Montagnon
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- Milena Cerny
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- Vincent Hamilton
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- Thomas Derennes
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- André Ilinca
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- Franck Vandenbroucke-Menu
- Department of Surgery, Hepatopancreatobiliary and Liver Transplantation Service, Centre Hospitalier de l'Université de Montréal (CHUM), Montréal, Quebec, Canada
- Simon Turcotte
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- Department of Surgery, Hepatopancreatobiliary and Liver Transplantation Service, Centre Hospitalier de l'Université de Montréal (CHUM), Montréal, Quebec, Canada
- An Tang
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
- Department of Radiology, Radio-Oncology and Nuclear Medicine, Université de Montréal and CRCHUM, 1058 rue Saint-Denis, Montréal, Québec, H2X 3J4, Canada
161. NF-RCNN: Heart localization and right ventricle wall motion abnormality detection in cardiac MRI. Phys Med 2020; 70:65-74. [DOI: 10.1016/j.ejmp.2020.01.011]
162. Bermejo-Peláez D, Ash SY, Washko GR, San José Estépar R, Ledesma-Carbayo MJ. Classification of Interstitial Lung Abnormality Patterns with an Ensemble of Deep Convolutional Neural Networks. Sci Rep 2020; 10:338. [PMID: 31941918] [PMCID: PMC6962320] [DOI: 10.1038/s41598-019-56989-5]
Abstract
Subtle interstitial changes in the lung parenchyma of smokers, known as interstitial lung abnormalities (ILA), have been associated with clinical outcomes, including mortality, even in the absence of interstitial lung disease (ILD). Although several methods have been proposed for the automatic identification of more advanced ILD patterns, few have tackled ILA, which likely precedes the development of ILD in some cases. In this context, we propose a novel methodology for automated identification and classification of ILA patterns in computed tomography (CT) images. The proposed method is an ensemble of deep convolutional neural networks (CNNs) that detects more discriminative features by incorporating 2D, 2.5D, and 3D architectures, thereby enabling more accurate classification. The technique is implemented by first training each individual CNN and then combining their output responses to form the overall ensemble output. To train and test the system we used 37,424 radiographic tissue samples corresponding to eight different parenchymal feature classes from 208 CT scans. The resulting ensemble performance, including an average sensitivity of 91.41% and an average specificity of 98.18%, suggests it is potentially a viable method for identifying radiographic patterns that precede the development of ILD.
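The combination step described above can be sketched as simple probability averaging across the 2D, 2.5D, and 3D members. The member networks below are stand-ins, and mean-averaging of softmax outputs is an assumption about the combination rule, not necessarily the authors' exact scheme.

```python
# Ensemble-combination sketch (member models and averaging rule assumed).
import torch

def ensemble_predict(models, inputs):
    """models: list of trained nets (2D, 2.5D, 3D); inputs: list of matching
    tensors (a 2D patch, a stack of orthogonal slices, a 3D sub-volume)."""
    with torch.no_grad():
        probs = [torch.softmax(m(x), dim=1) for m, x in zip(models, inputs)]
    avg = torch.stack(probs).mean(dim=0)   # combine member responses
    return avg.argmax(dim=1)               # predicted parenchymal class per sample
```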
Affiliation(s)
- David Bermejo-Peláez
- Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid & CIBER-BBN, Madrid, Spain
- Samuel Y Ash
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Brigham and Women's Hospital, Boston, MA, USA
- George R Washko
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Brigham and Women's Hospital, Boston, MA, USA
- Raúl San José Estépar
- Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA
- María J Ledesma-Carbayo
- Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid & CIBER-BBN, Madrid, Spain
163. Saygili A, Albayrak S. Knee Meniscus Segmentation and Tear Detection from MRI: A Review. Curr Med Imaging 2020; 16:2-15. [DOI: 10.2174/1573405614666181017122109]
Abstract
Background:
Automatic diagnostic systems in medical imaging provide useful information to support radiologists and other relevant experts, and the number of systems that assist radiologists in their analysis and diagnosis appears to be increasing.
Discussion:
Knee joints are intensively studied structures as well. This review investigates studies that automatically segment meniscal structures from knee joint MR images and detect tears. Some studies in the literature merely perform meniscus segmentation, while others include classification procedures that cover both meniscus segmentation and the detection of anomalies on the menisci. The studies performed on the meniscus were categorized according to the methods they used; the methods and results of these studies were analyzed along with their drawbacks, and the aspects to be developed were emphasized.
Conclusion:
The work that has been done in this area can effectively support the decisions to be made by radiology and orthopedics specialists. Furthermore, these operations, previously performed manually on MR images, can be performed in a shorter time with the help of computer-aided systems, which enables early diagnosis and treatment.
Affiliation(s)
- Ahmet Saygili
- Computer Engineering Department, Corlu Faculty of Engineering, Namık Kemal University, Tekirdağ, Turkey
- Songül Albayrak
- Computer Engineering Department, Faculty of Electric and Electronics, Yıldız Technical University, İstanbul, Turkey
164. Shachor Y, Greenspan H, Goldberger J. A mixture of views network with applications to multi-view medical imaging. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.09.027]
165. Zhou LQ, Wu XL, Huang SY, Wu GG, Ye HR, Wei Q, Bao LY, Deng YB, Li XR, Cui XW, Dietrich CF. Lymph Node Metastasis Prediction from Primary Breast Cancer US Images Using Deep Learning. Radiology 2020; 294:19-28. [PMID: 31746687] [DOI: 10.1148/radiol.2019190372]
Abstract
Background: Deep learning (DL) algorithms are gaining extensive attention for their excellent performance in image recognition tasks. DL models can automatically make a quantitative assessment of complex medical image characteristics and achieve increased accuracy in diagnosis with higher efficiency.
Purpose: To determine the feasibility of using a DL approach to predict clinically negative axillary lymph node metastasis from US images in patients with primary breast cancer.
Materials and Methods: A data set of US images in patients with primary breast cancer with clinically negative axillary lymph nodes from Tongji Hospital (974 imaging studies from 2016 to 2018, 756 patients) and an independent test set from Hubei Cancer Hospital (81 imaging studies from 2018 to 2019, 78 patients) were collected. Axillary lymph node status was confirmed with pathologic examination. Three different convolutional neural networks (CNNs) of Inception V3, Inception-ResNet V2, and ResNet-101 architectures were trained on 90% of the Tongji Hospital data set and tested on the remaining 10%, as well as on the independent test set. The performance of the models was compared with that of five radiologists. The models' performance was analyzed in terms of accuracy, sensitivity, specificity, receiver operating characteristic curves, areas under the receiver operating characteristic curve (AUCs), and heat maps.
Results: The best-performing CNN model, Inception V3, achieved an AUC of 0.89 (95% confidence interval [CI]: 0.83, 0.95) in the prediction of the final clinical diagnosis of axillary lymph node metastasis in the independent test set. The model achieved 85% sensitivity (35 of 41 images; 95% CI: 70%, 94%) and 73% specificity (29 of 40 images; 95% CI: 56%, 85%), whereas the radiologists achieved 73% sensitivity (30 of 41 images; 95% CI: 57%, 85%; P = .17) and 63% specificity (25 of 40 images; 95% CI: 46%, 77%; P = .34).
Conclusion: Using US images from patients with primary breast cancer, deep learning models can effectively predict clinically negative axillary lymph node metastasis. Artificial intelligence may provide an early diagnostic strategy for lymph node metastasis in patients with breast cancer with clinically negative lymph nodes.
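A small sketch of the evaluation protocol, assuming per-image prediction scores and pathology-confirmed labels are available: AUC with a percentile-bootstrap 95% CI, as reported in studies like this one. The data below are synthetic placeholders, not the study's data.

```python
# AUC with bootstrap CI sketch (synthetic scores/labels as stand-ins).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=81)                               # node status labels
y_score = np.clip(0.4 * y_true + rng.normal(0.4, 0.2, 81), 0, 1)   # CNN probabilities

auc = roc_auc_score(y_true, y_score)

# Percentile bootstrap for a 95% confidence interval on the AUC.
boots = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) == 2:     # need both classes in a resample
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"AUC {auc:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```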
Affiliation(s)
- Li-Qiang Zhou, Ge-Ge Wu, Qi Wei, You-Bin Deng, Xin-Wu Cui, Christoph F Dietrich
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Xing-Long Wu
- School of Mathematics and Computer Science, Wuhan Textile University, Wuhan, Hubei Province, China
- Shu-Yan Huang
- Department of Ultrasound, The First People's Hospital of Huaihua, University of South China, Huaihua, China
- Hua-Rong Ye
- Department of Ultrasound, China Resources & Wisco General Hospital, Wuhan, Hubei Province, China
- Ling-Yun Bao
- Department of Ultrasound, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Xing-Rui Li
- Department of Thyroid and Breast Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei Province, China
- Christoph F Dietrich
- Medical Clinic 2, Caritas-Krankenhaus Bad Mergentheim, Academic Teaching Hospital of the University of Wuerzburg, Bad Mergentheim, Germany
166. Cao K, Meng G, Wang Z, Liu Y, Liu H, Sun L. An adaptive pulmonary nodule detection algorithm. J Xray Sci Technol 2020; 28:427-447. [PMID: 32333576] [DOI: 10.3233/xst-200656]
Abstract
Lung cancer has received increasing attention in recent years, and there is consensus that early detection and early treatment can improve patient survival. Pulmonary nodules are an important reference for doctors in assessing lung health. With the continuous improvement of CT image resolution, more suspected-nodule information appears in chest CT images, and accurately locating suspected nodules among a large number of CT images has become a challenge in doctors' daily diagnosis. To address the problem that the original DBSCAN clustering algorithm requires manual setting of its threshold, this paper proposes a region-growing algorithm and an adaptive DBSCAN clustering algorithm to improve the accuracy of pulmonary nodule detection. The image is coarsely processed and the regions of interest (ROI) are roughly extracted by a CLAHE transform. The region-growing algorithm is used to coarsely process the expansibility of adjacent regions and the suspected regions in the ROI, marking the center point of each region and the boundary points of its point set. The mean of the region extents is taken as the threshold of the DBSCAN clustering algorithm, the center of each point domain is used as the starting point of clustering, and the rough point set is used as the MinPts threshold. Finally, the clustering results are labeled on the initial CT image. Experiments show that the proposed pulmonary nodule detection method effectively improves the accuracy of the detection results.
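A minimal sketch of the adaptive-DBSCAN step, assuming candidate voxel coordinates and per-region extents from the region-growing pass are available: eps is derived from the mean region extent instead of being set by hand. The exact eps heuristic and min_samples value here are assumptions, not the paper's formulas.

```python
# Adaptive-DBSCAN sketch (eps heuristic and min_samples assumed).
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_candidates(points, region_extents, min_pts=5):
    """points: (N, 3) voxel coordinates of suspected-nodule seeds;
    region_extents: per-region mean radii from the region-growing pass."""
    eps = float(np.mean(region_extents))   # data-driven eps, no manual setting
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points)
    return labels                          # -1 marks noise; other labels are nodule clusters

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(c, 1.5, (30, 3)) for c in ([20, 20, 20], [60, 40, 30])])
print(set(cluster_candidates(pts, region_extents=np.array([3.0, 2.5]))))
```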
Affiliation(s)
- Keyan Cao
- College of Information and Control Engineering, Shenyang Jianzhu University, Shenyang, China
- Liaoning Province Big Data Management and Analysis Laboratory of Urban Construction, Shenyang, China
- Gongjie Meng
- College of Information and Control Engineering, Shenyang Jianzhu University, Shenyang, China
- Zhiqiong Wang
- College of Medicine and Biological Information Engineering, Northeast University, Shenyang, China
- Yefan Liu
- College of Information and Control Engineering, Shenyang Jianzhu University, Shenyang, China
- Haoli Liu
- College of Information and Control Engineering, Shenyang Jianzhu University, Shenyang, China
- Liangliang Sun
- College of Information and Control Engineering, Shenyang Jianzhu University, Shenyang, China
167. Gao F, Wu T, Chu X, Yoon H, Xu Y, Patel B. Deep Residual Inception Encoder–Decoder Network for Medical Imaging Synthesis. IEEE J Biomed Health Inform 2020; 24:39-49. [DOI: 10.1109/jbhi.2019.2912659]
168. Kim M, Bae HJ. Data Augmentation Techniques for Deep Learning-Based Medical Image Analyses. J Korean Soc Radiol 2020; 81:1290-1304. [PMID: 36237718] [PMCID: PMC9431833] [DOI: 10.3348/jksr.2020.0158]
Abstract
Image-processing-based analysis of medical images is used to classify normal versus abnormal patients, to detect lesions, and to segment organs and lesions. With the recent rapid advances in artificial intelligence, deep learning is increasingly being applied to medical image analysis. Because it is difficult to collect enough training data for medical images, and because of the imbalance in the number of samples per class, improving the performance of deep learning models is challenging. Various approaches have been explored to address these problems, one of which is augmenting the training data. This review surveys image-processing-based data augmentation such as rotation, inversion, and brightness change; data augmentation using generative adversarial networks; and recent techniques such as mixing the attributes of existing images. It also examines cases in which these methods have been applied to medical imaging research and their results. Finally, it considers the necessity of data augmentation and discusses future directions.
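A minimal sketch of the image-processing augmentations surveyed above (rotation, flipping, brightness change) using torchvision; the parameter ranges are illustrative choices, not recommendations from the review.

```python
# Basic augmentation pipeline sketch (parameter ranges assumed).
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),      # small rotations
    transforms.RandomHorizontalFlip(p=0.5),     # mirror-image flip
    transforms.ColorJitter(brightness=0.2),     # brightness perturbation
    transforms.ToTensor(),
])

# Applied on the fly during training, each epoch sees a different variant:
# augmented = augment(pil_image)
```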
Affiliation(s)
- Mingyu Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Seoul, Korea
169. Debats OA, Litjens GJS, Huisman HJ. Lymph node detection in MR Lymphography: false positive reduction using multi-view convolutional neural networks. PeerJ 2019; 7:e8052. [PMID: 31772836] [PMCID: PMC6876485] [DOI: 10.7717/peerj.8052]
Abstract
Purpose: To investigate whether multi-view convolutional neural networks can improve a fully automated lymph node detection system for pelvic MR lymphography (MRL) images of patients with prostate cancer.
Methods: A fully automated computer-aided detection (CAD) system had previously been developed to detect lymph nodes in MRL studies. The CAD system was extended with three types of 2D multi-view convolutional neural networks (CNNs) aiming to reduce false positives (FP). A 2D multi-view CNN is an efficient approximation of a 3D CNN, and three types were evaluated: a 1-view, 3-view, and 9-view 2D CNN. The three deep learning CNN architectures were trained and configured on retrospective data of 240 prostate cancer patients who received MRL images as the standard of care between January 2008 and April 2010. The MRL used ferumoxtran-10 as a contrast agent and comprised at least two imaging sequences: a 3D T1-weighted and a 3D T2*-weighted sequence. A total of 5089 lymph nodes were annotated by two expert readers, reading in consensus. A first experiment compared the performance with and without CNNs, and a second experiment compared the individual contributions of the 1-view, 3-view, and 9-view architectures to the performance. The performances were visually compared using free-response receiver operating characteristic (FROC) analysis and statistically compared using partial area under the FROC curve analysis. Training and analysis were performed using bootstrapped FROC and 5-fold cross-validation.
Results: Adding multi-view CNNs significantly (p < 0.01) reduced false positive detections. The 3-view and 9-view CNNs outperformed (p < 0.01) the 1-view CNN, reducing FP from 20.6 to 7.8 per image at 80% sensitivity.
Conclusion: Multi-view convolutional neural networks significantly reduce false positives in a lymph node detection system for MRL images, and three orthogonal views are sufficient. At the achieved level of performance, CAD for MRL may help speed up finding lymph nodes and assessing them for potential metastatic involvement.
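The 3-view case can be sketched as follows: extract three orthogonal 2D patches centered on a candidate location and stack them as the input to a shared 2D CNN. Patch size and the stacking convention are assumptions for illustration, not the authors' exact implementation.

```python
# Three-orthogonal-views extraction sketch (patch size assumed).
import numpy as np

def three_views(volume: np.ndarray, center, size=32):
    """Extract three orthogonal size x size patches centered on a candidate."""
    x, y, z = center
    h = size // 2
    axial    = volume[x - h:x + h, y - h:y + h, z]
    coronal  = volume[x - h:x + h, y, z - h:z + h]
    sagittal = volume[x, y - h:y + h, z - h:z + h]
    return np.stack([axial, coronal, sagittal])  # (3, size, size) input for a 2D CNN

vol = np.random.rand(64, 64, 64).astype(np.float32)
print(three_views(vol, center=(32, 32, 32)).shape)  # (3, 32, 32)
```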
Affiliation(s)
- Oscar A Debats
- Department of Radiology and Nuclear Medicine, Radboudumc, Nijmegen, The Netherlands
- Geert J S Litjens
- Department of Radiology and Nuclear Medicine, Radboudumc, Nijmegen, The Netherlands
- Henkjan J Huisman
- Department of Radiology and Nuclear Medicine, Radboudumc, Nijmegen, The Netherlands
170. Khan AA, Narejo GB. Analysis of Abdominal Computed Tomography Images for Automatic Liver Cancer Diagnosis Using Image Processing Algorithm. Curr Med Imaging 2019; 15:972-982. [DOI: 10.2174/1573405615666190716122040]
Abstract
Background:
The application of image processing algorithms for medical image analysis has proven effective in recent years. Imaging techniques provide assistance to radiologists and physicians in the diagnosis of abnormalities in different organs.
Objective:
The proposed algorithm is designed for automatic computer-aided diagnosis of liver cancer from low-contrast CT images. The idea expressed in this article is to classify the malignancy of the liver tumor ahead of liver segmentation and to locate the HCC burden on the liver.
Methods:
A novel Fuzzy Linguistic Constant (FLC) is designed for image enhancement. To classify the enhanced liver image as cancerous or non-cancerous, a fuzzy membership function is applied. The extracted features are assessed for malignancy or benignancy using the structural similarity index. The malignant CT image is further processed for automatic tumor segmentation and grading by applying morphological image processing techniques.
Results:
The validity of the concept is verified on a dataset of 179 clinical cases, consisting of 98 benign and 81 malignant liver tumors. A classification accuracy of 98.3% is achieved by a Support Vector Machine (SVM). The proposed method is able to automatically segment the tumor with an improved detection rate of 78% and a precision value of 0.6.
Conclusion:
The algorithm design offers the radiologist an efficient tool for classifying malignant cases from benign cases. The CAD system allows automatic segmentation of the tumor and locates the tumor burden on the liver. The methodology adopted can aid medical practitioners in tumor diagnosis and surgery planning.
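As a loose sketch of the classification stage, assuming a structural-similarity score plus simple intensity statistics as stand-in features (the paper's exact feature set is not reproduced here), an SVM can be fitted with scikit-learn:

```python
# SVM classification sketch (feature set and toy data are assumptions).
import numpy as np
from skimage.metrics import structural_similarity as ssim
from sklearn.svm import SVC

def features(enhanced: np.ndarray, original: np.ndarray) -> np.ndarray:
    """Assumed features: SSIM between original and enhanced image,
    plus mean and standard deviation of the enhanced image."""
    score = ssim(original, enhanced, data_range=enhanced.max() - enhanced.min())
    return np.array([score, enhanced.mean(), enhanced.std()])

# X: one feature row per case; y: 1 = malignant, 0 = benign (toy placeholders)
rng = np.random.default_rng(3)
X = rng.normal(size=(179, 3))
y = rng.integers(0, 2, 179)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```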
Affiliation(s)
- Ayesha Adil Khan
- Department of Electronics Engineering, NED University of Engineering & Technology, Karachi, Pakistan
- Ghous Bakhsh Narejo
- Department of Electronics Engineering, NED University of Engineering & Technology, Karachi, Pakistan
171. Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09788-3]
172. Arefan D, Mohamed AA, Berg WA, Zuley ML, Sumkin JH, Wu S. Deep learning modeling using normal mammograms for predicting breast cancer risk. Med Phys 2019; 47:110-118. [PMID: 31667873] [DOI: 10.1002/mp.13886]
Abstract
PURPOSE: To investigate two deep learning-based modeling schemes for predicting short-term risk of developing breast cancer using prior normal screening digital mammograms in a case-control setting.
METHODS: We conducted a retrospective Institutional Review Board-approved study on a case-control cohort of 226 patients (113 women diagnosed with breast cancer and 113 controls) who underwent general-population breast cancer screening. For each patient, a prior normal (i.e., with negative or benign findings) digital mammogram examination [two images: a mediolateral oblique (MLO) view and a craniocaudal (CC) view] was collected. Thus, a total of 452 normal images (226 MLO view images and 226 CC view images) of this case-control cohort were analyzed to predict the outcome, i.e., developing breast cancer (cancer cases) or remaining breast cancer-free (controls) within the follow-up period. We implemented an end-to-end deep learning model and a GoogLeNet-LDA model and compared their effects in several experimental settings, using the two mammographic views and two different subregions of the images as model inputs. The proposed models were also compared to logistic regression modeling of mammographic breast density. Area under the receiver operating characteristic curve (AUC) was used as the model performance metric.
RESULTS: The highest AUC was 0.73 [95% confidence interval (CI): 0.68-0.78; GoogLeNet-LDA model on CC view] when using the whole breast and 0.72 (95% CI: 0.67-0.76; GoogLeNet-LDA model on MLO + CC view) when using the dense tissue as the model input. The GoogLeNet-LDA model significantly (all P < 0.05) outperformed the end-to-end GoogLeNet model in all experiments. The CC view was consistently more predictive than the MLO view in both deep learning models, regardless of the input subregions. Both models exhibited superior performance to percent breast density (AUC = 0.54; 95% CI: 0.49-0.59).
CONCLUSIONS: The proposed deep learning modeling approach can predict short-term breast cancer risk using normal screening mammogram images. Larger studies are needed to further reveal the promise of deep learning in enhancing imaging-based breast cancer risk assessment.
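A rough sketch of the GoogLeNet-LDA scheme, assuming 1024-dimensional pooled GoogLeNet features and scikit-learn's LDA as the classifier; the preprocessing, choice of feature layer, and the toy data below are assumptions, not the study's pipeline.

```python
# Deep-features-plus-LDA sketch (feature layer and toy data assumed).
import torch
from torchvision import models
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

backbone = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()   # expose the 1024-dim pooled features
backbone.eval()

def deep_features(batch):           # batch: (B, 3, 224, 224) mammogram crops
    with torch.no_grad():
        return backbone(batch).numpy()

X_train = deep_features(torch.randn(20, 3, 224, 224))  # toy stand-in data
y_train = [0, 1] * 10                                   # control vs. case labels
lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
risk_scores = lda.predict_proba(deep_features(torch.randn(4, 3, 224, 224)))[:, 1]
```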
Affiliation(s)
- Dooman Arefan
- Department of Radiology, University of Pittsburgh School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
- Aly A Mohamed
- Department of Radiology, University of Pittsburgh School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
- Wendie A Berg
- Department of Radiology, University of Pittsburgh School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
- Magee-Womens Hospital of University of Pittsburgh Medical Center, 300 Halket St, Pittsburgh, PA, 15213, USA
- Margarita L Zuley
- Department of Radiology, University of Pittsburgh School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
- Magee-Womens Hospital of University of Pittsburgh Medical Center, 300 Halket St, Pittsburgh, PA, 15213, USA
- Jules H Sumkin
- Department of Radiology, University of Pittsburgh School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
- Magee-Womens Hospital of University of Pittsburgh Medical Center, 300 Halket St, Pittsburgh, PA, 15213, USA
- Shandong Wu
- Departments of Radiology, Biomedical Informatics, Bioengineering, and Intelligent Systems Program, University of Pittsburgh, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
173. Mao Z, Su Y, Xu G, Wang X, Huang Y, Yue W, Sun L, Xiong N. Spatio-temporal deep learning method for ADHD fMRI classification. Inf Sci (N Y) 2019. [DOI: 10.1016/j.ins.2019.05.043]
174. Deep learning in drug discovery: opportunities, challenges and future prospects. Drug Discov Today 2019; 24:2017-2032. [DOI: 10.1016/j.drudis.2019.07.006]
175
|
Jiang Z, Ardywibowo R, Samereh A, Evans HL, Lober WB, Chang X, Qian X, Wang Z, Huang S. A Roadmap for Automatic Surgical Site Infection Detection and Evaluation Using User-Generated Incision Images. Surg Infect (Larchmt) 2019; 20:555-565. [PMID: 31424335 PMCID: PMC6823883 DOI: 10.1089/sur.2019.154] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022] Open
Abstract
Background: Emerging technologies such as smartphones and wearable sensors have enabled the paradigm shift to new patient-centered healthcare, together with recent mobile health (mHealth) app development. One such promising healthcare app is incision monitoring based on patient-taken incision images. In this review, challenges and potential solution strategies are investigated for surgical site infection (SSI) detection and evaluation using surgical site images taken at home. Methods: Potential image quality issues, feature extraction, and surgical site image analysis challenges are discussed. Recent image analysis and machine learning solutions are reviewed to extract meaningful representations as image markers for incision monitoring. Discussions on opportunities and challenges of applying these methods to derive accurate SSI prediction are provided. Conclusions: Interactive image acquisition as well as customized image analysis and machine learning methods for SSI monitoring will play critical roles in developing sustainable mHealth apps to achieve the expected outcomes of patient-taken incision images for effective out-of-clinic patient-centered healthcare with substantially reduced cost.
Collapse
Affiliation(s)
- Ziyu Jiang
- Department of Computer Science and Engineering, Texas A&M University, College Station, Texas
| | - Randy Ardywibowo
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, Texas
| | - Aven Samereh
- Department of Industrial and Systems Engineering, University of Washington, Seattle, Washington
| | - Heather L. Evans
- Department of Surgery, University of South Carolina, Columbia, South Carolina
| | - William B. Lober
- Department of Biobehavioral Nursing and Health Informatics, University of Washington, Seattle, Washington
| | - Xiangyu Chang
- Department of Industrial and Systems Engineering, University of Washington, Seattle, Washington
- Center of Data Science and Information Quality, School of Management, Xi'an Jiaotong University, Shaanxi Sheng, China
| | - Xiaoning Qian
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, Texas
| | - Zhangyang Wang
- Department of Computer Science and Engineering, Texas A&M University, College Station, Texas
| | - Shuai Huang
- Department of Industrial and Systems Engineering, University of Washington, Seattle, Washington
| |
Collapse
|
176
|
Alkadi R, Taher F, El-baz A, Werghi N. A Deep Learning-Based Approach for the Detection and Localization of Prostate Cancer in T2 Magnetic Resonance Images. J Digit Imaging 2019; 32:793-807. [PMID: 30506124 PMCID: PMC6737129 DOI: 10.1007/s10278-018-0160-1] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
We address the problem of prostate lesion detection, localization, and segmentation in T2W magnetic resonance (MR) images. We train a deep convolutional encoder-decoder architecture to simultaneously segment the prostate, its anatomical structure, and the malignant lesions. To incorporate the 3D contextual spatial information provided by the MRI series, we propose a novel 3D sliding window approach, which preserves the 2D domain complexity while exploiting 3D information. Experiments on data from 19 patients, made publicly available by the Initiative for Collaborative Computer Vision Benchmarking (I2CVB), show that our approach outperforms traditional pattern recognition and machine learning approaches by a significant margin. In particular, for the task of cancer detection and localization, the system achieves an average AUC of 0.995, an accuracy of 0.894, and a recall of 0.928. The proposed mono-modal deep learning-based system performs comparably to other multi-modal MR-based systems and could improve the performance of a radiologist in prostate cancer diagnosis and treatment planning.
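One common way to realize a 3D sliding window while preserving 2D complexity is to feed the 2D network a small stack of adjacent slices as channels. The NumPy sketch below illustrates that idea; the window depth, edge padding, and array shapes are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

def sliding_windows_3d(volume, depth=3):
    """Yield stacks of `depth` adjacent axial slices so a 2D-complexity
    network sees local 3D context as extra input channels."""
    half = depth // 2
    padded = np.pad(volume, ((half, half), (0, 0), (0, 0)), mode="edge")
    for z in range(volume.shape[0]):
        yield padded[z:z + depth]          # shape: (depth, H, W)

vol = np.random.rand(19, 256, 256)         # stand-in for a T2W MR series
for window in sliding_windows_3d(vol):
    assert window.shape == (3, 256, 256)
```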
Collapse
Affiliation(s)
- Ruba Alkadi
- Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, United Arab Emirates
| | - Fatma Taher
- Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, United Arab Emirates
| | - Ayman El-baz
- University of Louisville, Louisville, KY 40292 USA
| | - Naoufel Werghi
- Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, United Arab Emirates
| |
Collapse
|
177
|
Vision-Based Autonomous Crack Detection of Concrete Structures Using a Fully Convolutional Encoder-Decoder Network. SENSORS 2019; 19:s19194251. [PMID: 31574963 PMCID: PMC6806320 DOI: 10.3390/s19194251] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/24/2019] [Revised: 09/19/2019] [Accepted: 09/28/2019] [Indexed: 11/24/2022]
Abstract
The visual inspection of massive civil infrastructure is a common approach to maintaining its reliability and structural health. However, this procedure, which relies on human inspectors, requires long inspection times and depends on the subjective and empirical knowledge of the inspectors. To address these limitations, a machine vision-based autonomous crack detection method is proposed using a deep convolutional neural network (DCNN) technique. It consists of a fully convolutional network (FCN) with an encoder-decoder framework for semantic segmentation, which performs pixel-wise classification to accurately detect cracks. The main idea is to capture the global context of a scene, determine whether cracks are in the image, and provide a reduced, essential picture of the crack locations. The visual geometry group network (VGGNet), a variant of the DCNN, is employed as a backbone in the proposed FCN for end-to-end training. The efficacy of the proposed FCN method is tested on a publicly available benchmark dataset of concrete crack images. The experimental results indicate that the proposed method is highly effective for concrete crack classification, obtaining scores of approximately 92% for both recall and average F1.
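An encoder-decoder FCN of this kind can be sketched with an ImageNet-pretrained VGG16 encoder and a stack of transposed convolutions as the decoder. The decoder layout below is an assumption for illustration; the paper's actual decoder and training details may differ.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CrackFCN(nn.Module):
    """Minimal encoder-decoder FCN for pixel-wise crack segmentation,
    with a pretrained VGG16 encoder (illustrative sketch only)."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights="IMAGENET1K_V1")
        self.encoder = vgg.features           # downsamples the input by 32x
        self.decoder = nn.Sequential(         # five 2x upsampling stages
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # crack logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = CrackFCN()
out = model(torch.randn(1, 3, 224, 224))      # -> (1, 1, 224, 224) logits
```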
Collapse
|
178
|
Wang X, Zhong X, Xia M, Jiang W, Huang X, Gu Z, Huang X. Automatic Carotid Artery Detection Using Attention Layer Region-Based Convolution Neural Network. INT J HUM ROBOT 2019. [DOI: 10.1142/s0219843619500154] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Localization of a vessel region of interest (ROI) in medical images provides an interactive approach that can assist doctors in evaluating carotid artery diseases. Accurate vessel detection is a prerequisite for subsequent procedures such as wall segmentation, plaque identification, and 3D reconstruction. Deep learning models such as CNNs have been widely used in medical image processing and achieve state-of-the-art performance. Faster R-CNN is one of the most representative and successful methods for object detection. Using the feature maps output by different layers has proved a useful way to improve detection performance; however, the common approach ensembles the outputs of different layers directly, without considering the distinct characteristics and differing importance of each layer. In this work, we introduce a new network named Attention Layer R-CNN (AL R-CNN) and use it for automatic carotid artery detection, integrating a new module named Attention Layer Part (ALP) into a basic Faster R-CNN system to better assemble feature maps of different layers. Experimental results on a carotid dataset show that our method surpasses other state-of-the-art object detection systems.
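The core idea of weighting layers by learned attention, rather than ensembling them directly, can be sketched as a softmax-weighted sum over feature maps. The `LayerAttentionFusion` module below is a hypothetical simplification that assumes the maps have already been projected and resized to a common shape; the actual ALP module is specified in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerAttentionFusion(nn.Module):
    """Fuse same-shape feature maps from different backbone layers with
    learned per-layer attention weights instead of direct concatenation."""
    def __init__(self, n_layers):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_layers))

    def forward(self, feature_maps):            # list of (N, C, H, W) tensors
        w = F.softmax(self.logits, dim=0)       # attention weights over layers
        return sum(wi * f for wi, f in zip(w, feature_maps))

fuse = LayerAttentionFusion(n_layers=3)
maps = [torch.randn(2, 64, 32, 32) for _ in range(3)]
fused = fuse(maps)                              # (2, 64, 32, 32)
```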
Collapse
Affiliation(s)
- Xiaoyan Wang
- School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, Zhejiang, P. R. China
| | - Xingyu Zhong
- School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, Zhejiang, P. R. China
| | - Ming Xia
- School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, Zhejiang, P. R. China
| | - Weiwei Jiang
- School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, Zhejiang, P. R. China
| | - Xiaojie Huang
- The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou 310009, P. R. China
| | - Zheng Gu
- The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou 310009, P. R. China
| | - Xiangsheng Huang
- The Institute of Automation, Chinese Academy of Sciences, Beijing 100190, P. R. China
| |
Collapse
|
179
|
Chen Y, Ren Y, Fu L, Xiong J, Larsson R, Xu X, Sun J, Zhao J. A 3D Convolutional Neural Network Framework for Polyp Candidates Detection on the Limited Dataset of CT Colonography. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2018:678-681. [PMID: 30440487 DOI: 10.1109/embc.2018.8512305] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
Proper training of convolutional neural networks (CNNs) requires annotated training datasets of large size, which are not currently available in CT colonography (CTC). In this paper, we propose a well-designed framework to address the challenging problem of data shortage in the training of a 3D CNN for the detection of polyp candidates, which is the first and crucial part of computer-aided diagnosis (CAD) for CTC. Our scheme relies on two aspects to reduce overfitting: 1) mass data augmentation, and 2) a flat 3D residual fully convolutional network (FCN). In the first aspect, we utilize extensive rotation, translation, and scaling with continuous values to provide numerous data samples. In the second aspect, we adapt the well-known V-Net to a flat residual FCN to address detection rather than segmentation. Our proposed framework does not rely on accurate colon segmentation or any electronic cleansing of tagged fluid, and experimental results show that it can still achieve high sensitivity with far fewer false positives. Code has been made available at: http://github.com/chenyzstju/ctc_screening_cnn.
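Continuous-valued rotation, translation, and scaling of 3D blocks can be sketched with scipy.ndimage; the parameter ranges below are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from scipy import ndimage

def augment_volume(vol, rng):
    """Continuous-valued rotation, translation and scaling of a 3D block
    (re-crop to a fixed size afterwards, since zoom changes the shape)."""
    vol = ndimage.rotate(vol, rng.uniform(-15, 15), axes=(1, 2),
                         reshape=False, order=1)
    vol = ndimage.shift(vol, rng.uniform(-3, 3, size=3), order=1)
    return ndimage.zoom(vol, rng.uniform(0.9, 1.1), order=1)

rng = np.random.default_rng(0)
patch = np.random.rand(32, 32, 32)   # stand-in for a polyp-candidate block
augmented = augment_volume(patch, rng)
```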
Collapse
|
180
|
Gao Y, Zhang Y, Cao Z, Guo X, Zhang J. Decoding Brain States From fMRI Signals by Using Unsupervised Domain Adaptation. IEEE J Biomed Health Inform 2019; 24:1677-1685. [PMID: 31514162 DOI: 10.1109/jbhi.2019.2940695] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
With the development of deep learning in medical image analysis, decoding brain states from functional magnetic resonance imaging (fMRI) signals has made significant progress. Previous studies often utilized deep neural networks to automatically classify brain activity patterns related to diverse cognitive states. However, due to individual differences between subjects and variation in acquisition parameters across devices, inconsistency in data distributions degrades the performance of cross-subject decoding. Moreover, most current networks are trained in a supervised way, which is ill-suited to real scenarios in which massive amounts of data are unlabeled. To address these problems, we proposed the deep cross-subject adaptation decoding (DCAD) framework to decode brain states. The proposed volume-based 3D feature extraction architecture automatically learns the common spatiotemporal features of labeled source data to generate a distinct descriptor. Then, the distance between the source and target distributions is minimized via an unsupervised domain adaptation (UDA) method, which helps to accurately decode cognitive states across subjects. The performance of the DCAD was evaluated on the task-fMRI (tfMRI) dataset from the Human Connectome Project (HCP). Experimental results showed that the proposed method achieved state-of-the-art decoding performance, with mean accuracies of 81.9% and 84.9% under two conditions of the working memory task (4 and 9 brain states, respectively). Our findings also demonstrate that UDA can mitigate the impact of data distribution shift, providing a superior choice for increasing cross-subject decoding performance without depending on annotations.
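A common way to minimize the distance between the source and target distributions is the maximum mean discrepancy (MMD). The sketch below shows a Gaussian-kernel MMD term that could be added to a classification loss; the paper's exact UDA objective may differ.

```python
import torch

def gaussian_mmd(source, target, sigma=1.0):
    """Squared maximum mean discrepancy with a Gaussian kernel, a common
    choice for measuring the source/target distribution distance in UDA."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(source, source).mean() + k(target, target).mean() \
        - 2 * k(source, target).mean()

src = torch.randn(64, 128)   # features of labeled source subjects
tgt = torch.randn(64, 128)   # features of unlabeled target subjects
adaptation_loss = gaussian_mmd(src, tgt)   # added to the supervised loss
```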
Collapse
|
181
|
Li S, Xu P, Li B, Chen L, Zhou Z, Hao H, Duan Y, Folkert M, Ma J, Huang S, Jiang S, Wang J. Predicting lung nodule malignancies by combining deep convolutional neural network and handcrafted features. Phys Med Biol 2019; 64:175012. [PMID: 31307017 PMCID: PMC7106773 DOI: 10.1088/1361-6560/ab326a] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
To predict lung nodule malignancy with high sensitivity and specificity for low-dose CT (LDCT) lung cancer screening, we propose a fusion algorithm that combines handcrafted features (HF) with the features learned at the output layer of a 3D deep convolutional neural network (CNN). First, we extracted twenty-nine HF, including nine intensity features, eight geometric features, and twelve texture features based on the grey-level co-occurrence matrix (GLCM). We then trained 3D CNNs modified from three 2D CNN architectures (AlexNet, VGG-16 Net and Multi-crop Net) to extract the CNN features learned at the output layer. For each 3D CNN, the CNN features combined with the 29 HF were used as the input for a support vector machine (SVM) coupled with the sequential forward feature selection (SFS) method to select the optimal feature subset and construct the classifiers. The fusion algorithm takes full advantage of the HF and the highest-level CNN features learned at the output layer. By incorporating the intrinsic CNN features, it can overcome the disadvantage of HF, which may not fully reflect the unique characteristics of a particular lesion. Meanwhile, the complementarity of the HF also alleviates the CNNs' requirement for a large-scale annotated dataset. The patient cohort includes 431 malignant nodules and 795 benign nodules extracted from the LIDC/IDRI database. For each investigated CNN architecture, the proposed fusion algorithm achieved the highest AUC, accuracy, sensitivity, and specificity scores among all competitive classification models.
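The fusion-plus-selection step can be sketched with scikit-learn: concatenate the HF and CNN feature matrices, then wrap an SVM in a sequential forward selector. The synthetic data and selector settings below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Dummy stand-ins: 29 handcrafted features plus CNN output-layer features.
rng = np.random.default_rng(0)
hf = rng.normal(size=(200, 29))
cnn_feats = rng.normal(size=(200, 16))
X = np.hstack([hf, cnn_feats])        # fusion by concatenation
y = rng.integers(0, 2, size=200)      # benign (0) / malignant (1)

svm = SVC(kernel="rbf")
sfs = SequentialFeatureSelector(svm, n_features_to_select=5,
                                direction="forward", cv=4)
clf = make_pipeline(StandardScaler(), sfs, svm)
clf.fit(X, y)
mask = clf.named_steps["sequentialfeatureselector"].get_support()
print("selected feature mask:", mask)
```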
Collapse
Affiliation(s)
- Shulong Li
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China
| | - Panpan Xu
- Longgang District People’s Hospital, Shenzhen, 518172, China
| | - Bin Li
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China
| | - Liyuan Chen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, 75235, USA
| | - Zhiguo Zhou
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, 75235, USA
| | - Hongxia Hao
- School of Computer Science and Technology, Xidian University, Xi’an, 710071, China
| | - Yingying Duan
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China
| | - Michael Folkert
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, 75235, USA
| | - Jianhua Ma
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China
| | - Shiying Huang
- School of Traditional Chinese Medicine, Southern Medical University, Guangzhou, 510515, China
| | - Steve Jiang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, 75235, USA
| | - Jing Wang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, 75235, USA
| |
Collapse
|
182
|
Pezeshk A, Hamidian S, Petrick N, Sahiner B. 3-D Convolutional Neural Networks for Automatic Detection of Pulmonary Nodules in Chest CT. IEEE J Biomed Health Inform 2019; 23:2080-2090. [DOI: 10.1109/jbhi.2018.2879449] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
183
|
Multi-Scale Heterogeneous 3D CNN for False-Positive Reduction in Pulmonary Nodule Detection, Based on Chest CT Images. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9163261] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
Abstract
Currently, lung cancer has one of the highest mortality rates because it is often caught too late; early detection is therefore essential to reduce the risk of death. Pulmonary nodules are considered key indicators of primary lung cancer, and developing an efficient and accurate computer-aided diagnosis system for pulmonary nodule detection is an important goal. Typically, such a system consists of two parts: candidate nodule extraction and false-positive reduction of candidate nodules. The reduction of false positives (FPs) among candidate nodules remains an important challenge due to the highly variable morphology of nodules and their similarity in appearance to other organs. In this study, we propose a novel multi-scale heterogeneous three-dimensional (3D) convolutional neural network (MSH-CNN) based on chest computed tomography (CT) images. The design follows three main strategies: (1) using multi-scale 3D nodule blocks with different levels of contextual information as inputs; (2) using two different 3D CNN branches to extract expression features; (3) using a set of weights, determined by backpropagation, to fuse the expression features produced in step (2). To test the performance of the algorithm, we trained and tested on the Lung Nodule Analysis 2016 (LUNA16) dataset, achieving an average competitive performance metric (CPM) score of 0.874 and a sensitivity of 91.7% at two FPs/scan. Moreover, our framework is universal and can easily be extended to other candidate false-positive reduction tasks in 3D object detection, as well as 3D object classification.
Collapse
|
184
|
Tajbakhsh N, Shin JY, Gotway MB, Liang J. Computer-aided detection and visualization of pulmonary embolism using a novel, compact, and discriminative image representation. Med Image Anal 2019; 58:101541. [PMID: 31416007 DOI: 10.1016/j.media.2019.101541] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2018] [Revised: 07/31/2019] [Accepted: 08/01/2019] [Indexed: 01/15/2023]
Abstract
Diagnosing pulmonary embolism (PE) and excluding disorders that may clinically and radiologically simulate PE poses a challenging task for both human and machine perception. In this paper, we propose a novel vessel-oriented image representation (VOIR) that can improve the machine perception of PE through a consistent, compact, and discriminative image representation, and can also improve radiologists' diagnostic capabilities for PE assessment by serving as the backbone of an effective PE visualization system. Specifically, our image representation can be used to train more effective convolutional neural networks for distinguishing PE from PE mimics, and also allows radiologists to inspect the vessel lumen from multiple perspectives, so that they can report filling defects (PE), if any, with confidence. Our image representation offers four advantages: (1) efficiency and compactness: concisely summarizing the 3D contextual information around an embolus in only three image channels; (2) consistency: automatically aligning the embolus in the 3-channel images according to the orientation of the affected vessel; (3) expandability: naturally supporting data augmentation for training CNNs; and (4) multi-view visualization: maximally revealing filling defects. To evaluate the effectiveness of VOIR for PE diagnosis, we use 121 CTPA datasets with a total of 326 emboli. We first compare VOIR with two other compact alternatives using six CNN architectures of varying depths and under varying amounts of labeled training data. Our experiments demonstrate that VOIR enables faster training of a higher-performing model compared to the other compact representations, even in the absence of deep architectures and large labeled training sets. Our experiments comparing VOIR with the 3D image representation further demonstrate that the 2D CNN trained with VOIR achieves a significant performance gain over the 3D CNNs. Our robustness analyses also show that the suggested PE CAD is robust to the choice of CT scanner machines and the physical size of crops used for training. Finally, our PE CAD is ranked second at the PE challenge in the category of 0 mm localization error.
Collapse
Affiliation(s)
- Nima Tajbakhsh
- Department of Biomedical Informatics, Arizona State University, Scottsdale, AZ, USA
| | - Jae Y Shin
- Department of Biomedical Informatics, Arizona State University, Scottsdale, AZ, USA
| | | | - Jianming Liang
- Department of Biomedical Informatics, Arizona State University, Scottsdale, AZ, USA.
| |
Collapse
|
185
|
Zhou W, Wang G, Xie G, Zhang L. Grading of hepatocellular carcinoma based on diffusion weighted images with multiple b-values using convolutional neural networks. Med Phys 2019; 46:3951-3960. [PMID: 31169907 DOI: 10.1002/mp.13642] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2019] [Revised: 04/09/2019] [Accepted: 05/29/2019] [Indexed: 12/16/2022] Open
Abstract
PURPOSE To effectively grade hepatocellular carcinoma (HCC) based on deep features derived from diffusion-weighted images (DWI) with multiple b-values using convolutional neural networks (CNN). MATERIALS AND METHODS Ninety-eight subjects with 100 pathologically confirmed HCC lesions from July 2012 to October 2018 were included in this retrospective study, comprising 47 low-grade and 53 high-grade HCCs. DWI was performed for each subject with a 3.0T MR scanner in a breath-hold routine with three b-values (0, 100, and 600 s/mm²). First, logarithmic transformation was performed on the original DWI images to generate log maps (logb0, logb100, and logb600). Then, a resampling method was used to extract multiple 2D axial planes of HCCs from the log maps to enlarge the training dataset. Subsequently, a 2D CNN was used to extract deep features of the log maps for HCCs. Finally, the deep features derived from the three b-value log maps were fused for HCC malignancy classification. Specifically, a deeply supervised loss function was devised to further improve the performance of lesion characterization. The dataset was split into two parts: the training and validation set (60 HCCs) and the fixed test set (40 HCCs). Four-fold cross-validation with 10 repetitions was performed to assess the performance of deep features extracted from single b-value images for HCC grading using the training and validation set. Receiver operating characteristic (ROC) curves and area under the curve (AUC) values were used to assess how well the proposed deep feature fusion method differentiated low-grade from high-grade HCC in the fixed test set. RESULTS The proposed fusion of deep features derived from logb0, logb100, and logb600 with the deeply supervised loss function yielded the highest accuracy for HCC grading (80%), outperforming deep features derived directly from the ADC map (72.5%) and from the original b0 (65%), b100 (68%), and b600 (70%) images. Furthermore, the AUC values for the deep features of the ADC map, the deep feature fusion with concatenation, and the proposed deep feature fusion with the deeply supervised loss function were 0.73, 0.78, and 0.83, respectively. CONCLUSION The proposed fusion of deep features derived from the logarithm of the three b-value images yields high performance for HCC grading, thus providing a promising approach for the assessment of DWI in lesion characterization.
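The log-map construction is a simple per-pixel transform. A minimal sketch, assuming a small epsilon offset to avoid log(0) (the paper does not state how zero-valued voxels are handled):

```python
import numpy as np

def log_map(dwi, eps=1.0):
    """Logarithmic transform of a DWI image; eps guards against log(0)
    and is an assumption of this sketch."""
    return np.log(dwi.astype(np.float64) + eps)

# Stand-ins for co-registered b = 0, 100, 600 s/mm^2 images of one slice.
b0, b100, b600 = (np.random.rand(224, 224) * 1000 for _ in range(3))
log_inputs = np.stack([log_map(b) for b in (b0, b100, b600)])  # (3, H, W)
```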
Collapse
Affiliation(s)
- Wu Zhou
- School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, China, 510006
| | - Guangyi Wang
- Department of Radiology, Guangdong General Hospital, Guangzhou, China, 510080
| | - Guoxi Xie
- School of Basic Medical Sciences, Guangzhou Medical University, Guangzhou, China, 510182
| | - Lijuan Zhang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 510085
| |
Collapse
|
186
|
Indraswari R, Kurita T, Arifin AZ, Suciati N, Astuti ER. Multi-projection deep learning network for segmentation of 3D medical images. Pattern Recognit Lett 2019. [DOI: 10.1016/j.patrec.2019.08.003] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
|
187
|
Zhao Y, Cumming P, Rominger A, Zuo C, Shi K, Wu P, Wang J, Li H, Navab N, Yakushev I, Weber W, Schwaiger M, Huang SC. A 3D Deep Residual Convolutional Neural Network for Differential Diagnosis of Parkinsonian Syndromes on 18F-FDG PET Images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2019:3531-3534. [PMID: 31946640 DOI: 10.1109/embc.2019.8856747] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Idiopathic Parkinson's disease and atypical parkinsonian syndromes have similar symptoms at early disease stages, which makes early differential diagnosis difficult. Positron emission tomography with 18F-FDG can assess early neuronal dysfunction in neurodegenerative diseases and is well established for clinical use. In the past decades, machine learning methods have been widely used for the differential diagnosis of parkinsonism based on metabolic patterns. Unlike these conventional machine learning methods relying on hand-crafted features, deep convolutional neural networks, which have achieved significant success in medical applications recently, have the advantage of learning salient feature representations automatically and effectively. This advantage may yield subtle features, invisible in conventional analysis, that enhance diagnostic accuracy. Therefore, this paper develops a 3D deep convolutional neural network on 18F-FDG PET images for automated early diagnosis. Furthermore, we used saliency maps to depict the decision mechanism of the deep learning method and assist the physiological interpretation of its performance. The proposed method was evaluated on a dataset of 920 patients. In addition to improving accuracy in the differential diagnosis of parkinsonism compared to state-of-the-art approaches, the deep learning method also discovered salient features in a number of critical regions (e.g., midbrain) that are widely accepted as characteristic pathological regions for movement disorders but were ignored in conventional analysis of FDG PET images.
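Gradient-based saliency, i.e., backpropagating the winning class score to the input and taking the absolute gradient, is one standard way to produce such maps. The sketch below uses a toy 3D network as a stand-in; the authors' exact saliency method and architecture may differ.

```python
import torch

def saliency_map(model, volume):
    """|d(top-class score)/d(input)| highlights the voxels that drive
    the prediction (illustrative gradient-based saliency)."""
    volume = volume.clone().requires_grad_(True)
    score = model(volume).max()          # top-class logit
    score.backward()
    return volume.grad.abs().squeeze()

model = torch.nn.Sequential(             # toy stand-in for a 3D residual CNN
    torch.nn.Conv3d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool3d(1), torch.nn.Flatten(), torch.nn.Linear(8, 3))
pet = torch.randn(1, 1, 32, 32, 32)      # stand-in for an 18F-FDG PET volume
sal = saliency_map(model, pet)           # (32, 32, 32) saliency volume
```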
Collapse
|
188
|
Asaturyan H, Gligorievski A, Villarini B. Morphological and multi-level geometrical descriptor analysis in CT and MRI volumes for automatic pancreas segmentation. Comput Med Imaging Graph 2019; 75:1-13. [DOI: 10.1016/j.compmedimag.2019.04.004] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2018] [Revised: 12/10/2018] [Accepted: 04/26/2019] [Indexed: 11/24/2022]
|
189
|
Jang HJ, Cho KO. Applications of deep learning for the analysis of medical data. Arch Pharm Res 2019; 42:492-504. [PMID: 31140082 DOI: 10.1007/s12272-019-01162-9] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2018] [Accepted: 05/20/2019] [Indexed: 02/06/2023]
Abstract
Over the past decade, deep learning has demonstrated superior performance in solving many problems in various fields of medicine compared with other machine learning methods. To understand how deep learning has surpassed traditional machine learning techniques, in this review we briefly explore the basic learning algorithms underlying deep learning. In addition, the procedures for building deep learning-based classifiers for seizure electroencephalograms and gastric tissue slides are described as examples to demonstrate the simplicity and effectiveness of deep learning applications. Finally, we review the clinical applications of deep learning in radiology, pathology, and drug discovery, where deep learning has been actively adopted. Given the great advantages of deep learning techniques, they will be increasingly and widely utilized across medicine in the coming decades.
Collapse
Affiliation(s)
- Hyun-Jong Jang
- Department of Physiology, Department of Biomedicine & Health Sciences, Catholic Neuroscience Institute, College of Medicine, The Catholic University of Korea, Seoul, 06591, South Korea
| | - Kyung-Ok Cho
- Department of Pharmacology, Department of Biomedicine & Health Sciences, Catholic Neuroscience Institute, Institute of Aging and Metabolic Diseases, College of Medicine, The Catholic University of Korea, 222 Banpo-Daero, Seocho-Gu, Seoul, 06591, South Korea.
| |
Collapse
|
190
|
Xiong J, Li X, Lu L, Lawrence SH, Fu X, Zhao J, Zhao B. Implementation strategy of a CNN model affects the performance of CT assessment of EGFR mutation status in lung cancer patients. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2019; 7:64583-64591. [PMID: 32953368 PMCID: PMC7500487 DOI: 10.1109/access.2019.2916557] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
OBJECTIVE To compare CNN models implemented using different strategies in the CT assessment of EGFR mutation status in patients with lung adenocarcinoma. METHODS 1,010 consecutive lung adenocarcinoma patients with known EGFR mutation status were randomly divided into a training set (n=810) and a testing set (n=200). CNN models were constructed based on the ResNet-101 architecture but implemented using different strategies: filter dimensionality (2D/3D), input sizes (small/middle/large and their fusion), slicing methods (transverse plane only and arbitrary multi-view planes), and training approaches (from scratch and fine-tuning a pre-trained CNN). The performance of the CNN models was compared using AUC. RESULTS The fusion approach yielded consistently better performance than the other input sizes, although the effect often did not reach statistical significance. Multi-view slicing was significantly superior to the transverse method when fine-tuning a pre-trained 2D CNN, but not for a CNN trained from scratch. The 3D CNN was significantly better than the 2D transverse plane method but only marginally better than the multi-view slicing method when trained from scratch. The highest performance (AUC=0.838) was achieved by the fine-tuned 2D CNN model built using the fusion input size and the multi-view slicing method. CONCLUSION The assessment of EGFR mutation status is more accurate when CNN models use more spatial information and are fine-tuned by transfer learning. Our findings on CNN implementation strategy could guide other medical 3D imaging applications. Compared with other published studies using medical images to identify EGFR mutation status, our CNN model achieved the best performance in the largest patient cohort.
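Multi-view slicing can be approximated, in its simplest form, by sampling the three orthogonal planes through the nodule centre; the paper uses arbitrary view planes, so this NumPy sketch is a deliberate simplification with assumed shapes.

```python
import numpy as np

def orthogonal_views(vol):
    """Return the axial, coronal and sagittal planes through the volume
    centre -- a simplified stand-in for arbitrary multi-view slicing."""
    z, y, x = (s // 2 for s in vol.shape)
    return vol[z, :, :], vol[:, y, :], vol[:, :, x]

nodule = np.random.rand(64, 64, 64)        # stand-in for a CT nodule crop
axial, coronal, sagittal = orthogonal_views(nodule)
```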
Collapse
Affiliation(s)
- Junfeng Xiong
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240 China
- Department of Radiology, Columbia University Medical Center, NY 10032 USA
| | - Xiaoyang Li
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030 China
| | - Lin Lu
- Department of Radiology, Columbia University Medical Center, NY 10032 USA
| | | | - Xiaolong Fu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030 China
| | - Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240 China
| | - Binsheng Zhao
- Department of Radiology, Columbia University Medical Center, NY 10032 USA
| |
Collapse
|
191
|
Mazurowski MA, Buda M, Saha A, Bashir MR. Deep learning in radiology: An overview of the concepts and a survey of the state of the art with focus on MRI. J Magn Reson Imaging 2019; 49:939-954. [PMID: 30575178 PMCID: PMC6483404 DOI: 10.1002/jmri.26534] [Citation(s) in RCA: 244] [Impact Index Per Article: 40.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2018] [Revised: 09/14/2018] [Accepted: 09/17/2018] [Indexed: 12/15/2022] Open
Abstract
Deep learning is a branch of artificial intelligence where networks of simple interconnected units are used to extract patterns from data in order to solve complex problems. Deep-learning algorithms have shown groundbreaking performance in a variety of sophisticated tasks, especially those related to images. They have often matched or exceeded human performance. Since the medical field of radiology mainly relies on extracting useful information from images, it is a very natural application area for deep learning, and research in this area has rapidly grown in recent years. In this article, we discuss the general context of radiology and opportunities for application of deep-learning algorithms. We also introduce basic concepts of deep learning, including convolutional neural networks. Then, we present a survey of the research in deep learning applied to radiology. We organize the studies by the types of specific tasks that they attempt to solve and review a broad range of deep-learning algorithms being utilized. Finally, we briefly discuss opportunities and challenges for incorporating deep learning in the radiology practice of the future. Level of Evidence: 3 Technical Efficacy: Stage 1 J. Magn. Reson. Imaging 2019;49:939-954.
Collapse
Affiliation(s)
- Maciej A. Mazurowski
- Department of Radiology, Duke University, Durham, NC
- Department of Electrical and Computer Engineering, Duke University, Durham, NC
- Duke Medical Physics Program, Duke University, Durham, NC
| | - Mateusz Buda
- Department of Radiology, Duke University, Durham, NC
| | | | - Mustafa R. Bashir
- Department of Radiology, Duke University, Durham, NC
- Center for Advanced Magnetic Resonance Development, Duke University, Durham, NC
| |
Collapse
|
192
|
Kim BC, Yoon JS, Choi JS, Suk HI. Multi-scale gradual integration CNN for false positive reduction in pulmonary nodule detection. Neural Netw 2019; 115:1-10. [PMID: 30909118 DOI: 10.1016/j.neunet.2019.03.003] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2018] [Revised: 12/24/2018] [Accepted: 03/07/2019] [Indexed: 12/22/2022]
Abstract
Lung cancer is a global and dangerous disease, and its early detection is crucial for reducing the risk of mortality. In this regard, there has been great interest in developing computer-aided systems for detecting pulmonary nodules as early as possible on thoracic CT scans. In general, a nodule detection system involves two steps: (i) candidate nodule detection at high sensitivity, which captures many false positives, and (ii) false-positive reduction among the candidates. However, due to the high variation of nodule morphological characteristics and the possibility of mistaking them for neighboring organs, candidate nodule detection remains a challenge. In this study, we propose a novel Multi-scale Gradual Integration Convolutional Neural Network (MGI-CNN), designed with three main strategies: (1) to use multi-scale inputs with different levels of contextual information, (2) to use abstract information inherent in different input scales with gradual integration, and (3) to learn multi-stream feature integration in an end-to-end manner. To verify the efficacy of the proposed network, we conducted exhaustive experiments on the LUNA16 challenge dataset, comparing the performance of the proposed method with state-of-the-art methods in the literature. On two candidate subsets of the LUNA16 dataset, i.e., V1 and V2, our method achieved an average CPM of 0.908 (V1) and 0.942 (V2), outperforming comparable methods by a large margin. Our MGI-CNN is implemented in Python using TensorFlow and the source code is available from https://github.com/ku-milab/MGICNN.
Collapse
Affiliation(s)
- Bum-Chae Kim
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
| | - Jee Seok Yoon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
| | - Jun-Sik Choi
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
| | - Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea.
| |
Collapse
|
193
|
Asaoka R, Murata H, Hirasawa K, Fujino Y, Matsuura M, Miki A, Kanamoto T, Ikeda Y, Mori K, Iwase A, Shoji N, Inoue K, Yamagami J, Araie M. Using Deep Learning and Transfer Learning to Accurately Diagnose Early-Onset Glaucoma From Macular Optical Coherence Tomography Images. Am J Ophthalmol 2019; 198:136-145. [PMID: 30316669 DOI: 10.1016/j.ajo.2018.10.007] [Citation(s) in RCA: 139] [Impact Index Per Article: 23.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2018] [Revised: 10/02/2018] [Accepted: 10/03/2018] [Indexed: 01/26/2023]
Abstract
PURPOSE We sought to construct and evaluate a deep learning (DL) model to diagnose early glaucoma from spectral-domain optical coherence tomography (OCT) images. DESIGN Artificial intelligence diagnostic tool development, evaluation, and comparison. METHODS This multi-institution study included pretraining data of 4316 OCT images (RS3000) from 1371 eyes with open-angle glaucoma (OAG), regardless of glaucoma stage, and 193 normal eyes. Training data included OCT-1000/2000 images from 94 eyes of 94 patients with early OAG (mean deviation > -5.0 dB) and 84 eyes of 84 normal subjects. Testing data included OCT-1000/2000 images from 114 eyes of 114 patients with early OAG (mean deviation > -5.0 dB) and 82 eyes of 82 normal subjects. A DL (convolutional neural network) classifier was trained using the pretraining dataset, followed by a second round of training using an independent training dataset. The DL model input features were the 8 × 8 grid macular retinal nerve fiber layer thickness and ganglion cell complex layer thickness from spectral-domain OCT. Diagnostic accuracy was investigated in the testing dataset. For comparison, diagnostic accuracy was also evaluated using random forests and support vector machine models. The primary outcome measure was the area under the receiver operating characteristic curve (AROC). RESULTS The AROC with the DL model was 93.7%. The AROC decreased significantly, to between 76.6% and 78.8%, without the pretraining process. Significantly smaller AROCs were obtained with the random forests and support vector machine models (82.0% and 67.4%, respectively). CONCLUSION A DL model for glaucoma using spectral-domain OCT offers a substantive increase in diagnostic performance.
Collapse
|
194
|
Ragab DA, Sharkas M, Marshall S, Ren J. Breast cancer detection using deep convolutional neural networks and support vector machines. PeerJ 2019; 7:e6201. [PMID: 30713814 PMCID: PMC6354665 DOI: 10.7717/peerj.6201] [Citation(s) in RCA: 109] [Impact Index Per Article: 18.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2018] [Accepted: 12/03/2018] [Indexed: 01/28/2023] Open
Abstract
It is important to detect breast cancer as early as possible. In this manuscript, a new methodology for classifying breast cancer using deep learning and segmentation techniques is introduced. A new computer-aided detection (CAD) system is proposed for classifying benign and malignant mass tumors in breast mammography images. In this CAD system, two segmentation approaches are used. The first approach involves determining the region of interest (ROI) manually, while the second uses threshold- and region-based segmentation. A deep convolutional neural network (DCNN) is used for feature extraction. A well-known DCNN architecture named AlexNet is used and is fine-tuned to classify two classes instead of 1,000. The last fully connected (fc) layer is connected to a support vector machine (SVM) classifier to obtain better accuracy. The results are obtained using the following publicly available datasets: (1) the Digital Database for Screening Mammography (DDSM); and (2) the Curated Breast Imaging Subset of DDSM (CBIS-DDSM). Training on a large amount of data gives a high accuracy rate; however, biomedical datasets contain a relatively small number of samples due to limited patient volume. Accordingly, data augmentation, which increases the size of the input data by generating new data from the original input, was applied; of its many forms, rotation is used here. The accuracy of the newly trained DCNN architecture is 71.01% when cropping the ROI manually from the mammogram. The highest area under the curve (AUC) achieved was 0.88 for the samples obtained from both segmentation techniques. Moreover, when using the samples obtained from the CBIS-DDSM, the accuracy of the DCNN increased to 73.6%. Consequently, the SVM accuracy becomes 87.2% with an AUC equal to 0.94. This is the highest AUC value compared with previous work under the same conditions.
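The "fc features into an SVM" step can be sketched with torchvision's AlexNet standing in for the fine-tuned network; the dummy crops and labels below are illustrative assumptions.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# AlexNet with the last fc layer dropped, leaving a 4096-d feature output.
alexnet = models.alexnet(weights="IMAGENET1K_V1")
alexnet.classifier = alexnet.classifier[:-1]
alexnet.eval()

@torch.no_grad()
def fc_features(batch):
    """Map (N, 3, 224, 224) ROI crops to (N, 4096) fc-layer features."""
    return alexnet(batch).numpy()

# Dummy stand-ins for segmented ROI crops and benign/malignant labels.
X = fc_features(torch.randn(10, 3, 224, 224))
y = [0] * 5 + [1] * 5
svm = SVC(kernel="linear").fit(X, y)
```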
Collapse
Affiliation(s)
- Dina A Ragab
- Electronics and Communications Engineering Department, Arab Academy for Science, Technology, and Maritime Transport (AASTMT), Alexandria, Egypt; Electronic & Electrical Engineering Department, University of Strathclyde, Glasgow, United Kingdom
| | - Maha Sharkas
- Electronics and Communications Engineering Department, Arab Academy for Science, Technology, and Maritime Transport (AASTMT), Alexandria, Egypt
| | - Stephen Marshall
- Electronic & Electrical Engineering Department, University of Strathclyde, Glasgow, United Kingdom
| | - Jinchang Ren
- Electronic & Electrical Engineering Department, University of Strathclyde, Glasgow, United Kingdom
| |
Collapse
|
195
|
Akay A, Hess H. Deep Learning: Current and Emerging Applications in Medicine and Technology. IEEE J Biomed Health Inform 2019; 23:906-920. [PMID: 30676989 DOI: 10.1109/jbhi.2019.2894713] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Machine learning is enabling researchers to analyze and understand increasingly complex physical and biological phenomena in traditional fields such as biology, medicine, and engineering and emerging fields like synthetic biology, automated chemical synthesis, and biomanufacturing. These fields require new paradigms toward understanding increasingly complex data and converting such data into medical products and services for patients. The move toward deep learning and complex modeling is an attempt to bridge the gap between acquiring massive quantities of complex data, and converting such data into practical insights. Here, we provide an overview of the field of machine learning, its current applications and needs in traditional and emerging fields, and discuss an illustrative attempt at using deep learning to understand swarm behavior of molecular shuttles.
Collapse
|
196
|
Shafique S, Tehsin S. Acute Lymphoblastic Leukemia Detection and Classification of Its Subtypes Using Pretrained Deep Convolutional Neural Networks. Technol Cancer Res Treat 2019; 17:1533033818802789. [PMID: 30261827 PMCID: PMC6161200 DOI: 10.1177/1533033818802789] [Citation(s) in RCA: 82] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023] Open
Abstract
Leukemia is a fatal disease of the white blood cells that affects the blood and bone marrow in the human body. We deployed a deep convolutional neural network for automated detection of acute lymphoblastic leukemia and classification of its subtypes into 4 classes, that is, L1, L2, L3, and Normal, which were mostly neglected in the previous literature. In contrast to training from scratch, we deployed a pretrained AlexNet that was fine-tuned on our dataset. The last layers of the pretrained network were replaced with new layers that classify the input images into 4 classes. To reduce overfitting, a data augmentation technique was used. We also compared datasets in different color models to check performance across different color images. For acute lymphoblastic leukemia detection, we achieved a sensitivity of 100%, specificity of 98.11%, and accuracy of 99.50%; for subtype classification, the sensitivity was 96.74%, specificity was 99.03%, and accuracy was 96.06%. Unlike standard methods, our proposed method achieved high accuracy without any need for microscopic image segmentation.
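Re-purposing a pretrained AlexNet for a 4-class problem, plus rotation augmentation, can be sketched as follows; the exact layers replaced and the rotation range are assumptions of this sketch rather than the paper's settings.

```python
import torch.nn as nn
import torchvision.models as models
from torchvision import transforms

# Load pretrained AlexNet and swap the final 1000-way layer for a 4-way one.
model = models.alexnet(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 4)   # L1, L2, L3, Normal

# Rotation-based augmentation applied to blood-smear crops during training.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=180),
    transforms.ToTensor(),
])
```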
Collapse
Affiliation(s)
- Sarmad Shafique
- Department of Computer Science, Bahria University, Islamabad, Pakistan
| | - Samabia Tehsin
- Department of Computer Science, Bahria University, Islamabad, Pakistan
- Samabia Tehsin, PhD, Department of Computer Science, Bahria University, Islamabad 46000, Pakistan.
| |
Collapse
|
197
|
Fahmy AS, El-Rewaidy H, Nezafat M, Nakamori S, Nezafat R. Automated analysis of cardiovascular magnetic resonance myocardial native T1 mapping images using fully convolutional neural networks. J Cardiovasc Magn Reson 2019; 21:7. [PMID: 30636630 PMCID: PMC6330747 DOI: 10.1186/s12968-018-0516-1] [Citation(s) in RCA: 68] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2018] [Accepted: 12/05/2018] [Indexed: 12/27/2022] Open
Abstract
BACKGROUND Cardiovascular magnetic resonance (CMR) myocardial native T1 mapping allows assessment of interstitial diffuse fibrosis. In this technique, global and regional T1 are measured manually by drawing regions of interest in motion-corrected T1 maps. The manual analysis adds to an already lengthy CMR analysis workflow and impacts measurement reproducibility. In this study, we propose an automated method for combined myocardium segmentation, alignment, and T1 calculation for myocardial T1 mapping. METHODS A deep fully convolutional network (FCN) was used for myocardium segmentation in T1-weighted images. The segmented myocardium was then resampled on a polar grid whose origin is located at the center of mass of the segmented myocardium. Myocardial T1 maps were reconstructed from the resampled T1-weighted images using curve fitting. The FCN was trained and tested using manually segmented images for 210 patients (5 slices, 11 inversion times per patient). An additional image dataset of 455 patients (5 slices and 11 inversion times per patient), analyzed by an expert reader using a semi-automatic tool, was used to validate the automatically calculated global and regional T1 values. Bland-Altman analysis, the Pearson correlation coefficient r, and the Dice similarity coefficient (DSC) were used to evaluate the performance of the FCN-based analysis on a per-patient and per-slice basis. Inter-observer variability was assessed using the intraclass correlation coefficient (ICC) of the T1 values calculated by the FCN-based automatic method and two readers. RESULTS The FCN achieved fast segmentation (< 0.3 s/image) with a high DSC (0.85 ± 0.07). The automatically and manually calculated T1 values (1091 ± 59 ms and 1089 ± 59 ms, respectively) were highly correlated in per-patient (r = 0.82; slope = 1.01; p < 0.0001) and per-slice (r = 0.72; slope = 1.01; p < 0.0001) analyses. Bland-Altman analysis showed good agreement between the automated and manual measurements, with 95% of measurements within the limits of agreement in both per-patient and per-slice analyses. The ICC of the T1 calculations by the automatic method vs reader 1 and reader 2 was 0.86/0.56 and 0.74/0.49, respectively, in the per-patient/per-slice analyses, comparable to that between the two expert readers (0.72/0.58 in per-patient/per-slice analyses). CONCLUSION The proposed FCN-based image processing platform allows fast and automatic analysis of myocardial native T1 mapping images, mitigating the burden and observer-related variability of manual analysis.
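The per-pixel curve fitting can be illustrated with the common three-parameter inversion-recovery model and a Look-Locker correction; the abstract does not state the exact fitting model used, so this is an assumed sketch on synthetic data.

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_model(ti, a, b, t1_star):
    """Three-parameter inversion-recovery model: S(TI) = A - B*exp(-TI/T1*)."""
    return a - b * np.exp(-ti / t1_star)

# Synthetic signal for one myocardial pixel over 11 inversion times (ms).
ti = np.linspace(100, 3000, 11)
rng = np.random.default_rng(0)
signal = ir_model(ti, 300.0, 600.0, 1100.0) + rng.normal(0, 2, ti.size)

(a, b, t1_star), _ = curve_fit(ir_model, ti, signal,
                               p0=(signal.max(), 2 * signal.max(), 1000.0))
t1 = t1_star * (b / a - 1)   # Look-Locker correction of the apparent T1*
print(f"native T1 ~ {t1:.0f} ms")
```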
Collapse
Affiliation(s)
- Ahmed S. Fahmy
- Department of Medicine, Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Ave, Boston, MA 02215 USA
- Biomedical Engineering Department, Cairo University, Cairo, Egypt
| | - Hossam El-Rewaidy
- Department of Medicine, Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Ave, Boston, MA 02215 USA
| | - Maryam Nezafat
- Department of Medicine, Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Ave, Boston, MA 02215 USA
| | - Shiro Nakamori
- Department of Medicine, Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Ave, Boston, MA 02215 USA
| | - Reza Nezafat
- Department of Medicine, Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Ave, Boston, MA 02215 USA
| |
Collapse
|
198
|
Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2019; 46:e1-e36. [PMID: 30367497 PMCID: PMC9560030 DOI: 10.1002/mp.13264] [Citation(s) in RCA: 398] [Impact Index Per Article: 66.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2018] [Revised: 09/18/2018] [Accepted: 10/09/2018] [Indexed: 12/15/2022] Open
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and strategies that researchers have taken to address these challenges; and (c) identify some of the promising avenues for the future both in terms of applications as well as technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Collapse
Affiliation(s)
- Berkman Sahiner
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
| | - Aria Pezeshk
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
| | | | - Xiaosong Wang
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
| | - Karen Drukker
- Department of Radiology, University of Chicago, Chicago, IL 60637, USA
| | - Kenny H. Cha
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
| | - Ronald M. Summers
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
| | | |
Collapse
|
199
|
Ren Y, Ma J, Xiong J, Chen Y, Lu L, Zhao J. Improved False Positive Reduction by Novel Morphological Features for Computer-Aided Polyp Detection in CT Colonography. IEEE J Biomed Health Inform 2019; 23:324-333. [DOI: 10.1109/jbhi.2018.2808199] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
200
|
Khosravan N, Celik H, Turkbey B, Jones EC, Wood B, Bagci U. A collaborative computer aided diagnosis (C-CAD) system with eye-tracking, sparse attentional model, and deep learning. Med Image Anal 2019; 51:101-115. [PMID: 30399507 PMCID: PMC6407631 DOI: 10.1016/j.media.2018.10.010] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2017] [Revised: 07/27/2018] [Accepted: 10/26/2018] [Indexed: 12/19/2022]
Abstract
Computer-aided diagnosis (CAD) tools help radiologists reduce diagnostic errors such as missed tumors and misdiagnoses. Vision researchers have been analyzing the behavior of radiologists during screening to understand how and why they miss tumors or misdiagnose. In this regard, eye-trackers have been instrumental in understanding radiologists' visual search processes. However, most relevant studies in this area are not compatible with realistic radiology reading rooms. In this study, we aim to develop a paradigm-shifting CAD system, called collaborative CAD (C-CAD), that unifies CAD and eye-tracking systems in realistic radiology room settings. We first developed an eye-tracking interface providing radiologists with a real radiology reading-room experience. Second, we propose a novel algorithm that unifies eye-tracking data and a CAD system. Specifically, we present a new graph-based clustering and sparsification algorithm to transform eye-tracking (gaze) data into a graph model, to interpret gaze patterns quantitatively and qualitatively. The proposed C-CAD collaborates with radiologists via eye-tracking technology and helps them improve their diagnostic decisions. The C-CAD leverages radiologists' search efficiency by processing their gaze patterns. Furthermore, the C-CAD incorporates a deep learning algorithm in a newly designed multi-task learning platform to segment and diagnose suspicious areas simultaneously. The proposed C-CAD system has been tested in a lung cancer screening experiment with multiple radiologists reading low-dose chest CTs. Promising results support the efficiency, accuracy, and applicability of the proposed C-CAD system in a real radiology room setting. We have also shown that our framework generalizes to more complex applications such as prostate cancer screening with multi-parametric magnetic resonance imaging (mp-MRI).
Collapse
Affiliation(s)
- Naji Khosravan
- Center for Research in Computer Vision, University of Central Florida, FL, United States
| | - Haydar Celik
- Clinical Center, National Institutes of Health, Bethesda, MD, United States
| | - Baris Turkbey
- Clinical Center, National Institutes of Health, Bethesda, MD, United States
| | - Elizabeth C Jones
- Clinical Center, National Institutes of Health, Bethesda, MD, United States
| | - Bradford Wood
- Clinical Center, National Institutes of Health, Bethesda, MD, United States
| | - Ulas Bagci
- Center for Research in Computer Vision, University of Central Florida, FL, United States.
| |
Collapse
|