51. Kang J, Ullah Z, Gwak J. MRI-Based Brain Tumor Classification Using Ensemble of Deep Features and Machine Learning Classifiers. Sensors 2021;21(6):2222. [PMID: 33810176] [PMCID: PMC8004778] [DOI: 10.3390/s21062222] [Received: 02/10/2021] [Revised: 03/13/2021] [Accepted: 03/17/2021]
Abstract
Brain tumor classification plays an important role in clinical diagnosis and effective treatment. In this work, we propose a method for brain tumor classification using an ensemble of deep features and machine learning classifiers. In our proposed framework, we adopt the concept of transfer learning and use several pre-trained deep convolutional neural networks to extract deep features from brain magnetic resonance (MR) images. The extracted deep features are then evaluated by several machine learning classifiers. The top three deep features that perform well across the classifiers are selected and concatenated into an ensemble of deep features, which is then fed into several machine learning classifiers to predict the final output. To evaluate the different pre-trained models as deep feature extractors, the machine learning classifiers, and the effectiveness of an ensemble of deep features for brain tumor classification, we use three different brain magnetic resonance imaging (MRI) datasets that are openly accessible on the web. Experimental results demonstrate that an ensemble of deep features can improve performance significantly and that, in most cases, a support vector machine (SVM) with a radial basis function (RBF) kernel outperforms the other machine learning classifiers, especially on large datasets.
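The feature-ensembling pipeline described in this abstract can be sketched in a few lines. The snippet below is a stdlib-only illustration in which stub functions stand in for the pre-trained CNN extractors and a nearest-centroid rule stands in for the SVM-RBF classifier; all names and values are illustrative, not the authors' code.

```python
# Sketch of the "ensemble of deep features" idea: concatenate feature
# vectors from several (stubbed) pre-trained extractors, then feed the
# combined vector to a simple classifier. Stdlib only; the real paper
# uses CNN features and an SVM with an RBF kernel instead.
import math

def extractor_a(image):  # stand-in for one pre-trained CNN
    return [sum(image), max(image)]

def extractor_b(image):  # stand-in for a second pre-trained CNN
    return [min(image), len(image)]

def extractor_c(image):  # stand-in for a third pre-trained CNN
    return [sum(x * x for x in image)]

def ensemble_features(image):
    # Concatenate the top-performing extractors' outputs into one vector.
    return extractor_a(image) + extractor_b(image) + extractor_c(image)

def nearest_centroid_fit(samples, labels):
    centroids = {}
    for lab in set(labels):
        vecs = [ensemble_features(s) for s, l in zip(samples, labels) if l == lab]
        centroids[lab] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return centroids

def nearest_centroid_predict(centroids, image):
    f = ensemble_features(image)
    return min(centroids, key=lambda lab: math.dist(f, centroids[lab]))

train = [[1, 2, 3], [2, 2, 4], [9, 9, 8], [8, 9, 9]]
y = ["benign", "benign", "malignant", "malignant"]
model = nearest_centroid_fit(train, y)
print(nearest_centroid_predict(model, [9, 8, 9]))  # -> malignant
```

The structure mirrors the paper's framework: swapping the stubs for real CNN feature extractors and the centroid rule for an SVM leaves the concatenation step unchanged.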
Affiliation(s)
- Jaeyong Kang
- Department of Software, Korea National University of Transportation, Chungju 27469, Korea; (J.K.); (Z.U.)
- Zahid Ullah
- Department of Software, Korea National University of Transportation, Chungju 27469, Korea; (J.K.); (Z.U.)
- Jeonghwan Gwak
- Department of Software, Korea National University of Transportation, Chungju 27469, Korea; (J.K.); (Z.U.)
- Department of Biomedical Engineering, Korea National University of Transportation, Chungju 27469, Korea
- Department of AI Robotics Engineering, Korea National University of Transportation, Chungju 27469, Korea
- Department of IT Convergence (Brain Korea PLUS 21), Korea National University of Transportation, Chungju 27469, Korea
- Correspondence: Tel.: +82-43-841-5852
52. Yu W, Zhou H, Goldin JG, Wong WK, Kim GHJ. End-to-end domain knowledge-assisted automatic diagnosis of idiopathic pulmonary fibrosis (IPF) using computed tomography (CT). Med Phys 2021;48:2458-2467. [PMID: 33547645] [DOI: 10.1002/mp.14754] [Received: 03/09/2020] [Revised: 01/21/2021] [Accepted: 01/25/2021]
Abstract
PURPOSE Domain knowledge (DK) acquired from prior studies is important for medical diagnosis. This paper leverages population-level DK using an optimality design criterion to train a deep learning model in an end-to-end manner. The problem of interest is at the patient level: diagnosing idiopathic pulmonary fibrosis (IPF) among subjects with interstitial lung disease (ILD) using computed tomography (CT). IPF diagnosis is a complicated process involving multidisciplinary discussion among experts and is subject to interobserver variability, even for experienced radiologists. To this end, we propose a new statistical method to construct a time- and memory-efficient IPF diagnosis model using axial chest CT and DK, along with an optimality design criterion via a DK-enhanced loss function for deep learning. METHODS Four state-of-the-art two-dimensional convolutional neural network (2D-CNN) architectures (MobileNet, VGG16, ResNet-50, and DenseNet-121) and one baseline 2D-CNN are implemented to automatically diagnose IPF among ILD patients. Axial lung CT images are retrospectively acquired from 389 IPF patients and 700 non-IPF ILD patients in five multicenter clinical trials. To enrich the sample size and boost model performance, we sample 20 three-slice samples (triplets) from each CT scan, where the three slices are randomly selected from the top, middle, and bottom of both lungs, respectively. Model performance is evaluated using fivefold cross-validation, with each fold stratified to a fixed proportion of IPF vs. non-IPF. RESULTS Using the DK-enhanced loss function increases the performance of the baseline CNN model from 0.77 to 0.89 in terms of study-wise accuracy. The four other well-developed models reach satisfactory performance with an overall accuracy >0.95, but for them the benefit brought by the DK-enhanced loss function is not noticeable.
CONCLUSIONS We believe this is the first attempt that (a) uses population-level DK with an optimal design criterion to train deep learning-based diagnostic models in an end-to-end manner and (b) focuses on patient-level IPF diagnosis. Further evaluation of using population-level DK on prospective studies is warranted and is underway.
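The DK-enhanced loss is described here only at a high level. As a loose illustration of the general pattern, a task loss plus a weighted domain-knowledge penalty, one might write something like the following, where `dk_penalty`, `prior`, and `lam` are hypothetical stand-ins rather than the paper's actual optimality design criterion.

```python
# Illustrative pattern only: a composite loss that adds a domain-knowledge
# (DK) term to the usual cross-entropy. The paper's actual optimality design
# criterion is more involved; `dk_penalty` and `lam` are toy stand-ins.
import math

def cross_entropy(p_pred, y_true):
    # Binary cross-entropy for one sample; p_pred is the predicted P(IPF).
    eps = 1e-12
    return -(y_true * math.log(p_pred + eps)
             + (1 - y_true) * math.log(1 - p_pred + eps))

def dk_penalty(p_pred, prior):
    # Toy penalty: discourage predictions far from a population-level prior.
    return (p_pred - prior) ** 2

def dk_enhanced_loss(p_pred, y_true, prior, lam=0.1):
    return cross_entropy(p_pred, y_true) + lam * dk_penalty(p_pred, prior)

base = cross_entropy(0.9, 1)
total = dk_enhanced_loss(0.9, 1, prior=0.36, lam=0.1)
print(round(base, 4), round(total, 4))
```

The point of the pattern is that population-level knowledge enters training directly through the objective, rather than through post hoc filtering of predictions.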
Affiliation(s)
- Wenxi Yu
- Department of Biostatistics, University of California, Los Angeles, CA 90024, USA
- Hua Zhou
- Department of Biostatistics, University of California, Los Angeles, CA 90024, USA
- Jonathan G Goldin
- Department of Radiology, University of California, Los Angeles, CA 90024, USA
- Weng Kee Wong
- Department of Biostatistics, University of California, Los Angeles, CA 90024, USA
- Grace Hyun J Kim
- Department of Biostatistics, University of California, Los Angeles, CA 90024, USA
- Department of Radiology, University of California, Los Angeles, CA 90024, USA
53. Hou Y, Chen C, Zhang L, Zhou W, Lu Q, Jia X, Zhang J, Guo C, Qin Y, Zhu L, Zuo M, Xiao J, Huang L, Zhan W. Using Deep Neural Network to Diagnose Thyroid Nodules on Ultrasound in Patients With Hashimoto's Thyroiditis. Front Oncol 2021;11:614172. [PMID: 33796455] [PMCID: PMC8008116] [DOI: 10.3389/fonc.2021.614172] [Received: 10/05/2020] [Accepted: 01/28/2021]
Abstract
Objective The aim of this study was to develop a model using a Deep Neural Network (DNN) to diagnose thyroid nodules in patients with Hashimoto's Thyroiditis. Methods In this retrospective study, we included 2,932 patients with thyroid nodules who underwent thyroid ultrasonography in our hospital from January 2017 to August 2019. 80% of them were included in the training set and 20% in the test set. Nodules suspicious for malignancy underwent fine-needle aspiration (FNA) or surgery for pathological results. Two DNN models were trained to diagnose thyroid nodules, and we chose the one with the better performance. The features of the nodules as well as of the parenchyma around them were learned by the model to achieve better performance under diffused parenchyma. 10-fold cross-validation and an independent test set were used to evaluate the performance of the algorithm. The performance of the model was compared with that of three groups of radiologists with clinical experience of <5 years, 5–10 years, and >10 years, respectively. Results In total, 9,127 images were collected from 2,932 patients, with 7,301 images in the training set and 1,806 in the test set. 56% of the patients enrolled had Hashimoto's Thyroiditis. The model achieved an AUC of 0.924 for distinguishing malignant from benign nodules in the test set. It showed similar performance under diffused and normal thyroid parenchyma, with a sensitivity of 0.881 versus 0.871 (p = 0.938) and a specificity of 0.846 versus 0.822 (p = 0.178). In patients with HT, the model achieved an AUC of 0.924 for differentiating malignant from benign nodules, which was significantly higher than that of the three groups of radiologists (AUC = 0.824, 0.857, and 0.863, respectively; p < 0.05). Conclusion The model showed high performance in diagnosing thyroid nodules under both normal and diffused parenchyma. In patients with Hashimoto's Thyroiditis, the model outperformed radiologists with various years of experience.
Affiliation(s)
- Yiqing Hou
- Department of Ultrasound Diagnosis, Ruijin Hospital Affiliated to Shanghai Jiaotong University, Shanghai, China
- Chao Chen
- Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
- Lu Zhang
- Department of Ultrasound Diagnosis, Ruijin Hospital Affiliated to Shanghai Jiaotong University, Shanghai, China
- Wei Zhou
- Department of Ultrasound Diagnosis, Ruijin Hospital Affiliated to Shanghai Jiaotong University, Shanghai, China
- Qinyang Lu
- Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
- Xiaohong Jia
- Department of Ultrasound Diagnosis, Ruijin Hospital Affiliated to Shanghai Jiaotong University, Shanghai, China
- Jingwen Zhang
- Department of Ultrasound Diagnosis, Ruijin Hospital Affiliated to Shanghai Jiaotong University, Shanghai, China
- Cen Guo
- Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
- Yuxiang Qin
- Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
- Lifeng Zhu
- Computer Centre, Ruijin Hospital Affiliated to Shanghai Jiaotong University, Shanghai, China
- Ming Zuo
- Computer Centre, Ruijin Hospital Affiliated to Shanghai Jiaotong University, Shanghai, China
- Jing Xiao
- Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
- Lingyun Huang
- Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
- Weiwei Zhan
- Department of Ultrasound Diagnosis, Ruijin Hospital Affiliated to Shanghai Jiaotong University, Shanghai, China
54. Elmuogy S, Hikal NA, Hassan E. An efficient technique for CT scan images classification of COVID-19. Journal of Intelligent & Fuzzy Systems 2021. [DOI: 10.3233/jifs-201985]
Abstract
Nowadays, Coronavirus disease (COVID-19) is considered one of the most critical pandemics on Earth, owing to its ability to spread rapidly among humans as well as animals. COVID-19 is expected to spread around the world: roughly 70% of the Earth's population might be infected with COVID-19 in the coming years. Therefore, an accurate and efficient diagnostic tool is highly required, which is the main objective of our study. Manual classification was mainly used to detect different diseases, but it takes too much time, in addition to carrying the probability of human error. Automatic image classification reduces doctors' diagnostic time, which could save human lives. We propose an automatic classification architecture based on a deep neural network, called the Worried Deep Neural Network (WDNN) model, with transfer learning. Comparative analysis reveals that the proposed WDNN model outperforms three pre-trained models (InceptionV3, ResNet50, and VGG19) in terms of various performance metrics. Owing to the shortage of COVID-19 data, data augmentation was used to increase the number of images in the positive class, and normalization was then used to make all images the same size. Experimentation was done on a COVID-19 dataset collected from different cases, with 2,623 images in total (1,573 training, 524 validation, 524 test). Our proposed model achieved 99.046%, 98.684%, 99.119%, and 98.90% in terms of accuracy, precision, recall, and F-score, respectively. The results are compared with both traditional machine learning methods and those using Convolutional Neural Networks (CNNs), and they demonstrate the ability of our classification model to serve as an alternative to the current diagnostic tools.
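The augmentation and normalization steps mentioned in the abstract can be illustrated with a stdlib-only sketch; plain 2-D lists stand in for CT images, and the flip/resize operations below are illustrative choices rather than the paper's exact pipeline.

```python
# Stdlib-only sketch of two common preprocessing steps the abstract names:
# (1) augmenting a small positive class with flipped copies, and
# (2) "normalizing" every image to a common size (nearest-neighbour resize).
# Images are plain 2-D lists here; real pipelines use NumPy/PIL instead.

def hflip(img):
    return [row[::-1] for row in img]

def vflip(img):
    return img[::-1]

def augment(images):
    # Each source image yields itself plus two flipped variants.
    out = []
    for img in images:
        out.extend([img, hflip(img), vflip(img)])
    return out

def resize(img, h, w):
    # Nearest-neighbour resampling to (h, w).
    src_h, src_w = len(img), len(img[0])
    return [[img[r * src_h // h][c * src_w // w] for c in range(w)]
            for r in range(h)]

covid_positive = [[[1, 2], [3, 4]]]      # one tiny 2x2 "scan"
augmented = augment(covid_positive)      # 3 images now
normalized = [resize(img, 4, 4) for img in augmented]
print(len(augmented), len(normalized[0]), len(normalized[0][0]))  # 3 4 4
```

In practice the flips would be combined with rotations, shifts, and intensity jitter, but the structure, expand the positive class, then map everything to one input shape, is the same.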
Affiliation(s)
- Samir Elmuogy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Noha A. Hikal
- Department of Information Technology, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Esraa Hassan
- Department of Machine Learning and Information Retrieval, Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh, Egypt
55. Goyal S. An Overview of Current Trends, Techniques, Prospects, and Pitfalls of Artificial Intelligence in Breast Imaging. Reports in Medical Imaging 2021. [DOI: 10.2147/rmi.s295205]
56. Sun L, Jiang Z, Chang Y, Ren L. Building a patient-specific model using transfer learning for four-dimensional cone beam computed tomography augmentation. Quant Imaging Med Surg 2021;11:540-555. [PMID: 33532255] [DOI: 10.21037/qims-20-655]
Abstract
Background We previously developed a deep learning model to augment the quality of four-dimensional (4D) cone-beam computed tomography (CBCT). However, the model was trained using group data and thus was not optimized for individual patients. Consequently, the augmented images could not depict small anatomical structures, such as lung vessels. Methods In the present study, the transfer learning method was used to further improve the performance of the deep learning model for individual patients. Specifically, a U-Net-based model was first trained to augment 4D-CBCT using group data. Next, transfer learning was used to fine-tune the model based on a specific patient's available data to improve its performance for that individual patient. Two types of transfer learning were studied: layer-freezing and whole-network fine-tuning. The performance of the transfer learning model was evaluated by comparing the augmented CBCT images with the ground-truth images, both qualitatively and quantitatively, using the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR). The results were also compared to those obtained using only the U-Net method. Results Qualitatively, the patient-specific model recovered more detailed information in the lung area than the group-based U-Net model. Quantitatively, the SSIM improved from 0.924 to 0.958, and the PSNR improved from 33.77 to 38.42 for the whole volumetric images for the group-based U-Net and patient-specific models, respectively. The layer-freezing method was found to be more efficient than the whole-network fine-tuning method, with a training time as short as 10 minutes. The effect of augmentation by transfer learning increased as the number of projections used for CBCT reconstruction decreased.
Conclusions Overall, the patient-specific model optimized by transfer learning was efficient and effective at improving the image quality of augmented undersampled three-dimensional (3D)- and 4D-CBCT images, and could be extremely valuable for applications in image-guided radiation therapy.
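The layer-freezing variant can be illustrated with a toy two-parameter model: the "group-trained" early weight is held fixed while only the later weight is fine-tuned on patient data. This is a conceptual stdlib-only sketch, not the paper's U-Net.

```python
# Stdlib-only sketch of the layer-freezing idea: keep early ("group-trained")
# parameters fixed and fine-tune only the later ones on the patient's data.
# The model here is a toy two-weight linear chain y = w2 * (w1 * x).

def forward(x, w1, w2):
    return w2 * (w1 * x)

def fine_tune(data, w1, w2, freeze_w1=True, lr=0.01, epochs=200):
    for _ in range(epochs):
        for x, y in data:
            err = forward(x, w1, w2) - y
            # Gradients of 0.5 * err**2 with respect to each weight.
            g1 = err * w2 * x
            g2 = err * w1 * x
            if not freeze_w1:        # layer-freezing: skip frozen weights
                w1 -= lr * g1
            w2 -= lr * g2
    return w1, w2

# "Group" pretraining left us with w1 = w2 = 1.0; this patient's data
# follows y = 2x, so only the unfrozen weight needs to adapt.
patient_data = [(1.0, 2.0), (2.0, 4.0)]
w1, w2 = fine_tune(patient_data, 1.0, 1.0, freeze_w1=True)
print(round(w1, 3), round(w2, 3))  # w1 stays 1.0; w2 approaches 2.0
```

Freezing early layers cuts the number of updated parameters, which is exactly why the paper finds it faster to train than whole-network fine-tuning.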
Affiliation(s)
- Leshan Sun
- Department of Radiation Oncology, Duke University Medical Center (DUMC), Durham, North Carolina, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, China
- Zhuoran Jiang
- Department of Radiation Oncology, Duke University Medical Center (DUMC), Durham, North Carolina, USA
- Yushi Chang
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Lei Ren
- Department of Radiation Oncology, Duke University Medical Center (DUMC), Durham, North Carolina, USA
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
57. Masquelin AH, Cheney N, Kinsey CM, Bates JHT. Wavelet decomposition facilitates training on small datasets for medical image classification by deep learning. Histochem Cell Biol 2021;155:309-317. [PMID: 33502624] [PMCID: PMC7957953] [DOI: 10.1007/s00418-020-01961-y] [Accepted: 12/30/2020]
Abstract
The adoption of low-dose computed tomography (LDCT) as the standard of care for lung cancer screening has resulted in decreased mortality rates in high-risk populations while increasing the false-positive rate. Convolutional neural networks (CNNs) provide an ideal opportunity to improve malignant nodule detection; however, due to the lack of large adjudicated medical datasets, these networks suffer from poor generalizability and overfitting. Using computed tomography images of the thorax from the National Lung Screening Trial (NLST), we compared discrete wavelet transforms (DWTs) against the convolutional layers found in a CNN in order to evaluate their ability to classify suspicious lung nodules as either malignant or benign. We explored the use of the DWT as an alternative to the convolutional operations within CNNs in order to decrease the number of parameters to be estimated during training and reduce the risk of overfitting. We found that the multi-level DWT performed better than convolutional layers when multiple kernel resolutions were utilized, yielding areas under the receiver-operating curve (AUC) of 94% and 92%, respectively. Furthermore, we found that the multi-level DWT reduced the number of network parameters requiring evaluation when compared to a CNN and had a substantially faster convergence rate. We conclude that utilizing multi-level DWT decomposition in place of early convolutional layers within a deep network may improve image classification in data-limited domains.
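The key property of the DWT that this abstract relies on, a fixed, parameter-free filter bank in place of learned convolutions, can be seen in a stdlib-only multi-level 1-D Haar transform (the paper works on 2-D images; the 1-D case below is an illustrative simplification).

```python
# Stdlib-only sketch of a multi-level 1-D Haar discrete wavelet transform
# (DWT). Unlike a convolutional layer, the filter pair is fixed, so this
# stage contributes no trainable parameters, which is the property the
# paper exploits to reduce overfitting on small datasets.
import math

def haar_step(signal):
    # One decomposition level: approximation + detail coefficients.
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def multilevel_dwt(signal, levels):
    # Repeatedly decompose the approximation band, keeping each detail band,
    # which yields features at multiple kernel resolutions.
    details = []
    approx = signal
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details

approx, details = multilevel_dwt([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0], 2)
print(approx)                      # 2 coarse coefficients after 2 levels
print([len(d) for d in details])   # detail-band lengths per level
```

Feeding these fixed multi-resolution coefficients into the later, still-trainable layers is the substitution the paper evaluates against early convolutional layers.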
Affiliation(s)
- Nicholas Cheney
- Department of Computer Science, University of Vermont, Burlington, VT, USA
- C Matthew Kinsey
- Pulmonary and Critical Care, Department of Medicine, University of Vermont, Burlington, VT, USA
- Jason H T Bates
- University of Vermont, Burlington, VT, USA
- Department of Medicine, University of Vermont, 149 Beaumont Avenue, Burlington, VT 05405, USA
58. De Bois M, El Yacoubi MA, Ammi M. Adversarial multi-source transfer learning in healthcare: Application to glucose prediction for diabetic people. Computer Methods and Programs in Biomedicine 2021;199:105874. [PMID: 33333366] [DOI: 10.1016/j.cmpb.2020.105874] [Received: 10/13/2020] [Accepted: 11/19/2020]
Abstract
BACKGROUND AND OBJECTIVES Deep learning has yet to revolutionize general practices in healthcare, despite promising results for some specific tasks. This is partly due to data being available in insufficient quantities, which hurts the training of the models. To address this issue, data from multiple health actors or patients could be combined by capitalizing on their heterogeneity through the use of transfer learning. METHODS To improve the quality of the transfer between multiple sources of data, we propose a multi-source adversarial transfer learning framework that enables the learning of a feature representation that is similar across the sources, and thus more general and more easily transferable. We apply this idea to glucose forecasting for diabetic people using a fully convolutional neural network. The evaluation is done by exploring various transfer scenarios with three datasets characterized by their high inter- and intra-variability. RESULTS While transferring knowledge is beneficial in general, we show that the statistical and clinical accuracies can be further improved by using the adversarial training methodology, surpassing the current state-of-the-art results. In particular, it shines when using data from different datasets, or when there is too little data in an intra-dataset situation. To understand the behavior of the models, we analyze the learnt feature representations and propose a new metric in this regard. Contrary to a standard transfer, the adversarial transfer does not discriminate between the patients and datasets, helping the learning of a more general feature representation. CONCLUSION The adversarial training framework improves the learning of a general feature representation in a multi-source environment, enhancing the knowledge transfer to an unseen target. The proposed method can help improve the efficiency of data shared by different health actors in the training of deep models.
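The adversarial idea, training the feature extractor against a source discriminator, can be caricatured numerically: the combined objective rewards forecasting accuracy while penalizing source identifiability. Everything below (the loss forms, `lam`) is a toy stand-in for the paper's fully convolutional network, not its implementation.

```python
# Conceptual sketch of an adversarial multi-source objective: minimize the
# forecasting loss while *maximizing* the loss of a discriminator that tries
# to identify which source (patient/dataset) the features came from, so the
# learned features become source-indistinguishable.
import math

def mse(pred, target):
    return (pred - target) ** 2

def discriminator_ce(p_source, true_source):
    # Cross-entropy of a two-source classifier; p_source is P(source == 1).
    eps = 1e-12
    p = p_source if true_source == 1 else 1 - p_source
    return -math.log(p + eps)

def adversarial_objective(pred, target, p_source, true_source, lam=0.5):
    # The discriminator term enters with a negative sign: confusing the
    # discriminator (large cross-entropy) lowers this objective.
    return mse(pred, target) - lam * discriminator_ce(p_source, true_source)

# Same forecast, two discriminator states: one confident (p = 0.99),
# one maximally confused (p = 0.5).
confident = adversarial_objective(7.2, 7.0, p_source=0.99, true_source=1)
confused = adversarial_objective(7.2, 7.0, p_source=0.5, true_source=1)
print(confident > confused)  # True: confused discriminator => lower objective
```

In a real implementation the opposing gradients are typically realized with a gradient-reversal layer or alternating updates; the sketch only shows why indistinguishable features score better.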
Affiliation(s)
- Maxime De Bois
- CNRS-LIMSI and the Université Paris-Saclay, Orsay, France
- Mounîm A El Yacoubi
- Samovar, CNRS, Télécom SudParis, Institut Polytechnique de Paris, Évry, France
59. Inés A, Domínguez C, Heras J, Mata E, Pascual V. Biomedical image classification made easier thanks to transfer and semi-supervised learning. Computer Methods and Programs in Biomedicine 2021;198:105782. [PMID: 33065493] [DOI: 10.1016/j.cmpb.2020.105782] [Received: 02/27/2020] [Accepted: 09/29/2020]
Abstract
BACKGROUND AND OBJECTIVES Deep learning techniques are the state-of-the-art approach to solving image classification problems in biomedicine; however, they require the acquisition and annotation of a considerable volume of images. In addition, using deep learning libraries and tuning the hyperparameters of the networks trained with them can be challenging for many users. These drawbacks prevent the adoption of these techniques outside the machine-learning community. In this work, we present an Automated Machine Learning (AutoML) method to deal with these problems. METHODS Our AutoML method combines transfer learning with a new semi-supervised learning procedure to train models when few annotated images are available. To facilitate the dissemination of our method, we have implemented it as an open-source tool called ATLASS. Finally, we have evaluated our method on two benchmarks of biomedical image classification datasets. RESULTS Our method has been thoroughly tested with both small and partially annotated biomedical datasets; it outperforms the existing AutoML tools, in terms of both speed and accuracy, when working with small datasets, and it can improve the accuracy of models by up to 10% when working with partially annotated datasets. CONCLUSIONS The work presented in this paper allows the use of deep learning techniques to solve an image classification problem with few resources; namely, it is possible to train deep models with small, partially annotated datasets of images. In addition, we have shown that our AutoML method outperforms other AutoML tools in terms of both accuracy and speed when working with small datasets.
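The semi-supervised procedure is not detailed in the abstract; a common instantiation is pseudo-labelling, sketched below with a 1-D nearest-centroid "model" as a deliberately tiny stand-in for ATLASS's transfer-learned networks. The confidence proxy and threshold are assumptions for illustration, not the tool's actual algorithm.

```python
# Stdlib-only sketch of pseudo-labelling, one flavour of semi-supervised
# learning: fit on the labelled pool, label only the unlabelled samples the
# model is confident about, then refit on the enlarged pool.

def fit(samples, labels):
    centroids = {}
    for lab in set(labels):
        pts = [s for s, l in zip(samples, labels) if l == lab]
        centroids[lab] = sum(pts) / len(pts)
    return centroids

def predict_with_confidence(centroids, x):
    dists = sorted((abs(x - c), lab) for lab, c in centroids.items())
    best, runner_up = dists[0], dists[1]
    margin = runner_up[0] - best[0]   # crude confidence proxy
    return best[1], margin

def pseudo_label(labelled, labels, unlabelled, margin_threshold=2.0):
    model = fit(labelled, labels)
    new_x, new_y = list(labelled), list(labels)
    for x in unlabelled:
        lab, margin = predict_with_confidence(model, x)
        if margin >= margin_threshold:    # keep only confident pseudo-labels
            new_x.append(x)
            new_y.append(lab)
    return fit(new_x, new_y)

model = pseudo_label([0.0, 1.0, 10.0, 11.0], ["a", "a", "b", "b"],
                     unlabelled=[0.5, 9.5, 5.2])
print(sorted(model))  # ['a', 'b']
```

Here 0.5 and 9.5 are confidently absorbed into classes "a" and "b", while the ambiguous 5.2 is left out, which is the mechanism that lets partially annotated datasets contribute without polluting training.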
Affiliation(s)
- A Inés
- Department of Mathematics and Computer Science of University of La Rioja, Centro Científico Tecnológico Logroño E-26006, La Rioja, Spain
- C Domínguez
- Department of Mathematics and Computer Science of University of La Rioja, Centro Científico Tecnológico Logroño E-26006, La Rioja, Spain
- J Heras
- Department of Mathematics and Computer Science of University of La Rioja, Centro Científico Tecnológico Logroño E-26006, La Rioja, Spain
- E Mata
- Department of Mathematics and Computer Science of University of La Rioja, Centro Científico Tecnológico Logroño E-26006, La Rioja, Spain
- V Pascual
- Department of Mathematics and Computer Science of University of La Rioja, Centro Científico Tecnológico Logroño E-26006, La Rioja, Spain
60. A transfer learning approach to drug resistance classification in mixed HIV dataset. Informatics in Medicine Unlocked 2021. [DOI: 10.1016/j.imu.2021.100568]
61. Drukker K, Yan P, Sibley A, Wang G. Biomedical imaging and analysis through deep learning. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00004-1]
62.
Abstract
Machine learning has been heavily researched and is widely used in many disciplines. However, achieving high accuracy requires a large amount of data that is sometimes difficult, expensive, or impractical to obtain. Integrating human knowledge into machine learning can significantly reduce the data requirement, increase the reliability and robustness of machine learning, and build explainable machine learning systems. This allows leveraging both the vast amount of human knowledge and the capability of machine learning to achieve functions and performance not available before, and it will facilitate the interaction between human beings and machine learning systems, making machine learning decisions understandable to humans. This paper gives an overview of the knowledge and knowledge representations that can be integrated into machine learning, and of the methodology for doing so. We cover the fundamentals, current status, and recent progress of the methods, with a focus on popular and new topics. Perspectives on future directions are also discussed.
Affiliation(s)
- Changyu Deng
- Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Xunbi Ji
- Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Colton Rainey
- Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Jianyu Zhang
- Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Wei Lu
- Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Department of Materials Science & Engineering, University of Michigan, Ann Arbor, MI 48109, USA
63. Yan T, Wong PK, Ren H, Wang H, Wang J, Li Y. Automatic distinction between COVID-19 and common pneumonia using multi-scale convolutional neural network on chest CT scans. Chaos Solitons Fractals 2020;140:110153. [PMID: 32834641] [PMCID: PMC7381895] [DOI: 10.1016/j.chaos.2020.110153] [Received: 06/26/2020] [Accepted: 07/23/2020]
Abstract
The COVID-19 pneumonia has been a global threat since it emerged in early December 2019. Driven by the desire to develop a computer-aided system for the rapid diagnosis of COVID-19 to assist radiologists and clinicians in combating this pandemic, we retrospectively collected 206 patients with positive reverse-transcription polymerase chain reaction (RT-PCR) results for COVID-19 and their 416 chest computed tomography (CT) scans with abnormal findings from two hospitals; 412 non-COVID-19 pneumonia patients and their 412 chest CT scans with clear signs of pneumonia were also retrospectively selected from participating hospitals. Based on these CT scans, we design an artificial intelligence (AI) system that uses a multi-scale convolutional neural network (MSCNN) and evaluate its performance at both the slice level and the scan level. Experimental results show that the proposed AI system has promising diagnostic performance in detecting COVID-19 and differentiating it from other common pneumonias with a limited amount of training data. It thus has great potential to assist radiologists and physicians in performing a quick diagnosis and to mitigate their heavy workload, especially when the health system is overloaded. The data is publicly available for further research at https://data.mendeley.com/datasets/3y55vgckg6/1.
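The multi-scale ingredient of an MSCNN, parallel filters of different kernel sizes whose outputs are concatenated, can be sketched on 1-D signals with fixed averaging kernels standing in for the learned 2-D filters of the real model.

```python
# Stdlib-only sketch of the multi-scale idea behind an MSCNN: run filters
# of several kernel sizes over the same input in parallel and concatenate
# the resulting feature maps, so features at several spatial scales are
# available to the rest of the network at once.

def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def multi_scale_features(signal, kernel_sizes=(2, 3, 5)):
    feats = []
    for k in kernel_sizes:
        kernel = [1.0 / k] * k          # fixed averaging filter at this scale
        feats.extend(conv1d(signal, kernel))
    return feats

signal = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
feats = multi_scale_features(signal)
# Feature-map lengths are 5, 4, and 2 for kernel sizes 2, 3, and 5.
print(len(feats))  # 11
```

In the 2-D case the same pattern applies per convolutional block, with the kernels learned rather than fixed.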
Affiliation(s)
- Tao Yan
- School of Mechanical Engineering, Hubei University of Arts and Science, Xiangyang 441053, China
- Department of Electromechanical Engineering, University of Macau, Taipa 999078, Macau SAR, China
- Pak Kin Wong
- Department of Electromechanical Engineering, University of Macau, Taipa 999078, Macau SAR, China
- Hao Ren
- Department of Radiology, Xiangyang Central Hospital, Affiliated Hospital of Hubei University of Arts and Science, Xiangyang 441021, China
- Huaqiao Wang
- Department of Radiology, Xiangyang No.1 People's Hospital, Hubei University of Medicine, Xiangyang 441000, China
- Jiangtao Wang
- Department of Radiology, Xiangyang Central Hospital, Affiliated Hospital of Hubei University of Arts and Science, Xiangyang 441021, China
- Yang Li
- Department of Radiology, Xiangyang No.1 People's Hospital, Hubei University of Medicine, Xiangyang 441000, China
64.

65. Feng D, Chen X, Zhou Z, Liu H, Wang Y, Bai L, Zhang S, Mou X. A Preliminary Study of Predicting Effectiveness of Anti-VEGF Injection Using OCT Images Based on Deep Learning. Annu Int Conf IEEE Eng Med Biol Soc 2020;2020:5428-5431. [PMID: 33019208] [DOI: 10.1109/embc44109.2020.9176743]
Abstract
Deep-learning-based radiomics has made great progress in areas such as CNN-based diagnosis and U-Net-based segmentation. However, the prediction of drug effectiveness based on deep learning has been studied less. Choroidal neovascularization (CNV) and cystoid macular edema (CME) are diseases that often lead to a sudden onset but progressive decline in central vision, and the curative treatment using anti-vascular endothelial growth factor (anti-VEGF) agents may not be effective for some patients. Therefore, predicting the effectiveness of anti-VEGF for individual patients is important. With the development of Convolutional Neural Networks (CNNs) coupled with transfer learning, medical image classification has achieved great success. We used a method based on transfer learning to automatically predict the effectiveness of anti-VEGF from optical coherence tomography (OCT) images acquired before medication. The method consists of image preprocessing, data augmentation, and CNN-based transfer learning, and the prediction AUC can be over 0.8. We also compared the use of lesion-region images against full OCT images on this task; experiments show that using the full OCT images yields better performance. Different deep neural networks, including AlexNet, VGG-16, GoogLeNet, and ResNet-50, were compared, and a modified ResNet-50 proved the most suitable for predicting the effectiveness of anti-VEGF. Clinical Relevance - This prediction model can estimate whether anti-VEGF will be effective for patients with CNV or CME, which can help ophthalmologists make treatment plans.
66. Draelos RL, Dov D, Mazurowski MA, Lo JY, Henao R, Rubin GD, Carin L. Machine-learning-based multiple abnormality prediction with large-scale chest computed tomography volumes. Med Image Anal 2020;67:101857. [PMID: 33129142] [DOI: 10.1016/j.media.2020.101857] [Received: 02/17/2020] [Revised: 09/15/2020] [Accepted: 09/18/2020]
Abstract
Machine learning models for radiology benefit from large-scale data sets with high quality labels for abnormalities. We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients. This is the largest multiply-annotated volumetric medical imaging data set reported. To annotate this data set, we developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports with an average F-score of 0.976 (min 0.941, max 1.0). We also developed a model for multi-organ, multi-disease classification of chest CT volumes that uses a deep convolutional neural network (CNN). This model reached a classification performance of AUROC >0.90 for 18 abnormalities, with an average AUROC of 0.773 for all 83 abnormalities, demonstrating the feasibility of learning from unfiltered whole volume CT data. We show that training on more labels improves performance significantly: for a subset of 9 labels - nodule, opacity, atelectasis, pleural effusion, consolidation, mass, pericardial effusion, cardiomegaly, and pneumothorax - the model's average AUROC increased by 10% when the number of training labels was increased from 9 to all 83. All code for volume preprocessing, automated label extraction, and the volume abnormality prediction model is publicly available. The 36,316 CT volumes and labels will also be made publicly available pending institutional approval.
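The rule-based label extraction described above can be illustrated with a minimal keyword-plus-negation matcher. The vocabulary and negation cues here are illustrative assumptions, not the paper's actual rules, which covered 83 abnormality labels.

```python
import re

# Hypothetical term lists; the real system used a much larger vocabulary.
TERMS = {
    "nodule": ["nodule", "nodular"],
    "pleural_effusion": ["pleural effusion"],
    "cardiomegaly": ["cardiomegaly", "enlarged heart"],
}
NEGATIONS = ["no ", "without ", "negative for "]

def extract_labels(report):
    """Return a {label: bool} dict for one report, with simple negation handling."""
    text = report.lower()
    labels = {}
    for label, synonyms in TERMS.items():
        found = False
        for syn in synonyms:
            for m in re.finditer(re.escape(syn), text):
                # Look back a short window for a negation cue before the term.
                window = text[max(0, m.start() - 25):m.start()]
                if not any(neg in window for neg in NEGATIONS):
                    found = True
        labels[label] = found
    return labels

labels = extract_labels(
    "Stable 4 mm nodule in the right upper lobe. No pleural effusion. "
    "Heart size normal, negative for cardiomegaly.")
```

Running on the sample report above, only `nodule` comes back positive; the negated findings are suppressed.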
Collapse
Affiliation(s)
- Rachel Lea Draelos
- Computer Science Department, Duke University, LSRC Building D101, 308 Research Drive, Duke Box 90129, Durham, North Carolina 27708-0129, United States of America; School of Medicine, Duke University, DUMC 3710, Durham, North Carolina 27710, United States of America.
| | - David Dov
- Electrical and Computer Engineering Department, Edmund T. Pratt Jr. School of Engineering, Duke University, Box 90291, Durham, North Carolina 27708, United States of America
| | - Maciej A Mazurowski
- Electrical and Computer Engineering Department, Edmund T. Pratt Jr. School of Engineering, Duke University, Box 90291, Durham, North Carolina 27708, United States of America; Radiology Department, Duke University, Box 3808 DUMC, Durham, North Carolina 27710, United States of America; Biostatistics and Bioinformatics Department, Duke University, DUMC 2424 Erwin Road, Suite 1102 Hock Plaza, Box 2721 Durham, North Carolina 27710, United States of America
| | - Joseph Y Lo
- Electrical and Computer Engineering Department, Edmund T. Pratt Jr. School of Engineering, Duke University, Box 90291, Durham, North Carolina 27708, United States of America; Radiology Department, Duke University, Box 3808 DUMC, Durham, North Carolina 27710, United States of America; Biomedical Engineering Department, Edmund T. Pratt Jr. School of Engineering, Duke University, Room 1427, Fitzpatrick Center (FCIEMAS), 101 Science Drive, Campus Box 90281, Durham, North Carolina 27708-0281, United States of America
| | - Ricardo Henao
- Electrical and Computer Engineering Department, Edmund T. Pratt Jr. School of Engineering, Duke University, Box 90291, Durham, North Carolina 27708, United States of America; Biostatistics and Bioinformatics Department, Duke University, DUMC 2424 Erwin Road, Suite 1102 Hock Plaza, Box 2721 Durham, North Carolina 27710, United States of America
| | - Geoffrey D Rubin
- Radiology Department, Duke University, Box 3808 DUMC, Durham, North Carolina 27710, United States of America
| | - Lawrence Carin
- Computer Science Department, Duke University, LSRC Building D101, 308 Research Drive, Duke Box 90129, Durham, North Carolina 27708-0129, United States of America; Electrical and Computer Engineering Department, Edmund T. Pratt Jr. School of Engineering, Duke University, Box 90291, Durham, North Carolina 27708, United States of America; Statistical Science Department, Duke University, Box 90251, Durham, North Carolina 27708-0251, United States of America
| |
Collapse
|
67
|
Yu J, Deng Y, Liu T, Zhou J, Jia X, Xiao T, Zhou S, Li J, Guo Y, Wang Y, Zhou J, Chang C. Lymph node metastasis prediction of papillary thyroid carcinoma based on transfer learning radiomics. Nat Commun 2020; 11:4807. [PMID: 32968067 PMCID: PMC7511309 DOI: 10.1038/s41467-020-18497-3] [Citation(s) in RCA: 169] [Impact Index Per Article: 33.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2020] [Accepted: 08/24/2020] [Indexed: 12/24/2022] Open
Abstract
Non-invasive assessment of the risk of lymph node metastasis (LNM) in patients with papillary thyroid carcinoma (PTC) is of great value for treatment option selection. The purpose of this paper is to develop a transfer learning radiomics (TLR) model for preoperative prediction of LNM in PTC patients in a multicenter, cross-machine, multi-operator scenario. Here we report that the TLR model produces a stable LNM prediction. In cross-validation and independent testing of the main cohort, split according to diagnostic time, machine, and operator, the TLR model achieves an average area under the curve (AUC) of 0.90. In two other independent cohorts, TLR also achieves an AUC of 0.93, and this performance is statistically better than that of three competing methods according to the DeLong test. Decision curve analysis also shows that the TLR model brings more benefit to PTC patients than the other methods.
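The AUC figures reported above can be computed directly from prediction scores as the probability that a randomly chosen positive case outranks a randomly chosen negative one (the Mann-Whitney U statistic). A small self-contained sketch, using made-up scores:

```python
def auc(scores, labels):
    """Empirical AUC: P(score_pos > score_neg), with ties counted as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 3 metastatic (1) and 2 non-metastatic (0) cases.
value = auc([0.9, 0.8, 0.7, 0.3, 0.2], [1, 1, 0, 1, 0])
```

Here one of the six positive/negative pairs is mis-ranked, giving an AUC of 5/6 ≈ 0.833.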
Collapse
Affiliation(s)
- Jinhua Yu
- Department of Electronic Engineering, Fudan University, Shanghai, China.,Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
| | - Yinhui Deng
- Department of Electronic Engineering, Fudan University, Shanghai, China.,MingGe Research, Fudan University Science Park, Shanghai, China
| | - Tongtong Liu
- Department of Electronic Engineering, Fudan University, Shanghai, China
| | - Jin Zhou
- Fudan University Shanghai Cancer Center, Shanghai, China
| | - Xiaohong Jia
- Ruijin Hospital Affiliated to Shanghai Jiaotong University, Shanghai, China
| | - Tianlei Xiao
- Department of Electronic Engineering, Fudan University, Shanghai, China
| | - Shichong Zhou
- Fudan University Shanghai Cancer Center, Shanghai, China
| | - Jiawei Li
- Fudan University Shanghai Cancer Center, Shanghai, China
| | - Yi Guo
- Department of Electronic Engineering, Fudan University, Shanghai, China
| | - Yuanyuan Wang
- Department of Electronic Engineering, Fudan University, Shanghai, China. .,Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China.
| | - Jianqiao Zhou
- Ruijin Hospital Affiliated to Shanghai Jiaotong University, Shanghai, China.
| | - Cai Chang
- Fudan University Shanghai Cancer Center, Shanghai, China.
| |
Collapse
|
68
|
Deep Learning Signature Based on Staging CT for Preoperative Prediction of Sentinel Lymph Node Metastasis in Breast Cancer. Acad Radiol 2020; 27:1226-1233. [PMID: 31818648 DOI: 10.1016/j.acra.2019.11.007] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2019] [Revised: 11/10/2019] [Accepted: 11/13/2019] [Indexed: 12/14/2022]
Abstract
RATIONALE AND OBJECTIVES To evaluate the noninvasive predictive performance of deep learning features based on staging CT for sentinel lymph node (SLN) metastasis in breast cancer. MATERIALS AND METHODS A total of 348 breast cancer patients were enrolled in this study, with their SLN metastases pathologically confirmed. All patients received preoperative contrast-enhanced CT examinations, and the CT images were segmented and analyzed to extract deep features. After feature selection, a deep learning signature was built with the selected key features. The performance of the deep learning signature was assessed with respect to discrimination, calibration, and clinical usefulness in the primary cohort (184 patients from January 2016 to March 2017) and then validated in an independent validation cohort (164 patients from April 2017 to December 2018). RESULTS Ten deep learning features were automatically selected in the primary cohort to establish the deep learning signature of SLN metastasis. The deep learning signature shows favorable discriminative ability, with an area under the curve of 0.801 (95% confidence interval: 0.736-0.867) in the primary cohort and 0.817 (95% confidence interval: 0.751-0.884) in the validation cohort. To further distinguish the number of metastatic SLNs (1-2 vs. more than two), another deep learning signature was constructed and also showed moderate performance (area under the curve 0.770). CONCLUSION We developed deep learning signatures for preoperative prediction of SLN metastasis status and number (1-2 vs. more than two metastatic SLNs) in patients with breast cancer. The deep learning signature may provide a noninvasive approach to assist clinicians in predicting SLN metastasis in patients with breast cancer.
Collapse
|
69
|
He C, Wang J, Yin Y, Li Z. Automated classification of coronary plaque calcification in OCT pullbacks with 3D deep neural networks. JOURNAL OF BIOMEDICAL OPTICS 2020; 25:JBO-200088R. [PMID: 32914606 PMCID: PMC7481437 DOI: 10.1117/1.jbo.25.9.095003] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2020] [Accepted: 08/24/2020] [Indexed: 05/07/2023]
Abstract
SIGNIFICANCE Detection and characterization of coronary atherosclerotic plaques often requires reviewing a large number of optical coherence tomography (OCT) imaging slices to make a clinical decision. However, it is a challenge to manually review all the slices while considering the interrelationship between adjacent slices. APPROACH Inspired by the recent success of deep convolutional networks in the classification of medical images, we propose a ResNet-3D network for classification of coronary plaque calcification in OCT pullbacks. The ResNet-3D network was initialized from a trained ResNet-50 network, with the three-dimensional convolution filters built from the trained filters by either zero padding or non-zero padding. To retrain ResNet-50, we used a dataset of ∼4860 OCT images derived from 18 entire pullbacks from different patients. In addition, we investigated a two-phase training method to address the data imbalance. To improve performance, we evaluated different input sizes for the ResNet-3D network (3, 5, and 7 OCT slices) and integrated all ResNet-3D results by majority voting. RESULTS A comparative analysis proved the effectiveness of the proposed ResNet-3D networks against a ResNet-2D network on the OCT dataset. The classification performance (F1-score = 94% for non-zero padding and F1-score = 96% for zero padding) demonstrates the potential of convolutional neural networks (CNNs) in classifying plaque calcification. CONCLUSIONS This work may provide a foundation for extending the CNN to voxel segmentation, which may lead to a supportive diagnostic tool for assessing coronary plaque vulnerability.
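The final fusion step above combines the per-frame predictions of the differently sized ResNet-3D inputs by majority voting; a minimal sketch (the per-model labels below are hypothetical, not outputs of the actual networks):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-model label lists (one label per OCT frame) by majority vote."""
    n_frames = len(predictions[0])
    fused = []
    for i in range(n_frames):
        votes = Counter(model[i] for model in predictions)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Hypothetical per-frame outputs of the three input-size variants:
fused = majority_vote([
    ["calcified", "normal", "calcified"],   # 3-slice model
    ["calcified", "calcified", "normal"],   # 5-slice model
    ["calcified", "normal", "calcified"],   # 7-slice model
])
```

With an odd number of models and two classes, every frame gets a strict majority, so the fused sequence is unambiguous.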
Collapse
Affiliation(s)
- Chunliu He
- Southeast University, School of Biological Science and Medical Engineering, Nanjing, China
| | - Jiaqiu Wang
- Queensland University of Technology, School of Mechanical, Medical and Process Engineering, Brisbane, Australia
| | - Yifan Yin
- Southeast University, School of Biological Science and Medical Engineering, Nanjing, China
| | - Zhiyong Li
- Southeast University, School of Biological Science and Medical Engineering, Nanjing, China
- Queensland University of Technology, School of Mechanical, Medical and Process Engineering, Brisbane, Australia
| |
Collapse
|
70
|
Yi C, Xu Y, Yu H, Yan Y, Liu Y. Multi-component transfer metric learning for handling unrelated source domain samples. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.106132] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
71
|
Hatabu H, Hunninghake GM, Richeldi L, Brown KK, Wells AU, Remy-Jardin M, Verschakelen J, Nicholson AG, Beasley MB, Christiani DC, San José Estépar R, Seo JB, Johkoh T, Sverzellati N, Ryerson CJ, Graham Barr R, Goo JM, Austin JHM, Powell CA, Lee KS, Inoue Y, Lynch DA. Interstitial lung abnormalities detected incidentally on CT: a Position Paper from the Fleischner Society. THE LANCET RESPIRATORY MEDICINE 2020; 8:726-737. [PMID: 32649920 DOI: 10.1016/s2213-2600(20)30168-5] [Citation(s) in RCA: 340] [Impact Index Per Article: 68.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/06/2020] [Revised: 03/20/2020] [Accepted: 03/31/2020] [Indexed: 12/12/2022]
Abstract
The term interstitial lung abnormalities refers to specific CT findings that are potentially compatible with interstitial lung disease in patients without clinical suspicion of the disease. Interstitial lung abnormalities are increasingly recognised as a common feature on CT of the lung in older individuals, occurring in 4-9% of smokers and 2-7% of non-smokers. Identification of interstitial lung abnormalities will increase with implementation of lung cancer screening, along with increased use of CT for other diagnostic purposes. These abnormalities are associated with radiological progression, increased mortality, and the risk of complications from medical interventions, such as chemotherapy and surgery. Management requires distinguishing interstitial lung abnormalities that represent clinically significant interstitial lung disease from those that are subclinical. In particular, it is important to identify the subpleural fibrotic subtype, which is more likely to progress and to be associated with mortality. This multidisciplinary Position Paper by the Fleischner Society addresses important issues regarding interstitial lung abnormalities, including standardisation of the definition and terminology; predisposing risk factors; clinical outcomes; options for initial evaluation, monitoring, and management; the role of quantitative evaluation; and future research needs.
Collapse
Affiliation(s)
- Hiroto Hatabu
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
| | - Gary M Hunninghake
- Department of Pulmonary and Critical Care Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Luca Richeldi
- Unitá Operativa Complessa di Pneumologia, Universitá Cattolica del Sacro Cuore, Fondazione Policlinico A Gemelli IRCCS, Rome, Italy
| | - Kevin K Brown
- Department of Medicine, Denver, CO, USA; National Jewish Health, Denver, CO, USA
| | - Athol U Wells
- Department of Respiratory Medicine, Royal Brompton and Hospital NHS Foundation Trust, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
| | - Martine Remy-Jardin
- Department of Thoracic Imaging, Hospital Calmette, University Centre of Lille, Lille, France
| | | | - Andrew G Nicholson
- Department of Histopathology, Royal Brompton and Hospital NHS Foundation Trust, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
| | - Mary B Beasley
- Department of Pathology, Icahn School of Medicine at Mount, New York, NY, USA
| | - David C Christiani
- Department of Environmental Health, Harvard T.H. Chan School of Public Health, Boston, MA, USA; Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Raúl San José Estépar
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Joon Beom Seo
- Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
| | - Takeshi Johkoh
- Department of Radiology, Kansai Rosai Hospital, Hyogo, Japan
| | | | - Christopher J Ryerson
- Department of Medicine, University of British Columbia and Centre for Heart Lung Innovations, St Paul's Hospital, Vancouver, BC, Canada
| | - R Graham Barr
- Department of Medicine and Department of Epidemiology, Columbia University Medical Center, New York, NY, USA
| | - Jin Mo Goo
- Department of Radiology, Seoul National University College of Medicine, Seoul, South Korea
| | - John H M Austin
- Department of Radiology, Columbia University Medical Center, New York, NY, USA
| | - Charles A Powell
- Pulmonary, Critical Care and Sleep Medicine, Icahn School of Medicine at Mount, New York, NY, USA
| | - Kyung Soo Lee
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
| | - Yoshikazu Inoue
- Clinical Research Center, National Hospital Organization Kinki-Chuo Chest Medical Center, Osaka, Japan
| | | |
Collapse
|
72
|
Improvement of Heterogeneous Transfer Learning Efficiency by Using Hebbian Learning Principle. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10165631] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
Transfer learning algorithms have been widely studied in machine learning in recent times. In particular, in image recognition and classification tasks, transfer learning has shown significant benefits and is receiving plenty of attention in the research community. When transferring knowledge between source and target tasks, a homogeneous dataset is not always available, and a heterogeneous dataset may have to be chosen in certain circumstances. In this article, we propose a way of improving transfer learning efficiency in the case of a heterogeneous source and target by using the Hebbian learning principle, called Hebbian transfer learning (HTL). In computer vision, biologically motivated approaches such as Hebbian learning represent associative learning, where simultaneous activation of brain cells increases the synaptic connection strength between the individual cells. The discriminative search for features in image classification fits well with such techniques, following the Hebbian learning rule: neurons that fire together wire together. Deep learning models such as convolutional neural networks (CNNs) are widely used for image classification. In transfer learning, the connection weights of the learned model should adapt to the new target dataset with minimum effort, and a discriminative learning rule such as Hebbian learning can improve performance by quickly adapting to discriminate between the classes defined by the target task. We apply the Hebbian principle as synaptic plasticity in transfer learning for image classification using a heterogeneous source-target dataset, and compare results with the standard transfer learning case. Experimental results using the CIFAR-10 (Canadian Institute for Advanced Research) and CIFAR-100 datasets in various combinations show that the proposed HTL algorithm can improve the performance of transfer learning, especially in the case of a heterogeneous source and target dataset.
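The Hebbian rule invoked above ("neurons that fire together wire together") amounts to a weight update proportional to the product of pre- and post-synaptic activity. A minimal numpy sketch of one such adaptation step, not the paper's full HTL algorithm:

```python
import numpy as np

def hebbian_update(W, x, eta=0.01):
    """One Hebbian step: strengthen weights between co-active units, dW = eta * y x^T."""
    y = np.maximum(W @ x, 0.0)          # post-synaptic (ReLU) activations
    return W + eta * np.outer(y, x)

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8)) * 0.1       # toy layer weights
x = rng.normal(size=8)                  # one input pattern
W2 = hebbian_update(W, x)
# Units that responded to x respond at least as strongly afterwards,
# since dW @ x = eta * y * (x . x) is non-negative elementwise.
```

In HTL this kind of local, label-free plasticity would be applied while adapting a network pretrained on the heterogeneous source to the target data.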
Collapse
|
73
|
Yang W, Shi Y, Park SH, Yang M, Gao Y, Shen D. An Effective MR-Guided CT Network Training for Segmenting Prostate in CT Images. IEEE J Biomed Health Inform 2020; 24:2278-2291. [DOI: 10.1109/jbhi.2019.2960153] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
74
|
Wei R, Liu B, Zhou F, Bai X, Fu D, Liang B, Wu Q. A patient-independent CT intensity matching method using conditional generative adversarial networks (cGAN) for single x-ray projection-based tumor localization. Phys Med Biol 2020; 65:145009. [PMID: 32320959 DOI: 10.1088/1361-6560/ab8bf2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
A convolutional neural network (CNN)-based tumor localization method using a single x-ray projection was previously developed by us. One finding was that the discrepancy in intensity between a digitally reconstructed radiograph (DRR) of a three-dimensional computed tomography (3D-CT) scan and the measured x-ray projection has an impact on performance. To address this issue, a patient-dependent intensity matching process for 3D-CT was performed using 3D cone-beam computed tomography (3D-CBCT) from the same patient, which was sometimes inefficient and could adversely affect the clinical implementation of the framework. To circumvent this, in this work we propose and validate a patient-independent intensity matching method based on a conditional generative adversarial network (cGAN). A 3D cGAN was trained to approximate the mapping from 3D-CT to 3D-CBCT using previous patient data. By applying the trained network to a new patient, a synthetic 3D-CBCT can be generated without performing an actual CBCT scan on that patient. The DRR of the synthetic 3D-CBCT was subsequently utilized in our CNN-based tumor localization scheme. The method was tested using data from 12 patients with the same imaging parameters. The resulting 3D-CBCT and DRR were compared with real ones to demonstrate the efficacy of the proposed method, and the tumor localization errors were analyzed. The difference between the synthetic and real 3D-CBCT had a median value of no more than 10 HU for all patients. The relative error between the DRR and the measured x-ray projection was less than 4.8% ± 2.0% for all patients. For the three patients with a visible tumor in the x-ray projections, the average tumor localization errors were below 1.7 and 0.9 mm in the superior-inferior and lateral directions, respectively. A patient-independent CT intensity matching method was thus developed, based on which accurate tumor localization was achieved. It does not require an actual CBCT scan before treatment for each patient, making it more efficient in the clinical workflow.
Collapse
Affiliation(s)
- Ran Wei
- Image Processing Center, Beihang University, Beijing 100191, People's Republic of China. These authors contributed equally
| | | | | | | | | | | | | |
Collapse
|
75
|
Duan W, Zhang J, Zhang L, Lin Z, Chen Y, Hao X, Wang Y, Zhang H. Evaluation of an artificial intelligent hydrocephalus diagnosis model based on transfer learning. Medicine (Baltimore) 2020; 99:e21229. [PMID: 32702895 PMCID: PMC7373556 DOI: 10.1097/md.0000000000021229] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
To design and develop an artificial intelligence (AI) hydrocephalus (HYC) imaging diagnostic model using a transfer learning algorithm, and to evaluate its application in the diagnosis of HYC from non-contrast-enhanced head computed tomographic (CT) images. The training and validation dataset of non-contrast-enhanced head CT examinations comprised 1000 patients with HYC and 1000 normal individuals without HYC, totaling 28,500 images. Images were preprocessed and the feature variables were labeled; the feature variables were then extracted by the neural network for transfer learning. AI algorithm performance was tested on a separate dataset containing 250 HYC and 250 normal examinations. A resident, an attending, and a consultant in the department of radiology were also tested on the test sets, and their results were compared with those of the AI model. Final model performance for HYC showed 93.6% sensitivity (95% confidence interval: 77%, 97%) and 94.4% specificity (95% confidence interval: 79%, 98%), with an area under the characteristic curve of 0.93. The accuracy rates of the model, resident, attending, and consultant were 94.0%, 93.4%, 95.6%, and 97.0%, respectively. AI can effectively identify the characteristics of HYC from brain CT images and automatically analyze the images. In the future, AI could provide auxiliary diagnosis of imaging results and reduce the burden on junior doctors.
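The sensitivity, specificity, and accuracy figures above follow from a standard confusion-matrix calculation. A small sketch with counts chosen to reproduce the reported percentages on the 250 + 250 test set (the exact per-cell counts are an inference from those percentages, not stated in the abstract):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Compute sensitivity, specificity, and accuracy from confusion counts."""
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# 234/250 HYC exams and 236/250 normal exams classified correctly:
sens, spec, acc = diagnostic_metrics(tp=234, fn=16, tn=236, fp=14)
```

This yields 93.6% sensitivity, 94.4% specificity, and 94.0% accuracy, matching the model's reported performance.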
Collapse
Affiliation(s)
- Weike Duan
- Department of Neurosurgery, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang
| | - Jinsen Zhang
- Department of Neurosurgery, Huashan Hospital, Fudan University
| | - Liang Zhang
- Shanghai Nanoperception Information Technology Co. Ltd, Shanghai, P.R. China
| | - Zongsong Lin
- Shanghai Nanoperception Information Technology Co. Ltd, Shanghai, P.R. China
| | - Yuhang Chen
- Department of Neurosurgery, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang
| | - Xiaowei Hao
- Department of Neurosurgery, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang
| | - Yixin Wang
- Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
| | - Hongri Zhang
- Department of Neurosurgery, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang
| |
Collapse
|
76
|
Dou Q, Liu Q, Heng PA, Glocker B. Unpaired Multi-Modal Segmentation via Knowledge Distillation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2415-2425. [PMID: 32012001 DOI: 10.1109/tmi.2019.2963882] [Citation(s) in RCA: 67] [Impact Index Per Article: 13.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Multi-modal learning is typically performed with network architectures containing modality-specific layers and shared layers, utilizing co-registered images of different modalities. We propose a novel learning scheme for unpaired cross-modality image segmentation, with a highly compact architecture achieving superior segmentation accuracy. In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI, and only employ modality-specific internal normalization layers which compute respective statistics. To effectively train such a highly compact model, we introduce a novel loss term inspired by knowledge distillation, by explicitly constraining the KL-divergence of our derived prediction distributions between modalities. We have extensively validated our approach on two multi-class segmentation problems: i) cardiac structure segmentation, and ii) abdominal organ segmentation. Different network settings, i.e., 2D dilated network and 3D U-net, are utilized to investigate our method's general efficacy. Experimental results on both tasks demonstrate that our novel multi-modal learning scheme consistently outperforms single-modal training and previous multi-modal approaches.
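The distillation-inspired loss above constrains the KL divergence between the class distributions the shared network predicts for the two modalities. A numpy sketch of that alignment term on a single pair of per-class probability vectors (the logits are made up for illustration):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over class logits."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) over class probabilities, smoothed by eps for stability."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

p_ct = softmax(np.array([2.0, 0.5, 0.1]))   # prediction from the CT branch
p_mr = softmax(np.array([1.8, 0.7, 0.2]))   # prediction from the MRI branch
loss_align = kl_divergence(p_ct, p_mr)      # penalizes cross-modality disagreement
```

During training such a term would be added to the segmentation losses, pushing the shared convolutional kernels toward modality-consistent predictions.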
Collapse
|
77
|
Hybrid deep learning convolutional neural networks and optimal nonlinear support vector machine to detect presence of hemorrhage in retina. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101978] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
78
|
Trusculescu AA, Manolescu D, Tudorache E, Oancea C. Deep learning in interstitial lung disease-how long until daily practice. Eur Radiol 2020; 30:6285-6292. [PMID: 32537728 PMCID: PMC7554005 DOI: 10.1007/s00330-020-06986-4] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2020] [Revised: 03/28/2020] [Accepted: 05/27/2020] [Indexed: 12/19/2022]
Abstract
Interstitial lung diseases are a diverse group of disorders that involve inflammation and fibrosis of the interstitium, with overlapping clinical, radiological, and pathological features. They are an important cause of morbidity and mortality among lung diseases. This review describes computer-aided diagnosis systems centered on deep learning approaches that improve the diagnosis of interstitial lung diseases. We highlight the challenges to implementation in daily practice, especially for the early diagnosis of idiopathic pulmonary fibrosis (IPF). Developing a convolutional neural network (CNN) that could be deployed on any computer station and be accessible to non-academic centers is the next frontier that needs to be crossed. In the future, early diagnosis of IPF should be possible; CNNs might not only spare human resources but also reduce the costs spent on all the social and healthcare aspects of this deadly disease. Key Points • Deep learning algorithms are used in pattern recognition of different interstitial lung diseases. • High-resolution computed tomography plays a central role in the diagnosis and management of all interstitial lung diseases, especially fibrotic lung disease. • Developing an accessible algorithm that could be deployed on any computer station and used in non-academic centers is the next frontier in the early diagnosis of idiopathic pulmonary fibrosis.
Collapse
Affiliation(s)
- Ana Adriana Trusculescu
- Department of Pulmonology, University of Medicine and Pharmacy "Victor Babes", Timisoara, Romania
| | - Diana Manolescu
- Department of Radiology, University of Medicine and Pharmacy "Victor Babes", Eftimie Murgu Square, Number 2, Timisoara, Romania.
| | - Emanuela Tudorache
- Department of Pulmonology, University of Medicine and Pharmacy "Victor Babes", Timisoara, Romania
| | - Cristian Oancea
- Department of Pulmonology, University of Medicine and Pharmacy "Victor Babes", Timisoara, Romania
| |
Collapse
|
79
|
Transfer Learning with Deep Convolutional Neural Network (CNN) for Pneumonia Detection Using Chest X-ray. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10093233] [Citation(s) in RCA: 115] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Pneumonia is a life-threatening disease of the lungs caused by either bacterial or viral infection. It can be life-endangering if not acted upon in time, so early diagnosis of pneumonia is vital. This paper aims to automatically detect bacterial and viral pneumonia using digital X-ray images. It provides a detailed report on advances in the accurate detection of pneumonia and then presents the methodology adopted by the authors. Four different pre-trained deep convolutional neural networks (CNNs), AlexNet, ResNet18, DenseNet201, and SqueezeNet, were used for transfer learning. A total of 5247 chest X-ray images, consisting of bacterial, viral, and normal chest X-rays, were preprocessed and used for the transfer-learning-based classification task. The authors report three classification schemes: normal vs. pneumonia, bacterial vs. viral pneumonia, and normal vs. bacterial vs. viral pneumonia. The classification accuracies for these three schemes were 98%, 95%, and 93.3%, respectively, the highest reported in the literature for each scheme. The proposed approach can therefore help radiologists diagnose pneumonia more quickly and can aid fast airport screening of pneumonia patients.
Collapse
|
80
|
Output based transfer learning with least squares support vector machine and its application in bladder cancer prognosis. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.11.010] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
81
|
Gong Y, Zhang Y, Zhu H, Lv J, Cheng Q, Zhang H, He Y, Wang S. Fetal Congenital Heart Disease Echocardiogram Screening Based on DGACNN: Adversarial One-Class Classification Combined with Video Transfer Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:1206-1222. [PMID: 31603775 DOI: 10.1109/tmi.2019.2946059] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Fetal congenital heart disease (FHD) is a common and serious congenital malformation in children; in Asia, FHD birth defect rates have reached as high as 9.3%. For the early detection of birth defects and mortality, echocardiography remains the most effective method for screening fetal heart malformations. However, standard echocardiograms of the fetal heart, especially four-chamber-view images, are difficult to obtain. In addition, pathophysiological changes in fetal hearts across different pregnancy periods lead to ever-changing two-dimensional heart structures and hemodynamics, so extensive professional knowledge is required to recognize and judge disease development. Research on automatic screening for FHD is therefore necessary. In this paper, we propose a new model named DGACNN, which shows the best performance in recognizing FHD, achieving a rate of 85%. The motivation for this network is the lack of sufficient training data for building a robust model: many unlabeled video slices exist, but they are difficult and time-consuming to annotate. How to use these un-annotated video slices to improve DGACNN's accuracy and robustness in recognizing FHD is therefore very meaningful for FHD screening. The DGACNN architecture comprises two parts, DANomaly and GACNN (WGAN-GP and CNN). DANomaly is similar to the ALOCC network but incorporates cycle adversarial learning to train an end-to-end one-class classification (OCC) network that is more robust and more accurate than ALOCC in screening video slices. For GACNN, we use four-chamber-heart (FCH) video slices around end-systole, as screened by DANomaly, to train a WGAN-GP and obtain low-level features that robustly improve FHD recognition accuracy. A few annotated video slices, also screened by DANomaly, are used for data augmentation to improve FHD recognition further. Experiments show that DGACNN outperforms other state-of-the-art networks by 1%-20% in recognizing FHD, and a comparison experiment shows that it already exceeds the performance of expert cardiologists, reaching 84% in a test. The proposed architecture therefore has high potential for helping cardiologists complete early FHD screenings.
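The one-class classification (OCC) idea underlying DANomaly, modeling only the normal class, scoring new samples by their deviation from that model, and flagging high scores as abnormal, can be illustrated in miniature. This is a deliberately simplified sketch: a centroid distance stands in for the reconstruction error of the paper's adversarially trained network, and the data are synthetic.

```python
import math
import random

random.seed(0)

# "Train" on normal samples only (stand-ins for normal four-chamber views).
normal = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(200)]
center = [sum(col) / len(normal) for col in zip(*normal)]

def anomaly_score(x):
    """Deviation from the normal-class model; a real OCC network would
    use a learned reconstruction error here instead of a distance."""
    return math.dist(x, center)

# Threshold at the 95th percentile of scores observed on normal data.
scores = sorted(anomaly_score(x) for x in normal)
threshold = scores[int(0.95 * len(scores))]

# Clearly out-of-distribution samples should all be flagged.
abnormal = [[random.gauss(5.0, 1.0) for _ in range(4)] for _ in range(20)]
flagged = sum(anomaly_score(x) > threshold for x in abnormal)
```

The appeal for screening, as in the paper, is that no abnormal examples are needed to set up the detector; only the decision threshold has to be chosen.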
|
82
|
Abstract
OBJECTIVES The objective of this study is to assess the performance of a computer-aided diagnosis (CAD) system (INTACT system) for the automatic classification of high-resolution computed tomography images into 4 radiological diagnostic categories and to compare this with the performance of radiologists on the same task. MATERIALS AND METHODS For the comparison, a total of 105 cases of pulmonary fibrosis were studied (54 cases of nonspecific interstitial pneumonia and 51 cases of usual interstitial pneumonia). All diagnoses were interstitial lung disease board consensus diagnoses (radiologically or histologically proven cases) and were retrospectively selected from our database. Two subspecialized chest radiologists made a consensual ground truth radiological diagnosis, according to the Fleischner Society recommendations. A comparison analysis was performed between the INTACT system and 2 other radiologists with different years of experience (readers 1 and 2). The INTACT system consists of a sequential pipeline in which first the anatomical structures of the lung are segmented, then the various types of pathological lung tissue are identified and characterized, and this information is then fed to a random forest classifier able to recommend a radiological diagnosis. RESULTS Reader 1, reader 2, and INTACT achieved similar accuracy for classifying pulmonary fibrosis into the original 4 categories: 0.6, 0.54, and 0.56, respectively, with P > 0.45. The INTACT system achieved an F-score (harmonic mean for precision and recall) of 0.56, whereas the 2 readers, on average, achieved 0.57 (P = 0.991). For the pooled classification (2 groups, with and without the need for biopsy), reader 1, reader 2, and CAD had similar accuracies of 0.81, 0.70, and 0.81, respectively. The F-score was again similar for the CAD system and the radiologists. The CAD system and the average reader reached F-scores of 0.80 and 0.79 (P = 0.898). 
CONCLUSIONS We found that a computer-aided detection algorithm based on machine learning was able to classify idiopathic pulmonary fibrosis with similar accuracy to a human reader.
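The F-score used to compare the CAD system with the readers is, as the abstract notes, the harmonic mean of precision and recall. As a quick reference:

```python
def f_score(precision, recall):
    """F1 measure: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. a hypothetical reader with precision 0.60 and recall 0.52
example = f_score(0.60, 0.52)
```

Because the harmonic mean is dominated by the smaller of the two values, an F-score near 0.56 implies neither precision nor recall was much higher than that.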
|
83
|
Negahdar M, Coy A, Beymer D. An End-to-End Deep Learning Pipeline for Emphysema Quantification Using Multi-label Learning. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:929-932. [PMID: 31946046 DOI: 10.1109/embc.2019.8857392] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
We propose and validate an end-to-end deep learning pipeline that employs multi-label learning to create differential diagnoses of lung pathology and to quantify the extent and distribution of emphysema in chest CT images. The pipeline first performs deep-learning-based volumetric lung segmentation with a 3D CNN to extract the entire lung from CT images. A multi-label learning model is then used to create differential diagnoses for emphysema, which are correlated with the emphysema diagnosed by radiologists. The five lung tissue patterns involved in most lung disease differential diagnoses were classified: ground glass, fibrosis, micronodules (random, perilymphatic, and centrilobular lung nodules), normal-appearing lung, and emphysematous lung tissue. To the best of our knowledge, this is the first end-to-end deep learning pipeline for creating differential diagnoses for lung disease and quantifying emphysema. A comparative analysis shows the performance of the proposed pipeline on two publicly available datasets.
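Multi-label learning, as used here, differs from ordinary multi-class classification: each tissue pattern receives an independent probability, and several patterns can be active in the same scan. A minimal sketch, where the pattern names follow the abstract but the probabilities and threshold are illustrative, not the paper's implementation:

```python
PATTERNS = ["ground_glass", "fibrosis", "micronodules", "normal", "emphysema"]

def multilabel_predict(probs, threshold=0.5):
    """Independent per-pattern probabilities -> a *set* of co-occurring
    labels, unlike softmax multi-class where exactly one label wins."""
    return [name for name, p in zip(PATTERNS, probs) if p >= threshold]

# Hypothetical sigmoid outputs for one scan: three patterns co-occur.
labels = multilabel_predict([0.9, 0.7, 0.2, 0.1, 0.8])
```

This is why a multi-label model can both build a differential diagnosis and quantify emphysema at the same time: emphysema is just one of several labels scored on every region.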
|
84
|
Huang S, Lee F, Miao R, Si Q, Lu C, Chen Q. A deep convolutional neural network architecture for interstitial lung disease pattern classification. Med Biol Eng Comput 2020; 58:725-737. [DOI: 10.1007/s11517-019-02111-w] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2019] [Accepted: 12/21/2019] [Indexed: 01/22/2023]
|
85
|
Wu L, Yang X, Cao W, Zhao K, Li W, Ye W, Chen X, Zhou Z, Liu Z, Liang C. Multiple Level CT Radiomics Features Preoperatively Predict Lymph Node Metastasis in Esophageal Cancer: A Multicentre Retrospective Study. Front Oncol 2020; 9:1548. [PMID: 32039021 PMCID: PMC6985546 DOI: 10.3389/fonc.2019.01548] [Citation(s) in RCA: 65] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2019] [Accepted: 12/20/2019] [Indexed: 12/24/2022] Open
Abstract
Background: Lymph node (LN) metastasis is the most important prognostic factor in esophageal squamous cell carcinoma (ESCC). Traditional clinical factors and existing CT-based methods are insufficiently effective in diagnosing LN metastasis, so a more efficient method to predict LN status from CT images is needed. Methods: In this multicenter retrospective study, 411 patients with pathologically confirmed ESCC were enrolled from two hospitals. Quantitative image features, including handcrafted, computer-vision (CV), and deep features, were extracted from each patient's preoperative arterial-phase CT images, and handcrafted-, CV-, and deep-radiomics signatures were built. Multiple radiomics models were then constructed by merging independent clinical risk factors into the radiomics signatures. Model performance was evaluated with respect to discrimination, calibration, and clinical usefulness, and an independent external validation cohort was used to validate predictive performance. Results: Five, seven, and nine features were selected to build the handcrafted-, CV-, and deep-radiomics signatures, respectively. All signatures differed significantly between LN-positive and LN-negative patients in all cohorts (p < 0.001). The multiple-level CT radiomics model integrating the radiomics signatures with clinical risk factors was superior to traditional clinical factors and to previously reported methods, achieving satisfactory discrimination with C-statistics of 0.875 in the development cohort, 0.874 in the internal validation cohort, and 0.840 in the independent external validation cohort. Nomogram and decision curve analysis (DCA) further confirmed that the method may serve as an effective tool for clinicians to evaluate the risk of LN metastasis in patients with ESCC and to choose a treatment strategy. Conclusions: The proposed multiple-level CT radiomics model, which integrates multiple levels of radiomics features with clinical risk factors, can be used for preoperative prediction of LN metastasis in patients with ESCC.
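The C-statistic reported above is the probability that a randomly chosen LN-positive patient receives a higher predicted risk than a randomly chosen LN-negative one. A direct pair-counting implementation makes this concrete; the scores below are made up for illustration:

```python
def c_statistic(scores, labels):
    """Concordance index: fraction of (positive, negative) pairs ranked
    correctly, with ties counted as half. For binary outcomes this
    equals the area under the ROC curve."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    concordant = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))

# Hypothetical predicted risks and LN status for five patients.
auc = c_statistic([0.9, 0.8, 0.3, 0.2, 0.6], [1, 0, 1, 0, 1])
```

A C-statistic of 0.875, as in the development cohort, means 87.5% of such positive/negative pairs are ranked correctly by the model.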
Affiliation(s)
- Lei Wu
- School of Medicine, South China University of Technology, Guangzhou, China.,Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Xiaojun Yang
- School of Medicine, South China University of Technology, Guangzhou, China.,Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Wuteng Cao
- School of Medicine, South China University of Technology, Guangzhou, China.,Department of Radiology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Ke Zhao
- School of Medicine, South China University of Technology, Guangzhou, China.,Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Wenli Li
- Department of Radiology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Weitao Ye
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Xin Chen
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Zhiyang Zhou
- Department of Radiology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Zaiyi Liu
- School of Medicine, South China University of Technology, Guangzhou, China.,Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Changhong Liang
- School of Medicine, South China University of Technology, Guangzhou, China.,Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
| |
|
86
|
Bermejo-Peláez D, Ash SY, Washko GR, San José Estépar R, Ledesma-Carbayo MJ. Classification of Interstitial Lung Abnormality Patterns with an Ensemble of Deep Convolutional Neural Networks. Sci Rep 2020; 10:338. [PMID: 31941918 PMCID: PMC6962320 DOI: 10.1038/s41598-019-56989-5] [Citation(s) in RCA: 45] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2019] [Accepted: 12/12/2019] [Indexed: 12/31/2022] Open
Abstract
Subtle interstitial changes in the lung parenchyma of smokers, known as interstitial lung abnormalities (ILA), have been associated with clinical outcomes, including mortality, even in the absence of interstitial lung disease (ILD). Although several methods have been proposed for the automatic identification of more advanced ILD patterns, few have tackled ILA, which likely precedes the development of ILD in some cases. In this context, we propose a novel methodology for automated identification and classification of ILA patterns in computed tomography (CT) images. The proposed method is an ensemble of deep convolutional neural networks (CNNs) that detects more discriminative features by incorporating two-, two-and-a-half-, and three-dimensional architectures, thereby enabling more accurate classification. Each individual CNN is first trained, and their output responses are then combined to form the overall ensemble output. To train and test the system, we used 37,424 radiographic tissue samples corresponding to eight different parenchymal feature classes from 208 CT scans. The resulting ensemble performance, including an average sensitivity of 91.41% and an average specificity of 98.18%, suggests it is potentially a viable method to identify radiographic patterns that precede the development of ILD.
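The ensemble combines the output responses of the individually trained 2D, 2.5D, and 3D CNNs. The simplest such combination, averaging per-class probabilities and taking the argmax, is shown below; note the averaging rule is an assumption for illustration, since the abstract does not spell out the exact fusion:

```python
def ensemble_predict(member_probs):
    """Average the class-probability vectors from each member CNN,
    then pick the class with the highest mean probability."""
    n = len(member_probs)
    k = len(member_probs[0])
    avg = [sum(m[j] for m in member_probs) / n for j in range(k)]
    return avg.index(max(avg)), avg

# Hypothetical responses of a 2D, a 2.5D, and a 3D network over 3 classes.
cls, avg = ensemble_predict([
    [0.6, 0.3, 0.1],   # 2D CNN
    [0.2, 0.5, 0.3],   # 2.5D CNN
    [0.5, 0.4, 0.1],   # 3D CNN
])
```

Averaging lets members that view the tissue at different dimensionalities vote, so a class favored by two of the three architectures wins even when one member disagrees.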
Affiliation(s)
- David Bermejo-Peláez
- Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid & CIBER-BBN, Madrid, Spain.
| | - Samuel Y Ash
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Brigham and Women's Hospital, Boston, MA, USA
| | - George R Washko
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Brigham and Women's Hospital, Boston, MA, USA
| | - Raúl San José Estépar
- Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts, United States of America
| | - María J Ledesma-Carbayo
- Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid & CIBER-BBN, Madrid, Spain
| |
|
87
|
Ebner L, Christodoulidis S, Stathopoulou T, Geiser T, Stalder O, Limacher A, Heverhagen JT, Mougiakakou SG, Christe A. Meta-analysis of the radiological and clinical features of Usual Interstitial Pneumonia (UIP) and Nonspecific Interstitial Pneumonia (NSIP). PLoS One 2020; 15:e0226084. [PMID: 31929532 PMCID: PMC6957301 DOI: 10.1371/journal.pone.0226084] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2019] [Accepted: 11/18/2019] [Indexed: 02/02/2023] Open
Abstract
PURPOSE To conduct a meta-analysis to determine specific computed tomography (CT) patterns and clinical features that discriminate between nonspecific interstitial pneumonia (NSIP) and usual interstitial pneumonia (UIP). MATERIALS AND METHODS The PubMed/Medline and Embase databases were searched for studies describing the radiological patterns of UIP and NSIP in chest CT images. Only studies involving histologically confirmed diagnoses and a consensus diagnosis by an interstitial lung disease (ILD) board were included in this analysis. The radiological patterns and patient demographics were extracted from suitable articles. We used random-effects meta-analysis by DerSimonian & Laird and calculated pooled odds ratios for binary data and pooled mean differences for continuous data. RESULTS Of the 794 search results, 33 articles describing 2,318 patients met the inclusion criteria. Twelve of these studies included both NSIP (338 patients) and UIP (447 patients). NSIP-patients were significantly younger (NSIP: median age 54.8 years, UIP: 59.7 years; mean difference (MD) -4.4; p = 0.001; 95% CI: -6.97 to -1.77), less often male (NSIP: median 52.8%, UIP: 73.6%; pooled odds ratio (OR) 0.32; p<0.001; 95% CI: 0.17 to 0.60), and less often smokers (NSIP: median 55.1%, UIP: 73.9%; OR 0.42; p = 0.005; 95% CI: 0.23 to 0.77) than patients with UIP. The CT findings from patients with NSIP revealed significantly lower levels of the honeycombing pattern (NSIP: median 28.9%, UIP: 73.4%; OR 0.07; p<0.001; 95% CI: 0.02 to 0.30) with less peripheral predominance (NSIP: median 41.8%, UIP: 83.3%; OR 0.21; p<0.001; 95% CI: 0.11 to 0.38) and more subpleural sparing (NSIP: median 40.7%, UIP: 4.3%; OR 16.3; p = 0.005; 95% CI: 2.28 to 117). CONCLUSION Honeycombing with a peripheral predominance was significantly associated with a diagnosis of UIP. The NSIP pattern showed more subpleural sparing. 
The UIP pattern was predominantly observed in elderly males with a history of smoking, whereas NSIP occurred in a younger patient population.
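The DerSimonian & Laird random-effects pooling used in this meta-analysis can be written out directly for per-study log odds ratios. The study values below are hypothetical, not the paper's data:

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effects
    (here, log odds ratios) with within-study variances."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

# Three hypothetical studies of honeycombing in NSIP vs. UIP:
log_ors = [math.log(0.05), math.log(0.10), math.log(0.08)]
variances = [0.20, 0.25, 0.30]
pooled_log_or, tau2 = dersimonian_laird(log_ors, variances)
pooled_or = math.exp(pooled_log_or)
```

When the heterogeneity statistic Q does not exceed its degrees of freedom, tau-squared is truncated to zero and the result coincides with fixed-effect pooling, which is what happens with these toy inputs.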
Affiliation(s)
- Lukas Ebner
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
| | | | - Thomai Stathopoulou
- ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland
| | - Thomas Geiser
- Department for Pulmonary Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
| | - Odile Stalder
- CTU Bern and Institute of Social and Preventive Medicine (ISPM), University of Bern, Switzerland
| | - Andreas Limacher
- CTU Bern and Institute of Social and Preventive Medicine (ISPM), University of Bern, Switzerland
| | - Johannes T. Heverhagen
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
| | - Stavroula G. Mougiakakou
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland
| | - Andreas Christe
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
| |
|
88
|
Yang X, Wu L, Zhao K, Ye W, Liu W, Wang Y, Li J, Li H, Huang X, Zhang W, Huang Y, Chen X, Yao S, Liu Z, Liang C. Evaluation of human epidermal growth factor receptor 2 status of breast cancer using preoperative multidetector computed tomography with deep learning and handcrafted radiomics features. Chin J Cancer Res 2020; 32:175-185. [PMID: 32410795 PMCID: PMC7219093 DOI: 10.21147/j.issn.1000-9604.2020.02.05] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022] Open
Abstract
Objective To evaluate the human epidermal growth factor receptor 2 (HER2) status in patients with breast cancer using multidetector computed tomography (MDCT)-based handcrafted and deep radiomics features. Methods This retrospective study enrolled 339 female patients (primary cohort, n=177; validation cohort, n=162) with pathologically confirmed invasive breast cancer. Handcrafted and deep radiomics features were extracted from the MDCT images during the arterial phase. After the feature selection procedures, handcrafted and deep radiomics signatures and the combined model were built using multivariate logistic regression analysis. Performance was assessed by measures of discrimination, calibration, and clinical usefulness in the primary cohort and validated in the validation cohort. Results The handcrafted radiomics signature had a discriminative ability with a C-index of 0.739 [95% confidence interval (95% CI): 0.661−0.818] in the primary cohort and 0.695 (95% CI: 0.609−0.781) in the validation cohort. The deep radiomics signature also had a discriminative ability with a C-index of 0.760 (95% CI: 0.690−0.831) in the primary cohort and 0.777 (95% CI: 0.696−0.857) in the validation cohort. The combined model, which incorporated both the handcrafted and deep radiomics signatures, showed good discriminative ability with a C-index of 0.829 (95% CI: 0.767−0.890) in the primary cohort and 0.809 (95% CI: 0.740−0.879) in the validation cohort. Conclusions Handcrafted and deep radiomics features from MDCT images were associated with HER2 status in patients with breast cancer. Thus, these features could provide complementary aid for the radiological evaluation of HER2 status in breast cancer.
Affiliation(s)
- Xiaojun Yang
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China.,School of Medicine, South China University of Technology, Guangzhou 510006, China
| | - Lei Wu
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China.,School of Medicine, South China University of Technology, Guangzhou 510006, China
| | - Ke Zhao
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China.,School of Medicine, South China University of Technology, Guangzhou 510006, China
| | - Weitao Ye
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Weixiao Liu
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Yingyi Wang
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Jiao Li
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Hanxiao Li
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China.,School of Medicine, South China University of Technology, Guangzhou 510006, China
| | - Xiaomei Huang
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Wen Zhang
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Yanqi Huang
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Xin Chen
- Department of Radiology, Guangzhou First People's Hospital, Guangzhou 510180, China
| | - Su Yao
- Department of Pathology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
| | - Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China.,School of Medicine, South China University of Technology, Guangzhou 510006, China
| | - Changhong Liang
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China.,School of Medicine, South China University of Technology, Guangzhou 510006, China
| |
|
89
|
Ma J, Song Y, Tian X, Hua Y, Zhang R, Wu J. Survey on deep learning for pulmonary medical imaging. Front Med 2019; 14:450-469. [PMID: 31840200 DOI: 10.1007/s11684-019-0726-4] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2019] [Accepted: 10/12/2019] [Indexed: 12/27/2022]
Abstract
As a promising method in artificial intelligence, deep learning has been proven successful in several domains ranging from acoustics and images to natural language processing. With medical imaging becoming an important part of disease screening and diagnosis, deep learning-based approaches have emerged as powerful techniques in medical image analysis. In this process, feature representations are learned directly and automatically from data, leading to remarkable breakthroughs in the medical field. This paper reviews the major deep learning techniques in this time of rapid evolution and summarizes some of their key contributions and state-of-the-art outcomes. The topics include classification, detection, and segmentation tasks in medical image analysis with respect to pulmonary medical images, datasets, and benchmarks. A comprehensive overview of these methods applied to various lung diseases, including pulmonary nodules, pulmonary embolism, pneumonia, and interstitial lung disease, is also provided. Lastly, the application of deep learning techniques to medical images is discussed, together with an analysis of future challenges and potential directions.
Affiliation(s)
| | - Yang Song
- Dalian Municipal Central Hospital Affiliated to Dalian Medical University, Dalian, 116033, China
| | - Xi Tian
- InferVision, Beijing, 100020, China
| | | | | | - Jianlin Wu
- Affiliated Zhongshan Hospital of Dalian University, Dalian, 116001, China.
| |
|
90
|
Iqbal M, Al-Sahaf H, Xue B, Zhang M. Genetic programming with transfer learning for texture image classification. Soft comput 2019. [DOI: 10.1007/s00500-019-03843-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
91
|
Xu R, Cong Z, Ye X, Hirano Y, Kido S, Gyobu T, Kawata Y, Honda O, Tomiyama N. Pulmonary Textures Classification via a Multi-Scale Attention Network. IEEE J Biomed Health Inform 2019; 24:2041-2052. [PMID: 31689221 DOI: 10.1109/jbhi.2019.2950006] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Precise classification of pulmonary textures is crucial for developing a computer-aided diagnosis (CAD) system for diffuse lung diseases (DLDs). Although deep learning techniques have been applied to this task, classification performance does not yet satisfy clinical requirements, since commonly used deep networks built by stacking convolutional blocks cannot learn feature representations discriminative enough to distinguish complex pulmonary textures. To address this problem, we design a multi-scale attention network (MSAN) architecture comprising several stacked residual attention modules followed by a multi-scale fusion module. Our deep network can not only exploit powerful information at different scales but also automatically select the optimal features for a more discriminative representation. In addition, we develop visualization techniques to make the proposed deep model transparent to humans. The proposed method is evaluated on a large dataset. Experimental results show that it achieves an average classification accuracy of 94.78% and an average F-value of 0.9475 over 7 categories of pulmonary textures, and the visualization results intuitively explain the working behavior of the deep network. The proposed method achieves state-of-the-art performance in classifying pulmonary textures on high-resolution CT images.
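The core idea of attention-based multi-scale fusion, learning weights that emphasize the most informative scale, can be shown in miniature. This is not the MSAN architecture, just the fusion principle, with toy one-hot features and hand-set attention logits:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_scales(scale_features, attention_logits):
    """Weight each scale's feature vector by a softmax attention score
    and sum, letting the network emphasize the most informative scale."""
    weights = softmax(attention_logits)
    dim = len(scale_features[0])
    return [sum(w * f[j] for w, f in zip(weights, scale_features))
            for j in range(dim)]

# Three scales with 4-dim features; logits strongly favor the middle scale.
fused = fuse_scales(
    [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]],
    [0.0, 5.0, 0.0],
)
```

In a real network the attention logits are themselves produced by learned layers, so the emphasis shifts per input rather than being fixed as it is here.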
|
92
|
Sun C, Xu A, Liu D, Xiong Z, Zhao F, Ding W. Deep Learning-Based Classification of Liver Cancer Histopathology Images Using Only Global Labels. IEEE J Biomed Health Inform 2019; 24:1643-1651. [PMID: 31670686 DOI: 10.1109/jbhi.2019.2949837] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Liver cancer is a leading cause of cancer deaths worldwide due to its high morbidity and mortality. Histopathological image analysis (HIA) is a crucial step in the early diagnosis of liver cancer and is routinely performed manually; however, this process is time-consuming, error-prone, and strongly dependent on the expertise of pathologists. Although computer-aided methods have recently been widely applied to medical image analysis, existing studies have not yet addressed the histopathological morphology of liver cancer, owing to its complex features and the lack of training images with detailed annotations. This paper proposes a deep learning method for liver cancer histopathological image classification that uses only global labels. To compensate for the lack of detailed cancer-region annotations, patch features are extracted and fully utilized: transfer learning provides the patch-level features, which are then combined with multiple-instance learning to obtain image-level features for classification. The proposed method thus addresses both large-scale image processing and the shortage of training samples in liver cancer histopathological image classification. It can distinguish abnormal from normal liver histopathological images with high accuracy, providing support for the early diagnosis of liver cancer.
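With only global (image-level) labels, multiple-instance learning must aggregate patch-level evidence into an image-level decision. A standard MIL aggregation rule is max over patch scores, under the assumption that a single abnormal patch makes the whole image abnormal; the paper's actual feature-level combination is more elaborate, so this is only the underlying principle:

```python
def mil_image_label(patch_scores, threshold=0.5):
    """Standard MIL bag rule: an image (bag) is abnormal if at least one
    patch (instance) is abnormal, so aggregate with max()."""
    image_score = max(patch_scores)
    label = "abnormal" if image_score >= threshold else "normal"
    return label, image_score

# Hypothetical patch scores for one whole-slide image.
label, score = mil_image_label([0.10, 0.05, 0.92, 0.30])
```

The max rule is what lets the model learn from slides where the tumor occupies only a tiny fraction of the tissue: the single high-scoring patch carries the image label.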
|
93
|
Li J, Wu W, Xue D, Gao P. Multi-Source Deep Transfer Neural Network Algorithm. SENSORS 2019; 19:s19183992. [PMID: 31527437 PMCID: PMC6767847 DOI: 10.3390/s19183992] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/26/2019] [Revised: 09/08/2019] [Accepted: 09/12/2019] [Indexed: 12/11/2022]
Abstract
Transfer learning can enhance the classification performance of a target domain with insufficient training data by utilizing knowledge from a source domain. Nowadays, two or more source domains are often available for knowledge transfer, which can improve learning performance in the target domain; however, mismatched probability distributions between domains degrade target-domain classification performance. Recent studies have shown that deep learning can build deep structures that extract more effective features to resist this mismatch. In this paper, we propose a new multi-source deep transfer neural network algorithm, MultiDTNN, based on convolutional neural networks and multi-source transfer learning. In MultiDTNN, joint probability distribution adaptation (JPDA) is used to reduce the mismatch between the source and target domains and thus enhance the transferability of source-domain features in deep neural networks. A convolutional neural network is then trained on each source-target dataset pair to obtain a set of classifiers. Finally, a selection strategy picks the classifier with the smallest classification error on the target domain from this set to assemble the MultiDTNN framework. The effectiveness of MultiDTNN is verified by comparing it with other state-of-the-art deep transfer learning methods on three datasets.
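The final selection step described above, keeping, from the classifiers trained on each source domain, the one with the smallest classification error on the target domain, reduces to a few lines. The threshold classifiers below are hypothetical stand-ins for the per-source CNNs:

```python
def select_best(classifiers, target_x, target_y):
    """Return the index of the classifier with the smallest error rate
    on labelled target-domain data, plus all error rates."""
    def error(clf):
        wrong = sum(clf(x) != y for x, y in zip(target_x, target_y))
        return wrong / len(target_y)
    errors = [error(clf) for clf in classifiers]
    return errors.index(min(errors)), errors

# Hypothetical per-source classifiers: simple threshold rules on a scalar.
clfs = [lambda x: int(x > 0.3), lambda x: int(x > 0.5), lambda x: int(x > 0.8)]
xs = [0.1, 0.4, 0.6, 0.9]
ys = [0, 0, 1, 1]
best, errors = select_best(clfs, xs, ys)
```

The design choice here is selection rather than fusion: instead of averaging all source-trained models, the framework commits to the single source whose decision boundary best matches the target distribution.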
Affiliation(s)
- Jingmei Li
- College of Computer Science and Technology, Harbin Engineering University, No.145 Nantong Street, Harbin 150001, China.
| | - Weifei Wu
- College of Computer Science and Technology, Harbin Engineering University, No.145 Nantong Street, Harbin 150001, China.
| | - Di Xue
- College of Computer Science and Technology, Harbin Engineering University, No.145 Nantong Street, Harbin 150001, China.
| | - Peng Gao
- College of Computer Science and Technology, Harbin Engineering University, No.145 Nantong Street, Harbin 150001, China.
| |
Collapse
|
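The selection strategy described in the MultiDTNN abstract above, namely train one classifier per source domain and keep the one with the smallest classification error on the target domain, can be sketched without the CNN and JPDA machinery. The following is a minimal illustration that stands in a nearest-centroid classifier for the per-source networks; the function names and toy data are hypothetical, not from the paper.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # one centroid per class; a toy stand-in for a per-source classifier
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(model, X):
    classes, centroids = model
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

def select_best_source(sources, X_tgt, y_tgt):
    """Train one classifier per source domain; keep the one with the
    smallest classification error on the labelled target sample."""
    best, best_err = None, np.inf
    for X_src, y_src in sources:
        model = nearest_centroid_fit(X_src, y_src)
        err = (nearest_centroid_predict(model, X_tgt) != y_tgt).mean()
        if err < best_err:
            best, best_err = model, err
    return best, best_err
```

A source whose distribution matches the target (here, one not shifted away from it) wins the selection, which is the intuition behind the paper's assembly step.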
94
|
Chen H, Song Y, Li X. A deep learning framework for identifying children with ADHD using an EEG-based brain network. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.04.058] [Citation(s) in RCA: 64] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
95
|
Kuzina A, Egorov E, Burnaev E. Bayesian Generative Models for Knowledge Transfer in MRI Semantic Segmentation Problems. Front Neurosci 2019; 13:844. [PMID: 31496928 PMCID: PMC6712162 DOI: 10.3389/fnins.2019.00844] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Accepted: 07/26/2019] [Indexed: 12/04/2022] Open
Abstract
Automatic segmentation methods based on deep learning have recently demonstrated state-of-the-art performance, outperforming conventional methods. Nevertheless, these methods perform poorly on small datasets, which are very common in medical problems. To this end, we propose a method for knowledge transfer between diseases via a generative Bayesian prior network. Our approach is compared with a pre-training approach and with random initialization, and obtains the best results in terms of the Dice similarity coefficient on small subsets of the Brain Tumor Segmentation 2018 database (BRATS2018).
Affiliation(s)
- Anna Kuzina
- Center for Computational and Data-Intensive Science and Engineering, Skolkovo Institute of Science and Technology, Moscow, Russia
- Evgenii Egorov
- Center for Computational and Data-Intensive Science and Engineering, Skolkovo Institute of Science and Technology, Moscow, Russia
- Evgeny Burnaev
- Center for Computational and Data-Intensive Science and Engineering, Skolkovo Institute of Science and Technology, Moscow, Russia
|
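The Dice similarity coefficient used as the evaluation metric in the entry above has a one-line definition, DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal NumPy version (an illustrative sketch, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|); eps guards against empty masks."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

Identical masks score close to 1, disjoint masks close to 0, which is why the metric is the standard report for segmentation quality.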
96
|
Gao M, Jiang H, Zhang D, Ma H, Qian W. Quantitative pathologic analysis of pulmonary nodules using three-dimensional computed tomography images based on latent Dirichlet allocation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2019:6255-6258. [PMID: 31947272 DOI: 10.1109/embc.2019.8856964] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The main purpose of this paper is to quantitatively predict the pathologic characteristics of pulmonary nodules using a novel and effective computer-aided diagnosis (CADx) scheme based on a latent Dirichlet allocation (LDA) model. To make use of the LDA model, we propose a novel 3D rotation-invariant LBP feature and construct image words from 3D pulmonary nodule slices via the K-means algorithm. A topic distribution for each pulmonary nodule is then obtained from the trained LDA model and used for pathologic analysis through rank-based statistics. Using the LIDC/IDRI database, experiments were conducted with different parameters, including the number of topics and the vocabulary size. The experiments demonstrate that performance reached accuracies above 80% for all characteristics. In particular, the study obtained an accuracy of 84.2% with a root mean square error (RMSE) of 1.068 on quantitative assessment of malignancy likelihood. Compared with a recent multi-task convolutional neural network regression study, the proposed method yields more accurate predictions of a pulmonary nodule's characteristics.
|
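The "image words" step described in the abstract above, quantizing local LBP descriptors against a K-means codebook and counting word occurrences per nodule, yields exactly the document-term counts an LDA model consumes. A small NumPy sketch of the assignment and counting step (the codebook and descriptors here are made up; the K-means training and the LDA fit itself are omitted):

```python
import numpy as np

def assign_words(features, codebook):
    """Map each local descriptor to its nearest codebook centroid,
    i.e. its 'visual word', as a K-means assignment step would."""
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def word_histogram(features, codebook):
    """Bag-of-words count vector for one nodule: the per-document
    word counts that an LDA topic model takes as input."""
    words = assign_words(features, codebook)
    return np.bincount(words, minlength=len(codebook))
```

Each nodule then becomes one "document" whose histogram is fed to the topic model, and the inferred topic distribution is what the paper analyzes with rank-based statistics.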
97
|
Jang HJ, Cho KO. Applications of deep learning for the analysis of medical data. Arch Pharm Res 2019; 42:492-504. [PMID: 31140082 DOI: 10.1007/s12272-019-01162-9] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2018] [Accepted: 05/20/2019] [Indexed: 02/06/2023]
Abstract
Over the past decade, deep learning has demonstrated superior performance in solving many problems in various fields of medicine compared with other machine learning methods. To understand how deep learning has surpassed traditional machine learning techniques, this review briefly explores the basic learning algorithms underlying deep learning. In addition, the procedures for building deep learning-based classifiers for seizure electroencephalograms and gastric tissue slides are described as examples to demonstrate the simplicity and effectiveness of deep learning applications. Finally, we review clinical applications of deep learning in radiology, pathology, and drug discovery, where it has been actively adopted. Given the great advantages of deep learning techniques, deep learning will be increasingly utilized across a wide variety of areas in medicine in the coming decades.
Affiliation(s)
- Hyun-Jong Jang
- Department of Physiology, Department of Biomedicine & Health Sciences, Catholic Neuroscience Institute, College of Medicine, The Catholic University of Korea, Seoul, 06591, South Korea
- Kyung-Ok Cho
- Department of Pharmacology, Department of Biomedicine & Health Sciences, Catholic Neuroscience Institute, Institute of Aging and Metabolic Diseases, College of Medicine, The Catholic University of Korea, 222 Banpo-Daero, Seocho-Gu, Seoul, 06591, South Korea.
|
98
|
Deep Learning in the Biomedical Applications: Recent and Future Status. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9081526] [Citation(s) in RCA: 75] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Deep neural networks currently represent the most effective machine learning technology in the biomedical domain. The areas of interest in this domain include omics (the study of the genome and of gene and protein expression: genomics, transcriptomics, proteomics, and metabolomics), bioimaging (the study of biological cells and tissue), medical imaging (the study of human organs through visual representations), the brain and body machine interface (BBMI), and public and medical health management (PmHM). This paper reviews the major deep learning concepts pertinent to such biomedical applications. Concise overviews are provided for omics and the BBMI. We end our analysis with a critical discussion, interpretation, and relevant open challenges.
|
99
|
Jeyaraj PR, Samuel Nadar ER. Computer-assisted medical image classification for early diagnosis of oral cancer employing deep learning algorithm. J Cancer Res Clin Oncol 2019; 145:829-837. [PMID: 30603908 DOI: 10.1007/s00432-018-02834-7] [Citation(s) in RCA: 106] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2018] [Accepted: 12/24/2018] [Indexed: 02/07/2023]
Abstract
PURPOSE Oral cancer is a complex, widespread cancer of high severity. With advanced technology and deep learning algorithms, early detection and classification become possible. Medical imaging techniques and computer-aided diagnosis and detection can bring substantial changes to cancer treatment. In this work, we developed a deep learning algorithm for an automated, computer-aided oral cancer detection system that analyzes patient hyperspectral images. METHODS To validate the proposed regression-based partitioned deep learning algorithm, we compared its performance with other techniques in terms of classification accuracy, specificity, and sensitivity. For accurate medical image classification, we demonstrate a new partitioned deep convolutional neural network (CNN) structure with two partitioned layers that label and classify regions of interest in multidimensional hyperspectral images. RESULTS The performance of the partitioned deep CNN was verified by classification accuracy. For the task of classifying cancerous tumor versus benign tissue, we obtained a classification accuracy of 91.4%, with a sensitivity of 0.94 and a specificity of 0.91, using 100 training image sets; for the task of classifying cancerous tumor versus normal tissue, an accuracy of 94.5% was obtained with 500 training patterns. CONCLUSIONS We compared these results with those of a traditional medical image classification algorithm. The comparison shows that the proposed regression-based partitioned CNN learning algorithm improves the quality of diagnosis for complex medical images in oral cancer diagnosis.
Affiliation(s)
- Pandia Rajan Jeyaraj
- Department of Electrical and Electronics Engineering, Mepco Schlenk Engineering College (Autonomous), Sivakasi, Tamil Nadu, India.
- Edward Rajan Samuel Nadar
- Department of Electrical and Electronics Engineering, Mepco Schlenk Engineering College (Autonomous), Sivakasi, Tamil Nadu, India
|
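The three figures reported in the abstract above (accuracy, sensitivity, specificity) all come straight from the binary confusion matrix. A self-contained helper, illustrative rather than the authors' evaluation code:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true positive rate) and specificity
    (true negative rate) from binary labels, 1 = positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity
```

Reporting sensitivity and specificity alongside accuracy matters in diagnosis tasks, since a classifier can reach high accuracy while missing most positives on an imbalanced dataset.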
100
|
Cheplygina V, de Bruijne M, Pluim JPW. Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Med Image Anal 2019; 54:280-296. [PMID: 30959445 DOI: 10.1016/j.media.2019.03.009] [Citation(s) in RCA: 361] [Impact Index Per Article: 60.2] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2018] [Revised: 12/20/2018] [Accepted: 03/25/2019] [Indexed: 02/07/2023]
Abstract
Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a frequently mentioned challenge for supervised ML algorithms is the lack of annotated data. As a result, various methods that can learn with less or other types of supervision have been proposed. We give an overview of semi-supervised, multiple-instance, and transfer learning in medical imaging, in both diagnosis and segmentation tasks. We also discuss connections between these learning scenarios and opportunities for future research. A dataset with the details of the surveyed papers is available via https://figshare.com/articles/Database_of_surveyed_literature_in_Not-so-supervised_a_survey_of_semi-supervised_multi-instance_and_transfer_learning_in_medical_image_analysis_/7479416.
Affiliation(s)
- Veronika Cheplygina
- Medical Image Analysis, Department Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands.
- Marleen de Bruijne
- Biomedical Imaging Group Rotterdam, Departments Radiology and Medical Informatics, Erasmus Medical Center, Rotterdam, the Netherlands; The Image Section, Department Computer Science, University of Copenhagen, Copenhagen, Denmark
- Josien P W Pluim
- Medical Image Analysis, Department Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands; Image Sciences Institute, University Medical Center Utrecht, Utrecht, the Netherlands
|
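Of the three "not-so-supervised" scenarios surveyed above, semi-supervised learning is the easiest to illustrate: a greedy self-training loop that repeatedly pseudo-labels the unlabelled point nearest to the current labelled set. The sketch below uses a 1-nearest-neighbour rule on toy data; it is entirely hypothetical and not taken from the survey.

```python
import numpy as np

def nn_label(X_lab, y_lab, x):
    # label of the nearest labelled point
    return y_lab[((X_lab - x) ** 2).sum(axis=-1).argmin()]

def self_train(X_lab, y_lab, X_unlab):
    """Greedy self-training: at each step, pseudo-label the unlabelled
    point closest to the current labelled set and absorb it, so easy
    points are labelled first and labels propagate outwards."""
    X_lab, y_lab = np.asarray(X_lab, float), np.asarray(y_lab)
    pool = list(range(len(X_unlab)))
    while pool:
        dists = [((X_lab - X_unlab[i]) ** 2).sum(axis=-1).min() for i in pool]
        i = pool.pop(int(np.argmin(dists)))
        label = nn_label(X_lab, y_lab, X_unlab[i])
        X_lab = np.vstack([X_lab, X_unlab[i]])
        y_lab = np.append(y_lab, label)
    return X_lab, y_lab
```

The absorb-closest-first order is what makes this work on clustered data: confident pseudo-labels are committed before ambiguous ones.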