351
Yanase J, Triantaphyllou E. The seven key challenges for the future of computer-aided diagnosis in medicine. Int J Med Inform 2019;129:413-422. [DOI: 10.1016/j.ijmedinf.2019.06.017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Received: 01/20/2019] [Revised: 06/15/2019] [Accepted: 06/19/2019] [Indexed: 12/23/2022]
352
Tang P, Liang Q, Yan X, Xiang S, Sun W, Zhang D, Coppola G. Efficient skin lesion segmentation using separable-Unet with stochastic weight averaging. Comput Methods Programs Biomed 2019;178:289-301. [PMID: 31416556 DOI: 10.1016/j.cmpb.2019.07.005] [Citation(s) in RCA: 61] [Impact Index Per Article: 10.2] [Received: 04/29/2019] [Revised: 07/04/2019] [Accepted: 07/04/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Efficient segmentation of skin lesions in dermoscopy images can improve the classification accuracy of skin diseases, providing a powerful tool for dermatologists examining pigmented skin lesions. However, segmentation is challenging owing to the low contrast of skin lesions in captured images, fuzzy and indistinct lesion boundaries, the large variation among melanomas, the presence of artifacts, etc. In this work, an efficient and accurate melanoma region segmentation method is proposed for computer-aided diagnostic systems. METHOD A skin lesion segmentation (SLS) method based on the separable-Unet with stochastic weight averaging is proposed. Specifically, the proposed Separable-Unet framework combines separable convolutional blocks with the U-Net architecture, effectively capturing context feature-channel correlations and higher-level semantic information to enhance the pixel-level discriminative representation capability of fully convolutional networks (FCNs). Further, because over-fitting is a local (or sub-) optimum problem, a scheme based on stochastic weight averaging is introduced, which finds much broader optima and yields better generalization. RESULTS The proposed method was evaluated on three publicly available datasets. The experimental results showed that it segmented skin lesions with an average Dice coefficient of 93.03% and a Jaccard index of 89.25% on the International Skin Imaging Collaboration (ISIC) 2016 Skin Lesion Challenge (SLC) dataset, 86.93% and 79.26% on the ISIC 2017 SLC, and 94.13% and 89.40% on the PH2 dataset, respectively. Compared with other state-of-the-art methods, the proposed approach performs better for SLS on both melanoma and non-melanoma cases. Segmenting a potential lesion in a dermoscopy image requires less than 0.05 s of processing time, roughly 30 times faster than the second-best method (by Jaccard index) on the ISIC 2017 dataset with the same hardware configuration. CONCLUSIONS Combining separable convolutional blocks and the U-Net architecture with a stochastic weight averaging strategy yields better pixel-level discriminative representations. Moreover, the considerably reduced computation time suggests that the proposed approach has potential for practical computer-aided diagnosis systems, besides providing improved segmentation for specific analyses.
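The stochastic weight averaging (SWA) scheme named above is a general training technique; a minimal PyTorch sketch follows (not the authors' code — the toy network, synthetic data, and averaging schedule are illustrative assumptions):

```python
import torch
from torch import nn
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

# Toy stand-in for the separable-Unet; SWA itself is model-agnostic.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8),
                      nn.ReLU(), nn.Conv2d(8, 1, 1))
loss_fn = nn.BCEWithLogitsLoss()
# Synthetic stand-ins for dermoscopy images and binary lesion masks.
data = [(torch.randn(2, 3, 64, 64), torch.rand(2, 1, 64, 64).round())
        for _ in range(8)]

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
swa_model = AveragedModel(model)            # keeps a running average of weights
swa_scheduler = SWALR(optimizer, swa_lr=0.005)

for epoch in range(20):
    for x, y in data:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    if epoch >= 15:                         # average only late in training
        swa_model.update_parameters(model)  # accumulate the weight average
        swa_scheduler.step()

update_bn(data, swa_model)                  # refresh BatchNorm running stats
```

Averaging weights from the tail of the SGD trajectory tends to land in a flatter, broader optimum, which is the generalization effect the abstract attributes to SWA.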
Affiliation(s)
- Peng Tang
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China; Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Hunan University, Changsha 410082, China; National Engineering Laboratory for Robot Vision Perception and Control, Hunan University, Changsha 410082, China
- Qiaokang Liang
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China; Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Hunan University, Changsha 410082, China; National Engineering Laboratory for Robot Vision Perception and Control, Hunan University, Changsha 410082, China
- Xintong Yan
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
- Shao Xiang
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China; Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Hunan University, Changsha 410082, China; National Engineering Laboratory for Robot Vision Perception and Control, Hunan University, Changsha 410082, China
- Wei Sun
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China; Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Hunan University, Changsha 410082, China; National Engineering Laboratory for Robot Vision Perception and Control, Hunan University, Changsha 410082, China
- Dan Zhang
- Department of Mechanical Engineering, York University, Toronto, ON M3J 1P3, Canada
- Gianmarc Coppola
- Faculty of Engineering and Applied Science, University of Ontario Institute of Technology, Oshawa, ON L1H 7K4, Canada
353
Sun L, Zhang D, Lian C, Wang L, Wu Z, Shao W, Lin W, Shen D, Li G. Topological correction of infant white matter surfaces using anatomically constrained convolutional neural network. Neuroimage 2019;198:114-124. [PMID: 31112785 PMCID: PMC6602545 DOI: 10.1016/j.neuroimage.2019.05.037] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Received: 01/10/2019] [Revised: 05/08/2019] [Accepted: 05/14/2019] [Indexed: 01/02/2023]
Abstract
Reconstruction of accurate cortical surfaces without topological errors (i.e., handles and holes) from infant brain MR images is very important in early brain development studies. However, infant brain MR images typically suffer from extremely low tissue contrast and dynamic imaging appearance patterns. Consequently, large numbers of topological errors are inevitable in segmented infant brain tissue images, leading to inaccurately reconstructed cortical surfaces. To address this issue, inspired by recent advances in deep learning, we propose an anatomically constrained network for topological correction of infant cortical surfaces. Specifically, we first locate regions of potential topological defects by leveraging a topology-preserving level set method. Then, we use an anatomically constrained network to correct the candidate voxels in the located regions. Since infant cortical surfaces often contain large and complex handles or holes, it is difficult to correct all errors in a single pass. Therefore, we embed these two steps in an iterative framework to gradually correct large topological errors. To the best of our knowledge, this is the first work to introduce a deep learning approach for topological correction of infant cortical surfaces. We compare our method with state-of-the-art methods on both simulated and real topological errors in human infant brain MR images, and further validate it on infant brain MR images of macaques. All experimental results show the superior performance of the proposed method.
Affiliation(s)
- Liang Sun
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, 211106, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27599, USA
- Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, 211106, China
- Chunfeng Lian
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27599, USA
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27599, USA
- Zhengwang Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27599, USA
- Wei Shao
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, 211106, China
- Weili Lin
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27599, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea
- Gang Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27599, USA
354
Deep learning in medical image analysis: A third eye for doctors. J Stomatol Oral Maxillofac Surg 2019;120:279-288. [DOI: 10.1016/j.jormas.2019.06.002] [Citation(s) in RCA: 90] [Impact Index Per Article: 15.0] [Received: 05/24/2019] [Revised: 06/11/2019] [Accepted: 06/18/2019] [Indexed: 12/22/2022]
355
Wang Y, Guan Q, Lao I, Wang L, Wu Y, Li D, Ji Q, Wang Y, Zhu Y, Lu H, Xiang J. Using deep convolutional neural networks for multi-classification of thyroid tumor by histopathology: a large-scale pilot study. Ann Transl Med 2019;7:468. [PMID: 31700904 DOI: 10.21037/atm.2019.08.54] [Citation(s) in RCA: 43] [Impact Index Per Article: 7.2] [Indexed: 12/27/2022]
Abstract
Background To explore whether deep convolutional neural networks (DCNNs) have the potential to improve diagnostic efficiency and increase the level of interobserver agreement in the classification of thyroid nodules in histopathological slides. Methods A total of 11,715 fragmented images from 806 patients' original histological images were divided into a training dataset and a test dataset. Inception-ResNet-v2 and VGG-19 were trained using the training dataset and tested using the test dataset to determine the diagnostic efficiencies of different histologic types of thyroid nodules, including normal tissue, adenoma, nodular goiter, papillary thyroid carcinoma (PTC), follicular thyroid carcinoma (FTC), medullary thyroid carcinoma (MTC) and anaplastic thyroid carcinoma (ATC). Misdiagnoses were further analyzed. Results The total 11,715 fragmented images were divided into a training dataset and a test dataset for each pathology type at a ratio of 5:1. Using the test set, VGG-19 yielded a better average diagnostic accuracy than did Inception-ResNet-v2 (97.34% vs. 94.42%, respectively). The VGG-19 model applied to 7 pathology types showed a fragmentation accuracy of 88.33% for normal tissue, 98.57% for ATC, 98.89% for FTC, 100% for MTC, 97.77% for PTC, 100% for nodular goiter and 92.44% for adenoma. It achieved excellent diagnostic efficiencies for all the malignant types. Normal tissue and adenoma were the most challenging histological types to classify. Conclusions The DCNN models, especially VGG-19, achieved satisfactory accuracies on the task of differentiating thyroid tumors by histopathology. Analysis of the misdiagnosed cases revealed that normal tissue and adenoma were the most challenging histological types for the DCNN to differentiate, while all the malignant classifications achieved excellent diagnostic efficiencies. The results indicate that DCNN models may have potential for facilitating histopathologic thyroid disease diagnosis.
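For readers who want the flavor of such a transfer-learning setup, here is a generic sketch (torchvision ≥ 0.13; not the authors' pipeline — the tensors and labels are synthetic stand-ins for image fragments of the seven classes):

```python
import torch
from torch import nn
from torchvision import models

# ImageNet-pretrained VGG-19 with its last layer swapped for 7 classes
# (normal, adenoma, nodular goiter, PTC, FTC, MTC, ATC).
model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 7)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)    # a batch of 4 fragmented images (synthetic)
y = torch.tensor([0, 3, 3, 6])     # their class indices (synthetic)
optimizer.zero_grad()
criterion(model(x), y).backward()  # one fine-tuning step
optimizer.step()
```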
Affiliation(s)
- Yunjun Wang
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Qing Guan
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Iweng Lao
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China; Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai 200032, China
- Li Wang
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yi Wu
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Duanshu Li
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Qinghai Ji
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Yu Wang
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Yongxue Zhu
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Hongtao Lu
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Jun Xiang
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
356
Guan Q, Wang Y, Ping B, Li D, Du J, Qin Y, Lu H, Wan X, Xiang J. Deep convolutional neural network VGG-16 model for differential diagnosing of papillary thyroid carcinomas in cytological images: a pilot study. J Cancer 2019;10:4876-4882. [PMID: 31598159 PMCID: PMC6775529 DOI: 10.7150/jca.28769] [Citation(s) in RCA: 90] [Impact Index Per Article: 15.0] [Received: 07/25/2018] [Accepted: 07/28/2019] [Indexed: 12/22/2022]
Abstract
Objective: In this study, we exploited a VGG-16 deep convolutional neural network (DCNN) model to differentiate papillary thyroid carcinoma (PTC) from benign thyroid nodules using cytological images. Methods: A pathology-proven dataset was built from 279 cytological images of thyroid nodules. The images were cropped into fragmented images and divided into a training dataset and a test dataset. VGG-16 and Inception-v3 DCNNs were trained and tested to make differential diagnoses. The characteristics of tumor cell nuclei were quantified as contour count, perimeter, area, and mean pixel intensity and compared using independent Student's t-tests. Results: In the test group, the accuracy rates of the VGG-16 and Inception-v3 models on fragmented images were 97.66% and 92.75%, respectively, and their accuracy rates at the patient level were 95% and 87.5%, respectively. The contour count, perimeter, area, and mean pixel intensity of PTC in fragmented images were greater than those of benign nodules (61.01±17.10 vs. 47.00±24.08, p=0.000; 134.99±21.42 vs. 62.40±29.15, p=0.000; 1770.89±627.22 vs. 1157.27±722.23, p=0.013; and 165.84±26.33 vs. 132.94±28.73, p=0.000, respectively). Conclusion: After training with a large dataset, the VGG-16 DCNN model showed great potential in facilitating PTC diagnosis from cytological images, with nuclear contour count, perimeter, area, and mean pixel intensity all higher for PTC than for benign nodules.
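The nuclear measurements and the independent t-test mentioned above can be sketched as follows (illustrative only: the fixed threshold and the sample values are placeholder assumptions, not the study's pipeline):

```python
import cv2
import numpy as np
from scipy import stats

def nucleus_features(gray_img, thresh=128):
    """Per-image nuclear measurements: contour count, total perimeter,
    total area, and mean pixel intensity (the four features compared above)."""
    _, binary = cv2.threshold(gray_img, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    perimeter = sum(cv2.arcLength(c, True) for c in contours)
    area = sum(cv2.contourArea(c) for c in contours)
    return len(contours), perimeter, area, float(gray_img.mean())

img = (np.random.rand(64, 64) * 255).astype(np.uint8)  # synthetic image
print(nucleus_features(img))

# Independent two-sample t-test on one feature (placeholder values).
ptc_areas = np.array([1770.0, 1820.5, 1691.2])
benign_areas = np.array([1157.0, 1203.4, 1099.8])
t, p = stats.ttest_ind(ptc_areas, benign_areas)
print(f"t = {t:.2f}, p = {p:.4f}")
```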
Affiliation(s)
- Qing Guan
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Yunjun Wang
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Bo Ping
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Duanshu Li
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Jiajun Du
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yu Qin
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Hongtao Lu
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaochun Wan
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Jun Xiang
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
357
Joyseeree R, Otálora S, Müller H, Depeursinge A. Fusing learned representations from Riesz Filters and Deep CNN for lung tissue classification. Med Image Anal 2019;56:172-183. [DOI: 10.1016/j.media.2019.06.006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Received: 06/04/2018] [Revised: 12/23/2018] [Accepted: 06/11/2019] [Indexed: 10/26/2022]
358
Fang L, Wang C, Li S, Rabbani H, Chen X, Liu Z. Attention to Lesion: Lesion-Aware Convolutional Neural Network for Retinal Optical Coherence Tomography Image Classification. IEEE Trans Med Imaging 2019;38:1959-1970. [PMID: 30763240 DOI: 10.1109/tmi.2019.2898414] [Citation(s) in RCA: 89] [Impact Index Per Article: 14.8] [Indexed: 05/22/2023]
Abstract
Automatic and accurate classification of retinal optical coherence tomography (OCT) images is essential to assist ophthalmologists in the diagnosis and grading of macular diseases. Clinically, ophthalmologists usually diagnose macular diseases according to the structures of macular lesions, whose morphology, size, and number are important criteria. In this paper, we propose a novel lesion-aware convolutional neural network (LACNN) method for retinal OCT image classification, in which retinal lesions within OCT images are utilized to guide the CNN toward more accurate classification. The LACNN simulates an ophthalmologist's diagnosis by focusing on local lesion-related regions when analyzing the OCT image. Specifically, we first design a lesion detection network to generate a soft attention map from the whole OCT image. The attention map is then incorporated into a classification network to weight the contributions of local convolutional representations. Guided by the lesion attention map, the classification network can exploit information from local lesion-related regions to further accelerate network training and improve OCT classification. Our experimental results on two clinically acquired OCT datasets demonstrate the effectiveness and efficiency of the proposed LACNN method for retinal OCT image classification.
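The attention-weighting mechanism described above can be sketched in a few lines (a generic toy network, not the published LACNN architecture):

```python
import torch
from torch import nn

class LesionAttention(nn.Module):
    """Sketch of the idea above: a soft attention map from a lesion-detection
    branch reweights the classifier's convolutional feature maps."""
    def __init__(self, channels=32, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1),
                                      nn.ReLU())
        self.lesion_head = nn.Conv2d(channels, 1, 1)   # attention logits
        self.classifier = nn.Linear(channels, num_classes)

    def forward(self, x):
        f = self.features(x)                       # B x C x H x W
        attn = torch.sigmoid(self.lesion_head(f))  # soft lesion attention map
        weighted = f * attn                        # emphasize lesion regions
        pooled = weighted.mean(dim=(2, 3))         # global average pooling
        return self.classifier(pooled), attn

model = LesionAttention()
logits, attn_map = model(torch.randn(2, 1, 128, 128))  # e.g., OCT B-scans
```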
359
Huang Z, Li Y, Jin L, Li H. Evaluating flatfoot based on gait plantar pressure data in juveniles by a neural network method. Footwear Sci 2019. [DOI: 10.1080/19424280.2019.1606302] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/26/2022]
Affiliation(s)
- Zhiguan Huang
- Guangdong Provincial Engineering Technology Research Center for Sports Assistive Device, Guangzhou Sport University, Guangzhou, China
- Yuhe Li
- Guangdong Provincial Engineering Technology Research Center for Sports Assistive Device, Guangzhou Sport University, Guangzhou, China
- Long Jin
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Hongwei Li
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
360
Xu S, Zou X, Ma B, Chen J, Yu L, Zou W. Deep-learning-powered photonic analog-to-digital conversion. Light Sci Appl 2019;8:66. [PMID: 31645915 PMCID: PMC6804794 DOI: 10.1038/s41377-019-0176-4] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Received: 03/24/2019] [Revised: 06/19/2019] [Accepted: 06/20/2019] [Indexed: 05/31/2023]
Abstract
Analog-to-digital converters (ADCs) must be high-speed, broadband, and accurate for the development of modern information systems, such as radar, imaging, and communications systems; photonic technologies are regarded as promising for realizing these advanced requirements. Here, we present a deep-learning-powered photonic ADC architecture that simultaneously exploits the advantages of electronics and photonics, overcomes the bottlenecks of the two technologies, and thereby breaks the ADC tradeoff among speed, bandwidth, and accuracy. Via supervised training, the adopted deep neural networks learn the patterns of photonic system defects and recover the distorted data, thereby maintaining the high quality of the electronically quantized data succinctly and adaptively. Numerical and experimental results demonstrate that the proposed architecture outperforms state-of-the-art ADCs with developable high throughput; hence, deep learning performs well in photonic ADC systems. We anticipate that the proposed architecture will inspire future high-performance photonic ADC designs and provide opportunities for substantial performance enhancement in next-generation information systems.
Affiliation(s)
- Shaofu Xu
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Intelligent Microwave Lightwave Integration Innovation Center (iMLic), Department of Electronic Engineering, Shanghai Jiao Tong University, 200240 Shanghai, China
- Xiuting Zou
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Intelligent Microwave Lightwave Integration Innovation Center (iMLic), Department of Electronic Engineering, Shanghai Jiao Tong University, 200240 Shanghai, China
- Bowen Ma
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Intelligent Microwave Lightwave Integration Innovation Center (iMLic), Department of Electronic Engineering, Shanghai Jiao Tong University, 200240 Shanghai, China
- Jianping Chen
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Intelligent Microwave Lightwave Integration Innovation Center (iMLic), Department of Electronic Engineering, Shanghai Jiao Tong University, 200240 Shanghai, China
- Lei Yu
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Intelligent Microwave Lightwave Integration Innovation Center (iMLic), Department of Electronic Engineering, Shanghai Jiao Tong University, 200240 Shanghai, China
- Weiwen Zou
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Intelligent Microwave Lightwave Integration Innovation Center (iMLic), Department of Electronic Engineering, Shanghai Jiao Tong University, 200240 Shanghai, China
361
Peng H, Dong D, Fang MJ, Li L, Tang LL, Chen L, Li WF, Mao YP, Fan W, Liu LZ, Tian L, Lin AH, Sun Y, Tian J, Ma J. Prognostic Value of Deep Learning PET/CT-Based Radiomics: Potential Role for Future Individual Induction Chemotherapy in Advanced Nasopharyngeal Carcinoma. Clin Cancer Res 2019;25:4271-4279. [PMID: 30975664 DOI: 10.1158/1078-0432.ccr-18-3065] [Citation(s) in RCA: 223] [Impact Index Per Article: 37.2] [Received: 09/18/2018] [Revised: 02/28/2019] [Accepted: 04/08/2019] [Indexed: 11/16/2022]
Abstract
PURPOSE We aimed to evaluate the value of deep learning on positron emission tomography with computed tomography (PET/CT)-based radiomics for individual induction chemotherapy (IC) in advanced nasopharyngeal carcinoma (NPC). EXPERIMENTAL DESIGN We constructed radiomics signatures and a nomogram for predicting disease-free survival (DFS) based on features extracted from PET and CT images in a training set (n = 470), and then validated them on a test set (n = 237). Harrell's concordance index (C-index) and time-independent receiver operating characteristic (ROC) analysis were applied to evaluate the discriminatory ability of the radiomics nomogram and to compare the radiomics signatures with plasma Epstein-Barr virus (EBV) DNA. RESULTS A total of 18 features were selected to construct the CT-based and PET-based signatures, which were significantly associated with DFS (P < 0.001). Using these signatures, we proposed a radiomics nomogram with a C-index of 0.754 [95% confidence interval (95% CI), 0.709-0.800] in the training set and 0.722 (95% CI, 0.652-0.792) in the test set. Consequently, 206 patients (29.1%) were stratified into a high-risk group and the other 501 (70.9%) into a low-risk group by the radiomics nomogram; the corresponding 5-year DFS rates were 50.1% and 87.6%, respectively (P < 0.0001). High-risk patients could benefit from IC, whereas low-risk patients could not. Moreover, the radiomics nomogram performed significantly better than the EBV DNA-based model (C-index: 0.754 vs. 0.675 in the training set and 0.722 vs. 0.671 in the test set) in risk stratification and guiding IC. CONCLUSIONS Deep learning PET/CT-based radiomics could serve as a reliable and powerful tool for prognosis prediction and may act as a potential indicator for individual IC in advanced NPC.
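Harrell's C-index, the discrimination measure used above, can be computed with the lifelines package; the follow-up times, events, and risk scores below are placeholders:

```python
import numpy as np
from lifelines.utils import concordance_index

# Placeholder follow-up data: time to event (months), event indicator,
# and a nomogram risk score (higher = higher predicted risk).
times = np.array([12.0, 30.5, 24.0, 60.0, 8.0])
events = np.array([1, 0, 1, 0, 1])   # 1 = disease progression observed
risk = np.array([0.9, 0.2, 0.7, 0.1, 0.8])

# concordance_index expects predictions that increase with survival time,
# so the risk score is negated.
c = concordance_index(times, -risk, events)
print(f"C-index = {c:.3f}")
```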
Affiliation(s)
- Hao Peng
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in Southern China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong, P. R. China
- Di Dong
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, P. R. China
- University of Chinese Academy of Sciences, Beijing, P. R. China
- Meng-Jie Fang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, P. R. China
- University of Chinese Academy of Sciences, Beijing, P. R. China
- Lu Li
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, P. R. China
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, P. R. China
- Ling-Long Tang
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in Southern China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong, P. R. China
- Lei Chen
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in Southern China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong, P. R. China
- Wen-Fei Li
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in Southern China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong, P. R. China
- Yan-Ping Mao
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in Southern China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong, P. R. China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, P. R. China
- Li-Zhi Liu
- Imaging Diagnosis and Interventional Center, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in Southern China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, P. R. China
- Li Tian
- Imaging Diagnosis and Interventional Center, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in Southern China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, P. R. China
- Ai-Hua Lin
- Department of Medical Statistics and Epidemiology, School of Public Health, Sun Yat-sen University, P. R. China
- Ying Sun
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in Southern China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong, P. R. China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, P. R. China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine, Beihang University, Beijing, P. R. China
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, P. R. China
- Jun Ma
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in Southern China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong, P. R. China
362
Xu S, Wang J, Wang R, Chen J, Zou W. High-accuracy optical convolution unit architecture for convolutional neural networks by cascaded acousto-optical modulator arrays. Opt Express 2019;27:19778-19787. [PMID: 31503733 DOI: 10.1364/oe.27.019778] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Received: 04/15/2019] [Accepted: 06/13/2019] [Indexed: 06/10/2023]
Abstract
Optical neural networks (ONNs) have become competitive candidates for the next generation of high-performance neural network accelerators because of their low power consumption and high speed. Beyond the fully connected neural networks demonstrated in pioneering works, optical computing hardware can also execute convolutional neural networks (CNNs) through hardware reuse. Following this concept, we propose an optical convolution unit (OCU) architecture. By reusing the OCU with different inputs and weights, convolutions with arbitrary input sizes can be performed. A proof-of-concept experiment is carried out with cascaded acousto-optical modulator arrays. When the neural network parameters are trained ex situ, the OCU performs convolutions with an SDR of up to 28.22 dBc and performs well on inference for typical CNN tasks. Furthermore, we conduct in situ training and obtain a higher SDR of 36.27 dBc, verifying that the OCU can be further refined by in situ training. Besides its effectiveness and high accuracy, the simplified OCU architecture, serving as a building block, can easily be duplicated and integrated into future chip-scale optical CNNs.
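The core idea — performing convolutions of arbitrary input size by reusing one physical dot-product unit with different window/weight pairs — can be mimicked in plain software (a conceptual sketch, not a model of the optics):

```python
import numpy as np

def ocu(window, kernel):
    """One pass through the 'convolution unit': a single weighted sum,
    analogous to one use of the cascaded modulator array."""
    return float(np.sum(window * kernel))

def conv2d_by_reuse(image, kernel):
    """Convolve an arbitrarily sized input by repeatedly reusing the same
    unit with different window/weight pairs, as the OCU concept suggests."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = ocu(image[i:i + kh, j:j + kw], kernel)
    return out

img = np.random.rand(8, 8)
k = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)  # edge filter
print(conv2d_by_reuse(img, k).shape)  # (6, 6)
```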
363
364
Milanese G, Mannil M, Martini K, Maurer B, Alkadhi H, Frauenfelder T. Quantitative CT texture analysis for diagnosing systemic sclerosis: Effect of iterative reconstructions and radiation doses. Medicine (Baltimore) 2019;98:e16423. [PMID: 31335694 PMCID: PMC6709180 DOI: 10.1097/md.0000000000016423] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Indexed: 12/21/2022]
Abstract
To test whether texture analysis (TA) can discriminate between systemic sclerosis (SSc) and non-SSc patients in computed tomography (CT) at different radiation doses and with different reconstruction algorithms. In this IRB-approved retrospective study, 85 CT scans at different radiation doses [49 standard-dose CT (SDCT) with a volume CT dose index (CTDIvol) of 4.86 ± 2.1 mGy and 36 low-dose CT (LDCT) with a CTDIvol of 2.5 ± 1.5 mGy] were selected; 61 patients had SSc ("cases") and 24 patients did not ("controls"). CT scans were reconstructed with filtered-back projection (FBP) and with sinogram-affirmed iterative reconstruction (SAFIRE) algorithms. 304 TA features were extracted from each manually drawn region-of-interest at 6 pre-defined levels: at the midpoint between the lung apices and the tracheal carina, at the level of the tracheal carina, and at 4 levels between the carina and the pleural recesses. Each TA feature was averaged across these 6 levels and used as input to the machine-learning algorithm artificial neural network (ANN) with backpropagation (MultilayerPerceptron) for differentiating between SSc and non-SSc patients. Results were compared in terms of correctly/incorrectly classified instances and ROC-AUCs. The ANN correctly classified individuals in 93.8% (AUC = 0.981) of FBP-LDCT, 78.5% (AUC = 0.859) of FBP-SDCT, 91.1% (AUC = 0.922) of SAFIRE3-LDCT, 75.7% (AUC = 0.815) of SAFIRE3-SDCT, 88.1% (AUC = 0.929) of SAFIRE5-LDCT, and 74% (AUC = 0.815) of SAFIRE5-SDCT. Quantitative TA-based discrimination of CT scans of SSc patients is thus possible, with the highest discriminatory power in FBP-LDCT images.
Affiliation(s)
- Gianluca Milanese
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Ramistrasse, Zurich, Switzerland
- Division of Radiology, Department of Medicine and Surgery (DiMeC), University of Parma, Parma, Italy
- Manoj Mannil
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Ramistrasse, Zurich, Switzerland
- Katharina Martini
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Ramistrasse, Zurich, Switzerland
- Britta Maurer
- Division of Rheumatology, University Hospital Zurich, Ramistrasse, Zurich, Switzerland
- Hatem Alkadhi
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Ramistrasse, Zurich, Switzerland
- Thomas Frauenfelder
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Ramistrasse, Zurich, Switzerland
365
Kitrungrotsakul T, Han XH, Iwamoto Y, Lin L, Foruzan AH, Xiong W, Chen YW. VesselNet: A deep convolutional neural network with multi pathways for robust hepatic vessel segmentation. Comput Med Imaging Graph 2019;75:74-83. [DOI: 10.1016/j.compmedimag.2019.05.002] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Received: 07/14/2018] [Revised: 03/20/2019] [Accepted: 05/13/2019] [Indexed: 11/26/2022]
366
Kistenev YV, Vrazhnov DA, Nikolaev VV, Sandykova EA, Krivova NA. Analysis of Collagen Spatial Structure Using Multiphoton Microscopy and Machine Learning Methods. Biochemistry (Mosc) 2019;84:S108-S123. [PMID: 31213198 DOI: 10.1134/s0006297919140074] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Indexed: 11/22/2022]
Abstract
The pathogenesis of many diseases is associated with changes in the spatial structure of collagen. Traditionally, the 3D structure of collagen in biological tissues is analyzed using histochemistry, immunohistochemistry, magnetic resonance imaging, and X-radiography. At present, multiphoton microscopy (MPM) is commonly used to study the structure of biological tissues. MPM has a spatial resolution comparable to that of histological analysis and can be used for direct visualization of collagen spatial structure. Because of the large volume of data accumulated owing to the high spatial resolution of MPM, special analytical methods are required to identify informative features in the images and to quantitatively evaluate the relationship between these features and the pathological processes that destroy collagen structure. Here, we describe current approaches and achievements in the identification of informative features in MPM images of collagen in biological tissues, as well as the development, on this basis, of algorithms for computer-aided classification of collagen structures using machine learning, a class of artificial intelligence methods.
Affiliation(s)
- Yu V Kistenev
- Tomsk State University, Tomsk, 634050, Russia; Siberian State Medical University, Tomsk, 634050, Russia; Institute of Strength Physics and Materials Science, Siberian Branch of the Russian Academy of Sciences, Tomsk, 634055, Russia
- D A Vrazhnov
- Tomsk State University, Tomsk, 634050, Russia; Siberian State Medical University, Tomsk, 634050, Russia
- V V Nikolaev
- Tomsk State University, Tomsk, 634050, Russia; Siberian State Medical University, Tomsk, 634050, Russia
- E A Sandykova
- Tomsk State University, Tomsk, 634050, Russia; Siberian State Medical University, Tomsk, 634050, Russia
- N A Krivova
- Tomsk State University, Tomsk, 634050, Russia
367
Kumar A, Fulham M, Feng D, Kim J. Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer. IEEE Trans Med Imaging 2019;39:204-217. [PMID: 31217099 DOI: 10.1109/tmi.2019.2923601] [Citation(s) in RCA: 83] [Impact Index Per Article: 13.8] [Indexed: 05/18/2023]
Abstract
The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images for computer-aided diagnosis applications (e.g., detection and segmentation) requires combining the sensitivity of PET for detecting abnormal regions with the anatomical localization of CT. Current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the modalities, which have different priorities at different locations. For example, high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve the fusion of complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across spatial locations. The fusion maps are multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis. We evaluated the ability of our CNN to detect and segment multiple regions (lungs, mediastinum, tumors) with different fusion requirements using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image fusion (fused inputs (FS), multi-branch (MB), and multichannel (MC) techniques) and segmentation. Our findings show that our CNN had a significantly higher foreground detection accuracy (99.29%, p < 0.05) than the fusion baselines (FS: 99.00%, MB: 99.08%, MC: 98.92%) and a significantly higher Dice score (63.85%) than recent PET-CT tumor segmentation methods.
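A minimal sketch of the spatially varying fusion idea (a toy two-branch network under assumed shapes, not the authors' architecture):

```python
import torch
from torch import nn

class CoLearnFusion(nn.Module):
    """Sketch of the fusion idea above: modality-specific encoders produce
    feature maps; a learned per-location map weights each modality."""
    def __init__(self, channels=16):
        super().__init__()
        self.enc_pet = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1),
                                     nn.ReLU())
        self.enc_ct = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1),
                                    nn.ReLU())
        # Fusion-map head: 2 weights per spatial location (one per modality).
        self.fuse = nn.Conv2d(2 * channels, 2, 1)

    def forward(self, pet, ct):
        fp, fc = self.enc_pet(pet), self.enc_ct(ct)
        w = torch.softmax(self.fuse(torch.cat([fp, fc], dim=1)), dim=1)
        # Weight each modality's features by its spatially varying importance.
        fused = fp * w[:, 0:1] + fc * w[:, 1:2]
        return fused  # would feed a detection/segmentation head

pet = torch.randn(1, 1, 96, 96)   # synthetic PET slice
ct = torch.randn(1, 1, 96, 96)    # synthetic CT slice
out = CoLearnFusion()(pet, ct)
```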
368
Liu H, Wang L, Nan Y, Jin F, Wang Q, Pu J. SDFN: Segmentation-based deep fusion network for thoracic disease classification in chest X-ray images. Comput Med Imaging Graph 2019;75:66-73. [PMID: 31174100 DOI: 10.1016/j.compmedimag.2019.05.005] [Citation(s) in RCA: 51] [Impact Index Per Article: 8.5] [Received: 12/03/2018] [Revised: 04/15/2019] [Accepted: 05/24/2019] [Indexed: 10/26/2022]
Abstract
This study aims to automatically diagnose thoracic diseases depicted on chest X-ray (CXR) images using deep convolutional neural networks. Existing methods generally use the entire CXR image for training, but this strategy has two drawbacks. First, potential misalignment or irrelevant objects in the entire CXR image may introduce unnecessary noise and thus limit network performance. Second, the relatively low image resolution caused by the resizing operation, a common pre-processing step for training neural networks, may lead to the loss of image details, making it difficult to detect pathologies with small lesion regions. To address these issues, we present a novel method termed segmentation-based deep fusion network (SDFN), which leverages domain knowledge and the higher-resolution information of local lung regions. Specifically, the local lung regions are identified and cropped by the Lung Region Generator (LRG). Two CNN-based classification models are then used as feature extractors to obtain discriminative features from the entire CXR image and the cropped lung-region image. Lastly, the obtained features are fused by a feature fusion module for disease classification. Evaluated on the NIH benchmark split of the ChestX-ray14 dataset, the developed method achieved more accurate disease classification than available approaches according to receiver operating characteristic (ROC) analyses. The SDFN also localized lesion regions more precisely than the traditional method.
Affiliation(s)
- Han Liu
- Department of Bioengineering and Radiology, University of Pittsburgh, PA, 15213
- Lei Wang
- Department of Bioengineering and Radiology, University of Pittsburgh, PA, 15213
- Yandong Nan
- Department of Respiratory and Critical Care Medicine, Tangdu Hospital, Xi'an, 710038, China
- Faguang Jin
- Department of Respiratory and Critical Care Medicine, Tangdu Hospital, Xi'an, 710038, China
- Qi Wang
- Department of Radiology, The Fourth Hospital of Hebei Medical University, Hebei, 050020, China
- Jiantao Pu
- Department of Bioengineering and Radiology, University of Pittsburgh, PA, 15213
369
Cheon S, Kim J, Lim J. The Use of Deep Learning to Predict Stroke Patient Mortality. Int J Environ Res Public Health 2019;16:E1876. [PMID: 31141892 PMCID: PMC6603534 DOI: 10.3390/ijerph16111876] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Received: 04/30/2019] [Revised: 05/23/2019] [Accepted: 05/24/2019] [Indexed: 12/21/2022]
Abstract
The increase in stroke incidence with the aging of the Korean population will rapidly impose an economic burden on society. Timely treatment can improve stroke prognosis, and awareness of stroke warning signs and appropriate actions in the event of a stroke improve outcomes. Medical service use and health behavior data are easier to collect than medical imaging data. Here, we used a deep neural network to detect stroke from medical service use and health behavior data; we identified 15,099 patients with stroke. Principal component analysis (PCA) with quantile scaling was used to extract relevant background features from medical records, which we then used to predict stroke. We compared our method (a scaled PCA/deep neural network [DNN] approach) to five other machine-learning methods. The area under the curve (AUC) value of our method was 83.48%; hence, it can be used by both patients and doctors to prescreen for possible stroke.
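A compact software sketch of such a scaled-PCA/DNN pipeline with scikit-learn (the synthetic records and layer sizes are assumptions standing in for the medical service use and health behavior data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import QuantileTransformer

# Synthetic stand-in for tabular medical service use / health behavior records.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
y = (X[:, :5].sum(axis=1) + rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(
    QuantileTransformer(n_quantiles=100, output_distribution="normal"),  # quantile scaling
    PCA(n_components=20),                                                # background features
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),            # the DNN
)
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```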
Affiliation(s)
- Songhee Cheon
- Department of Physical Therapy, Youngsan University, Yangsan 626-790, Korea
- Jungyoon Kim
- Department of Computer Science, Kent State University, Kent, OH 44242, USA
- Jihye Lim
- Department of Healthcare Management, Youngsan University, Yangsan 626-790, Korea
370
Shen WC, Chen SW, Wu KC, Hsieh TC, Liang JA, Hung YC, Yeh LS, Chang WC, Lin WC, Yen KY, Kao CH. Prediction of local relapse and distant metastasis in patients with definitive chemoradiotherapy-treated cervical cancer by deep learning from [18F]-fluorodeoxyglucose positron emission tomography/computed tomography. Eur Radiol 2019;29:6741-6749. [PMID: 31134366 DOI: 10.1007/s00330-019-06265-x] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Received: 01/04/2019] [Revised: 04/18/2019] [Accepted: 05/03/2019] [Indexed: 12/22/2022]
Abstract
BACKGROUND We designed a deep learning model for assessing 18F-FDG PET/CT images for early prediction of local and distant failures in patients with locally advanced cervical cancer. METHODS All 142 patients with cervical cancer underwent 18F-FDG PET/CT for pretreatment staging and received the allocated treatment. To augment the image data, each tumor was represented as 11 slice sets, each containing 3 2D orthogonal slices, yielding a total of 1562 slice sets. In each round of k-fold cross-validation, a well-trained instance of the proposed model and a slice-based optimal threshold were derived from the training set and used to classify each slice set in the test set as with or without local or distant failure. The classification results for each tumor were then aggregated into a tumor-based prediction. RESULTS In total, 21 and 26 patients experienced local and distant failures, respectively. For local recurrence, the tumor-based predictions summarized over all test sets showed a sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of 71%, 93%, 63%, 95%, and 89%, respectively. The corresponding values for distant metastasis were 77%, 90%, 63%, 95%, and 87%. CONCLUSION This is the first study to use a deep learning model for assessing 18F-FDG PET/CT images that is capable of predicting treatment outcomes in cervical cancer patients.
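The slice-set-to-tumor aggregation step might look like the following; the abstract does not specify the exact rule, so the thresholds here are illustrative assumptions:

```python
import numpy as np

def tumor_prediction(slice_probs, slice_threshold=0.5, vote_fraction=0.5):
    """Aggregate per-slice-set failure probabilities into one tumor-level call
    (illustrative rule: flag the tumor if enough of its 11 slice sets exceed
    the slice-based threshold)."""
    positive = np.asarray(slice_probs) >= slice_threshold
    return positive.mean() >= vote_fraction

# 11 slice-set probabilities for one tumor (placeholder model outputs).
probs = [0.62, 0.71, 0.44, 0.58, 0.66, 0.39, 0.70, 0.55, 0.61, 0.48, 0.67]
print("predict local failure:", tumor_prediction(probs))
```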
Affiliation(s)
- Wei-Chih Shen
- Department of Computer Science and Information Engineering, Asia University, Taichung, Taiwan
- Shang-Wen Chen
- Department of Radiation Oncology, China Medical University Hospital, Taichung, Taiwan; School of Medicine, College of Medicine, China Medical University, Taichung, Taiwan; Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Kuo-Chen Wu
- Department of Computer Science and Information Engineering, Asia University, Taichung, Taiwan
- Te-Chun Hsieh
- Department of Nuclear Medicine and PET Center, China Medical University Hospital, Taichung, Taiwan; Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan
- Ji-An Liang
- Department of Radiation Oncology, China Medical University Hospital, Taichung, Taiwan; Graduate Institute of Biomedical Sciences, School of Medicine, College of Medicine, China Medical University, No. 2, Yuh-Der Road, Taichung, 404, Taiwan
- Yao-Ching Hung
- School of Medicine, College of Medicine, China Medical University, Taichung, Taiwan; Department of Obstetrics and Gynecology, China Medical University Hospital, Taichung, Taiwan
- Lian-Shung Yeh
- School of Medicine, College of Medicine, China Medical University, Taichung, Taiwan; Department of Obstetrics and Gynecology, China Medical University Hospital, Taichung, Taiwan
- Wei-Chun Chang
- School of Medicine, College of Medicine, China Medical University, Taichung, Taiwan; Department of Obstetrics and Gynecology, China Medical University Hospital, Taichung, Taiwan
- Wu-Chou Lin
- School of Medicine, College of Medicine, China Medical University, Taichung, Taiwan; Department of Obstetrics and Gynecology, China Medical University Hospital, Taichung, Taiwan
- Kuo-Yang Yen
- Department of Nuclear Medicine and PET Center, China Medical University Hospital, Taichung, Taiwan; Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan
- Chia-Hung Kao
- Department of Nuclear Medicine and PET Center, China Medical University Hospital, Taichung, Taiwan; Graduate Institute of Biomedical Sciences, School of Medicine, College of Medicine, China Medical University, No. 2, Yuh-Der Road, Taichung, 404, Taiwan; Department of Bioinformatics and Medical Engineering, Asia University, Taichung, Taiwan
371
Wang Y, Fang Z, Hong H. Comparison of convolutional neural networks for landslide susceptibility mapping in Yanshan County, China. Sci Total Environ 2019;666:975-993. [PMID: 30970504 DOI: 10.1016/j.scitotenv.2019.02.263] [Citation(s) in RCA: 51] [Impact Index Per Article: 8.5] [Received: 11/12/2018] [Revised: 01/17/2019] [Accepted: 02/16/2019] [Indexed: 06/09/2023]
Abstract
Assessments of landslide disasters are becoming increasingly urgent. The aim of this study is to investigate a convolutional neural network (CNN) framework for landslide susceptibility mapping (LSM) in Yanshan County, China. The two primary contributions of this study are as follows. First, to the best of our knowledge, this is the first time the CNN framework has been used for LSM. Second, different data representation algorithms are developed to construct three novel CNN architectures. In this work, sixteen influencing factors associated with landslide occurrence were considered, and historical landslide locations were randomly divided into training (70% of the total) and validation (30%) sets. Validation of these CNNs was performed using several commonly used measures, in comparison with some of the most popular machine learning and deep learning methods. The experimental results demonstrated that the proportions of highly susceptible zones in all the CNN landslide susceptibility maps are similar and below 30%, indicating that these CNNs are more practical for landslide prevention and management than conventional methods. Furthermore, the proposed CNN framework achieved higher or comparable prediction accuracy: the proposed CNNs outperformed the optimized support vector machine (SVM) by 3.94%-7.45% in overall accuracy (OA) and by 0.079-0.151 in Matthews correlation coefficient (MCC).
Affiliation(s)
- Yi Wang
- Institute of Geophysics and Geomatics, China University of Geosciences, Wuhan 430074, China
- Zhice Fang
- Institute of Geophysics and Geomatics, China University of Geosciences, Wuhan 430074, China
- Haoyuan Hong
- Key Laboratory of Virtual Geographic Environment (Nanjing Normal University), Ministry of Education, Nanjing, 210023, China; State Key Laboratory Cultivation Base of Geographical Environment Evolution (Jiangsu Province), Nanjing 210023, China; Jiangsu Centre for Collaborative Innovation in Geographic Information Resource Development and Application, Nanjing, Jiangsu 210023, China
372
Kokil P, Sudharson S. Automatic Detection of Renal Abnormalities by Off-the-shelf CNN Features. IETE J Educ 2019. [DOI: 10.1080/09747338.2019.1613936] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Indexed: 02/01/2023]
Affiliation(s)
- Priyanka Kokil
- Department of Electronics and Communication Engineering, Indian Institute of Information Technology, Design and Manufacturing, Kancheepuram, 600127 Chennai, India
- S. Sudharson
- Department of Electronics and Communication Engineering, Indian Institute of Information Technology, Design and Manufacturing, Kancheepuram, 600127 Chennai, India
373
Deep learning for liver tumor diagnosis part II: convolutional neural network interpretation using radiologic imaging features. Eur Radiol 2019;29:3348-3357. [PMID: 31093705 DOI: 10.1007/s00330-019-06214-8] [Citation(s) in RCA: 93] [Impact Index Per Article: 15.5] [Received: 02/27/2019] [Accepted: 04/02/2019] [Indexed: 02/07/2023]
Abstract
OBJECTIVES To develop a proof-of-concept "interpretable" deep learning prototype that justifies aspects of its predictions from a pre-trained hepatic lesion classifier. METHODS A convolutional neural network (CNN) was engineered and trained to classify six hepatic tumor entities using 494 lesions on multi-phasic MRI, described in Part 1. A subset of each lesion class was labeled with up to four key imaging features per lesion. A post hoc algorithm inferred the presence of these features in a test set of 60 lesions by analyzing activation patterns of the pre-trained CNN model. Feature maps were generated to highlight regions in the original image corresponding to particular features. Additionally, relevance scores were assigned to each identified feature, denoting its relative contribution to the predicted lesion classification. RESULTS The interpretable deep learning system achieved 76.5% positive predictive value and 82.9% sensitivity in identifying the correct radiological features present in each test lesion. The model misclassified 12% of lesions; identified features were correct less often in misclassified lesions than in correctly classified ones (60.4% vs. 85.6%). Feature maps were consistent with the original image voxels contributing to each imaging feature. Feature relevance scores tended to reflect the most prominent imaging criteria for each class. CONCLUSIONS This interpretable deep learning system demonstrates proof of principle for illuminating portions of a pre-trained deep neural network's decision-making by analyzing inner layers and automatically describing the features contributing to predictions. KEY POINTS • An interpretable deep learning system prototype can explain aspects of its decision-making by identifying relevant imaging features and showing where these features are found on an image, facilitating clinical translation. • By providing feedback on the importance of various radiological features in performing differential diagnosis, interpretable deep learning systems have the potential to interface with standardized reporting systems such as LI-RADS, validating ancillary features and improving clinical practicality. • An interpretable deep learning system could add quantitative data to radiologic reports and serve radiologists with evidence-based decision support.
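As a stand-in for the described post hoc analysis of activation patterns, a gradient-weighted activation map (Grad-CAM-style) conveys the flavor of such feature maps; this generic sketch is not the authors' algorithm:

```python
import torch
from torch import nn

# Tiny CNN standing in for the pre-trained lesion classifier.
conv = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
head = nn.Linear(8, 6)  # six hepatic lesion classes

x = torch.randn(1, 1, 64, 64)             # synthetic MRI patch
feats = conv(x)                           # B x 8 x H x W activations
score = head(feats.mean(dim=(2, 3)))[0]   # class scores
cls = score.argmax()

# Gradient of the predicted class score w.r.t. the feature maps gives
# per-channel weights; the weighted activation sum is a coarse relevance
# map highlighting image regions that drove the prediction.
grads = torch.autograd.grad(score[cls], feats)[0]
weights = grads.mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * feats).sum(dim=1))  # 1 x H x W relevance map
```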
|
374
|
Gupta RK, Chen M, Malcolm GPA, Hempler N, Dholakia K, Powis SJ. Label-free optical hemogram of granulocytes enhanced by artificial neural networks. OPTICS EXPRESS 2019; 27:13706-13720. [PMID: 31163830 DOI: 10.1364/oe.27.013706] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/13/2019] [Accepted: 03/23/2019] [Indexed: 06/09/2023]
Abstract
An outstanding challenge for immunology is the classification of immune cells in a label-free fashion with high speed. For this purpose, optical techniques such as Raman spectroscopy or digital holographic microscopy have been used successfully to identify immune cell subsets. To achieve high accuracy, these techniques require a post-processing step using linear methods of multivariate processing, such as principal component analysis. Here we demonstrate, for the first time, a comparison between artificial neural networks and principal component analysis (PCA) for classifying the key granulocyte cell lineages of neutrophils and eosinophils using both digital holographic microscopy and Raman spectroscopy. Artificial neural networks can offer advantages in terms of classification accuracy and speed over a PCA approach. We conclude that digital holographic microscopy with convolutional neural network-based analysis provides a route to a robust, stand-alone and high-throughput hemogram with a classification accuracy of 91.3% at a throughput rate of greater than 100 cells per second.
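A minimal sketch of the kind of comparison described, using simulated data in place of Raman spectra or holographic phase images; all sizes, models, and hyperparameters here are assumptions.

```python
# Compare PCA + a linear model against a small neural network on simulated
# two-class "spectra" (a stand-in for neutrophil/eosinophil measurements).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=2000, n_features=300, n_informative=40,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

pca_clf = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
mlp_clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)

print("PCA + logistic:", pca_clf.fit(Xtr, ytr).score(Xte, yte))
print("neural network:", mlp_clf.fit(Xtr, ytr).score(Xte, yte))
```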
|
375
|
Hu J, Chen Y, Yi Z. Automated segmentation of macular edema in OCT using deep neural networks. Med Image Anal 2019; 55:216-227. [PMID: 31096135 DOI: 10.1016/j.media.2019.05.002] [Citation(s) in RCA: 43] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2018] [Revised: 04/23/2019] [Accepted: 05/09/2019] [Indexed: 11/29/2022]
Abstract
Macular edema is an eye disease that can affect visual acuity. Typical disease symptoms include subretinal fluid (SRF) and pigment epithelium detachment (PED). Optical coherence tomography (OCT) has been widely used for diagnosing macular edema because of its non-invasive and high-resolution properties. Segmentation of macular edema lesions from OCT images plays an important role in clinical diagnosis, and many computer-aided systems have been proposed for this task. Most traditional segmentation methods used in these systems are based on low-level hand-crafted features, which require significant domain knowledge and are sensitive to the variations of lesions. To overcome these shortcomings, this paper proposes to use deep neural networks (DNNs) together with atrous spatial pyramid pooling (ASPP) to automatically segment the SRF and PED lesions. Lesion-related features are first extracted by DNNs, then processed by ASPP, which is composed of multiple atrous convolutions with different fields of view to accommodate the various scales of the lesions. Based on ASPP, a novel module called stochastic ASPP (sASPP) is proposed to combat the co-adaptation of multiple atrous convolutions. A large OCT dataset provided by a competition platform called "AI Challenger" is used to train and evaluate the proposed model. Experimental results demonstrate that the DNNs together with ASPP achieve higher segmentation accuracy compared with the state-of-the-art method. The stochastic operation added in sASPP is empirically verified as an effective regularization method that can alleviate the overfitting problem and significantly reduce the validation error.
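A toy PyTorch rendering of an ASPP block with stochastic branch dropping, approximating the sASPP idea; the dilation rates, drop probability, and projection layer are assumptions rather than the paper's configuration.

```python
# Minimal ASPP block whose atrous branches are randomly silenced during
# training, discouraging co-adaptation between the parallel convolutions.
import torch
import torch.nn as nn

class StochasticASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18), drop_p=0.3):
        super().__init__()
        # One 3x3 atrous convolution per dilation rate (field of view).
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.drop_p = drop_p
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        outs = []
        for branch in self.branches:
            y = branch(x)
            if self.training and torch.rand(1).item() < self.drop_p:
                y = torch.zeros_like(y)  # stochastically silence this branch
            outs.append(y)
        return self.project(torch.cat(outs, dim=1))

feat = torch.randn(2, 64, 32, 32)           # feature map from a backbone DNN
print(StochasticASPP(64, 128)(feat).shape)  # -> torch.Size([2, 128, 32, 32])
```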
Affiliation(s)
- Junjie Hu: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
- Yuanyuan Chen: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
- Zhang Yi: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
|
376
|
Chemchem A, Alin F, Krajecki M. Improving the Cognitive Agent Intelligence by Deep Knowledge Classification. INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE AND APPLICATIONS 2019. [DOI: 10.1142/s1469026819500056] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
This paper develops a new idea for improving agent intelligence: with the presented convolutional neural network (CNN) approach to knowledge classification, the agent becomes able to manage its own knowledge. This new concept allows the agent to select only the actionable rule class, instead of trying to infer its whole rule base exhaustively. In addition, we present a comparative study between the proposed CNN approach and classical classification approaches. As expected, the deep learning method outperforms the others in terms of classification accuracy.
Affiliation(s)
- Amine Chemchem: CReSTIC Center, University of Reims Champagne-Ardenne, Campus Moulin de la Housse BP 1039, 51687 Reims Cedex 2, France
- François Alin: CReSTIC Center, University of Reims Champagne-Ardenne, Campus Moulin de la Housse BP 1039, 51687 Reims Cedex 2, France
- Michael Krajecki: CReSTIC Center, University of Reims Champagne-Ardenne, Campus Moulin de la Housse BP 1039, 51687 Reims Cedex 2, France
|
377
|
Liu F, Guan B, Zhou Z, Samsonov A, Rosas H, Lian K, Sharma R, Kanarek A, Kim J, Guermazi A, Kijowski R. Fully Automated Diagnosis of Anterior Cruciate Ligament Tears on Knee MR Images by Using Deep Learning. Radiol Artif Intell 2019; 1:180091. [PMID: 32076658 DOI: 10.1148/ryai.2019180091] [Citation(s) in RCA: 78] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2018] [Revised: 03/26/2019] [Accepted: 04/04/2019] [Indexed: 12/21/2022]
Abstract
Purpose To investigate the feasibility of using a deep learning-based approach to detect an anterior cruciate ligament (ACL) tear within the knee joint at MRI by using arthroscopy as the reference standard. Materials and Methods A fully automated deep learning-based diagnosis system was developed by using two deep convolutional neural networks (CNNs) to isolate the ACL on MR images followed by a classification CNN to detect structural abnormalities within the isolated ligament. With institutional review board approval, sagittal proton density-weighted and fat-suppressed T2-weighted fast spin-echo MR images of the knee in 175 subjects with a full-thickness ACL tear (98 male subjects and 77 female subjects; average age, 27.5 years) and 175 subjects with an intact ACL (100 male subjects and 75 female subjects; average age, 39.4 years) were retrospectively analyzed by using the deep learning approach. Sensitivity and specificity of the ACL tear detection system and five clinical radiologists for detecting an ACL tear were determined by using arthroscopic results as the reference standard. Receiver operating characteristic (ROC) analysis and two-sided exact binomial tests were used to further assess diagnostic performance. Results The sensitivity and specificity of the ACL tear detection system at the optimal threshold were 0.96 and 0.96, respectively. In comparison, the sensitivity of the clinical radiologists ranged between 0.96 and 0.98, while the specificity ranged between 0.90 and 0.98. There was no statistically significant difference in diagnostic performance between the ACL tear detection system and clinical radiologists at P < .05. The area under the ROC curve for the ACL tear detection system was 0.98, indicating high overall diagnostic accuracy. Conclusion There was no significant difference between the diagnostic performance of the ACL tear detection system and clinical radiologists for determining the presence or absence of an ACL tear at MRI. © RSNA, 2019. Supplemental material is available for this article.
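The reported metrics can be computed from predictions and an arthroscopic reference standard in a few lines; the sketch below uses fabricated labels and scores purely to show the calculation.

```python
# Sensitivity, specificity, and ROC AUC against a binary reference standard.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=350)                 # 1 = ACL tear at arthroscopy
scores = np.clip(y_true * 0.7 + rng.normal(0.2, 0.25, 350), 0, 1)
y_pred = (scores >= 0.5).astype(int)                  # an assumed operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("ROC AUC:    ", roc_auc_score(y_true, scores))
```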
Affiliation(s)
- Fang Liu, Bochen Guan, Zhaoye Zhou, Alexey Samsonov, Humberto Rosas, Kevin Lian, Ruchi Sharma, Andrew Kanarek, John Kim, Ali Guermazi, Richard Kijowski: Department of Radiology, University of Wisconsin School of Medicine and Public Health, 600 Highland Avenue, Madison, WI 53705 (F.L., B.G., A.S., H.R., K.L., R.S., A.K., J.K., R.K.); Department of Electrical and Computer Engineering, University of Wisconsin School of Engineering, Madison, Wis (B.G.); Department of Biomedical Engineering, University of Minnesota, Minneapolis, Minn (Z.Z.); and Department of Radiology, Boston University School of Medicine, Boston, Mass (A.G.)
|
378
|
Ning Z, Luo J, Li Y, Han S, Feng Q, Xu Y, Chen W, Chen T, Zhang Y. Pattern Classification for Gastrointestinal Stromal Tumors by Integration of Radiomics and Deep Convolutional Features. IEEE J Biomed Health Inform 2019; 23:1181-1191. [PMID: 29993591 DOI: 10.1109/jbhi.2018.2841992] [Citation(s) in RCA: 69] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Predicting malignant potential is one of the most critical components of a computer-aided diagnosis system for gastrointestinal stromal tumors (GISTs). These tumors have been studied only on the basis of subjective computed tomography findings. Among various methodologies, radiomics and deep learning algorithms, specifically convolutional neural networks (CNNs), have recently been confirmed to achieve significant success by outperforming the state-of-the-art performance in medical image pattern classification and have rapidly become leading methodologies in this field. However, existing methods generally use radiomics or deep convolutional features independently for pattern classification, which tend to take into account only global or local features, respectively. In this paper, we introduce and evaluate a hybrid structure that includes different features selected with a radiomics model and CNNs and integrates these features for GIST classification. The radiomics model and the CNNs are constructed for global radiomics and local convolutional feature selection, respectively. Subsequently, we utilize the distinct radiomics and deep convolutional features to perform pattern classification for GISTs. Specifically, we propose a new pooling strategy to assemble the deep convolutional features of 54 three-dimensional patches from the same case and integrate these features with the radiomics features for each case, followed by a random forest classifier. The method was extensively evaluated on multiple clinical datasets. The classification performance (area under the curve (AUC): 0.882; 95% confidence interval (CI): 0.816-0.947) consistently outperforms those of the independent radiomics (AUC: 0.807; 95% CI: 0.724-0.892) and CNN (AUC: 0.826; 95% CI: 0.795-0.856) approaches.
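A hedged sketch of the fusion scheme as described: pool the deep features of a case's 54 patches, concatenate them with the radiomics features, and classify with a random forest. The feature dimensions, the mean-pooling choice, and the labels are illustrative assumptions, not the authors' exact pipeline.

```python
# Fuse case-level radiomics features with pooled patch-level deep features,
# then classify with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_cases = 100

# 54 patch-level deep feature vectors per case, pooled into one case vector.
deep_patches = rng.normal(size=(n_cases, 54, 256))
deep_case = deep_patches.mean(axis=1)            # simple mean pooling

radiomics = rng.normal(size=(n_cases, 80))       # global radiomics features
fused = np.hstack([radiomics, deep_case])        # per-case hybrid feature

labels = rng.integers(0, 2, size=n_cases)        # toy malignant-potential labels
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(fused, labels)
print(clf.predict_proba(fused[:3])[:, 1])
```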
|
379
|
Abstract
OBJECTIVE. The goal of this article is to examine some of the current cardiothoracic radiology applications of artificial intelligence in general and deep learning in particular. CONCLUSION. Artificial intelligence has been used for the analysis of medical images for decades. Recent advances in computer algorithms and hardware, coupled with the availability of larger labeled datasets, have brought about rapid advances in this field. Many of the more notable recent advances have been in the artificial intelligence subfield of deep learning.
|
380
|
Zhang Z, Sejdić E. Radiological images and machine learning: Trends, perspectives, and prospects. Comput Biol Med 2019; 108:354-370. [PMID: 31054502 PMCID: PMC6531364 DOI: 10.1016/j.compbiomed.2019.02.017] [Citation(s) in RCA: 77] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2018] [Revised: 02/19/2019] [Accepted: 02/19/2019] [Indexed: 01/18/2023]
Abstract
The application of machine learning to radiological images is an increasingly active research area that is expected to grow in the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as x-rays, computed tomography, magnetic resonance imaging and positron emission tomography imaging. In many applications, machine learning-based systems have shown performance comparable to human decision-making. The applications of machine learning are key ingredients of future clinical decision making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. In parallel, we briefly discuss current challenges and future directions regarding the application of machine learning in radiological imaging. By giving insight into how to take advantage of machine learning-powered applications, we expect that clinicians will be able to prevent and diagnose diseases more accurately and efficiently.
Affiliation(s)
- Zhenwei Zhang: Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
- Ervin Sejdić: Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
|
381
|
Sridar P, Kumar A, Quinton A, Nanan R, Kim J, Krishnakumar R. Decision Fusion-Based Fetal Ultrasound Image Plane Classification Using Convolutional Neural Networks. ULTRASOUND IN MEDICINE & BIOLOGY 2019; 45:1259-1273. [PMID: 30826153 DOI: 10.1016/j.ultrasmedbio.2018.11.016] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2017] [Revised: 11/26/2018] [Accepted: 11/29/2018] [Indexed: 06/09/2023]
Abstract
Machine learning for ultrasound image analysis and interpretation can be helpful in automated image classification in large-scale retrospective analyses to objectively derive new indicators of abnormal fetal development that are embedded in ultrasound images. Current approaches to automatic classification are limited to the use of either image patches (cropped images) or the global (whole) image. As many fetal organs have similar visual features, classifiers based on cropped images can confuse certain structures, such as the kidneys and abdomen. Conversely, the whole image does not encode sufficient local information to identify different structures in different locations. Here we propose a method to automatically classify 14 different fetal structures in 2-D fetal ultrasound images by fusing information from both cropped regions of fetal structures and the whole image. Our method trains two feature extractors by fine-tuning pre-trained convolutional neural networks with the whole ultrasound fetal images and the discriminant regions of the fetal structures found in the whole image. The novelty of our method lies in integrating the classification decisions made from the global and local features without relying on priors. In addition, our method can use the classification outcome to localize the fetal structures in the image. Our experiments on a data set of 4074 2-D ultrasound images (training: 3109, test: 965) achieved a mean accuracy of 97.05%, mean precision of 76.47% and mean recall of 75.41%. A Cohen κ of 0.72 indicated substantial agreement between the ground truth and the proposed method. The superiority of the proposed method over the other non-fusion-based methods is statistically significant (p < 0.05). We found that our method is capable of classifying images without ultrasound scanner overlays with a mean accuracy of 92%. The proposed method can be leveraged to retrospectively classify any ultrasound images in clinical research.
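A minimal sketch of prior-free decision fusion in the spirit of the description above: average the class posteriors of a whole-image classifier and a cropped-region classifier. The fabricated logits stand in for the outputs of the two fine-tuned CNN heads.

```python
# Fuse the softmax outputs of a "global" and a "local" classifier without
# priors by simple unweighted averaging.
import numpy as np

n_classes = 14
rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

p_global = softmax(rng.normal(size=(5, n_classes)))  # whole-image CNN posteriors
p_local = softmax(rng.normal(size=(5, n_classes)))   # cropped-region CNN posteriors

p_fused = 0.5 * (p_global + p_local)                 # unweighted fusion
print("fused predictions:", p_fused.argmax(axis=1))
```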
Affiliation(s)
- Pradeeba Sridar: Department of Engineering Design, Indian Institute of Technology Madras, India; School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
- Ashnil Kumar: School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
- Ann Quinton: Sydney Medical School, University of Sydney, Sydney, New South Wales, Australia
- Ralph Nanan: Sydney Medical School, University of Sydney, Sydney, New South Wales, Australia
- Jinman Kim: School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
|
382
|
Vigneron V, Maaref H. M-ary Rank Classifier Combination: A Binary Linear Programming Problem. ENTROPY 2019; 21:e21050440. [PMID: 33267154 PMCID: PMC7514928 DOI: 10.3390/e21050440] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Revised: 04/11/2019] [Accepted: 04/18/2019] [Indexed: 11/16/2022]
Abstract
The goal of classifier combination can be briefly stated as combining the decisions of individual classifiers to obtain a better classifier. In this paper, we propose a method based on the combination of weak rank classifiers, because rankings contain more information than unique choices for a many-class problem. The problem of combining the decisions of more than one classifier with raw outputs in the form of candidate class rankings is considered and formulated as a general discrete optimization problem with an objective function based on the distance between the data and the consensus decision. This formulation uses certain performance statistics about the joint behavior of the ensemble of classifiers. Assuming that each classifier produces a ranking list of classes, an initial approach leads to a binary linear programming problem with a simple and globally optimal solution. The consensus function can be considered as a mapping from a set of individual rankings to a combined ranking, leading to the most relevant decision. We also propose an information measure that quantifies the degree of consensus between the classifiers to assess the strength of the combination rule that is used. The method is easy to implement and does not require any training. The main conclusion is that the classification rate is strongly improved by combining rank classifiers globally. The proposed algorithm is tested on real cytology image data to detect cervical cancer.
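When the objective decomposes into a cost for placing each class at each consensus position, a binary linear program of this kind reduces to an assignment problem, which the following sketch solves exactly with the Hungarian algorithm. The rankings and the L1 rank distance are illustrative assumptions, not the paper's precise formulation.

```python
# Consensus ranking that minimizes total disagreement with individual
# rankings, cast as a class-to-position assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Three weak classifiers, each ranking 4 classes (rank 0 = most likely).
rankings = np.array([[0, 1, 2, 3],
                     [1, 0, 2, 3],
                     [0, 2, 1, 3]])  # rankings[k, c] = rank of class c by clf k

n_classes = rankings.shape[1]
positions = np.arange(n_classes)
# cost[c, p] = total disagreement if class c is placed at consensus position p
cost = np.abs(rankings[:, :, None] - positions[None, None, :]).sum(axis=0)

classes, consensus_pos = linear_sum_assignment(cost)
order = classes[np.argsort(consensus_pos)]
print("consensus ranking (best first):", order)  # -> [0 1 2 3]
```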
Affiliation(s)
- Hichem Maaref
- Correspondence: (V.V.); (H.M.); Tel.: +33-6-635-687-60 (V.V.)
|
383
|
Aprupe L, Litjens G, Brinker TJ, van der Laak J, Grabe N. Robust and accurate quantification of biomarkers of immune cells in lung cancer micro-environment using deep convolutional neural networks. PeerJ 2019; 7:e6335. [PMID: 30993030 PMCID: PMC6462181 DOI: 10.7717/peerj.6335] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2018] [Accepted: 12/23/2018] [Indexed: 01/24/2023] Open
Abstract
Recent years have seen a growing awareness of the role the immune system plays in successful cancer treatment, especially in novel therapies like immunotherapy. The characterization of the immunological composition of tumors and their micro-environment is thus becoming a necessity. In this paper we introduce a deep learning-based immune cell detection and quantification method based on supervised learning, i.e., the input data for training comprises labeled images. Our approach objectively deals with staining variation and staining artifacts in immunohistochemically stained lung cancer tissue and is as precise as human observers, as evidenced by a cell count difference from human counts of only 0.033 cells on average. This method, which is based on convolutional neural networks, has the potential to provide a new quantitative basis for research on immunotherapy.
Affiliation(s)
- Lilija Aprupe: Hamamatsu Tissue Imaging and Analysis (TIGA) Center, BioQuant, Heidelberg University, Heidelberg, Germany; Department of Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
- Geert Litjens: Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands; Steinbeis Center for Medical Systems Biology (STCMSB), Heidelberg, Germany
- Titus J Brinker: Department of Dermatology and National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
- Jeroen van der Laak: Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Niels Grabe: Hamamatsu Tissue Imaging and Analysis (TIGA) Center, BioQuant, Heidelberg University, Heidelberg, Germany; Department of Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany; Steinbeis Center for Medical Systems Biology (STCMSB), Heidelberg, Germany
|
384
|
Savadjiev P, Chong J, Dohan A, Agnus V, Forghani R, Reinhold C, Gallix B. Image-based biomarkers for solid tumor quantification. Eur Radiol 2019; 29:5431-5440. [PMID: 30963275 DOI: 10.1007/s00330-019-06169-w] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2018] [Revised: 02/25/2019] [Accepted: 03/14/2019] [Indexed: 02/06/2023]
Abstract
The last few decades have witnessed tremendous technological developments in image-based biomarkers for tumor quantification and characterization. Initially limited to manual one- and two-dimensional size measurements, image biomarkers have evolved to harness developments not only in image acquisition technology but also in image processing and analysis algorithms. At the same time, clinical validation remains a major challenge for the vast majority of these novel techniques, and there is still a major gap between the latest technological developments and image biomarkers used in everyday clinical practice. Currently, the imaging biomarker field is attracting increasing attention not only because of the tremendous interest in cutting-edge therapeutic developments and personalized medicine but also because of the recent progress in the application of artificial intelligence (AI) algorithms to large-scale datasets. Thus, the goal of the present article is to review the current state of the art for image biomarkers and their use for characterization and predictive quantification of solid tumors. Beginning with an overview of validated imaging biomarkers in current clinical practice, we proceed to a review of AI-based methods for tumor characterization, such as radiomics-based approaches and deep learning. KEY POINTS • Recent years have seen tremendous technological developments in image-based biomarkers for tumor quantification and characterization. • Image-based biomarkers can be used on an ongoing basis, in a non-invasive (or mildly invasive) way, to monitor the development and progression of the disease or its response to therapy. • We review the current state of the art for image biomarkers, as well as the recent developments in artificial intelligence (AI) algorithms for image processing and analysis.
Affiliation(s)
- Peter Savadjiev: Department of Diagnostic Radiology, McGill University, Montreal, QC, Canada
- Jaron Chong: Department of Diagnostic Radiology, McGill University Health Centre, McGill University, 1001 Décarie Boulevard, Montreal, QC, H4A 3J1, Canada
- Anthony Dohan: Department of Diagnostic Radiology, McGill University Health Centre, McGill University, 1001 Décarie Boulevard, Montreal, QC, H4A 3J1, Canada; Department of Body and Interventional Imaging, Hôpital Lariboisière-AP-HP, Université Diderot-Paris 7 and INSERM U965, 2 rue Ambroise Paré, 75475, Paris Cedex 10, France
- Vincent Agnus: Institut de chirurgie guidée par l'image IHU Strasbourg, 1, place de l'Hôpital, 67091, Strasbourg Cedex, France
- Reza Forghani: Department of Diagnostic Radiology, McGill University Health Centre, McGill University, 1001 Décarie Boulevard, Montreal, QC, H4A 3J1, Canada; Department of Radiology, Jewish General Hospital, 3755 Chemin de la Côte-Sainte-Catherine, Montreal, QC, H3T 1E2, Canada
- Caroline Reinhold: Department of Diagnostic Radiology, McGill University Health Centre, McGill University, 1001 Décarie Boulevard, Montreal, QC, H4A 3J1, Canada
- Benoit Gallix: Department of Diagnostic Radiology, McGill University Health Centre, McGill University, 1001 Décarie Boulevard, Montreal, QC, H4A 3J1, Canada; Institut de chirurgie guidée par l'image IHU Strasbourg, 1, place de l'Hôpital, 67091, Strasbourg Cedex, France
|
385
|
Tatsugami F, Higaki T, Nakamura Y, Yu Z, Zhou J, Lu Y, Fujioka C, Kitagawa T, Kihara Y, Iida M, Awai K. Deep learning-based image restoration algorithm for coronary CT angiography. Eur Radiol 2019; 29:5322-5329. [PMID: 30963270 DOI: 10.1007/s00330-019-06183-y] [Citation(s) in RCA: 172] [Impact Index Per Article: 28.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2018] [Revised: 03/09/2019] [Accepted: 03/19/2019] [Indexed: 12/22/2022]
Abstract
OBJECTIVES The purpose of this study was to compare the image quality of coronary computed tomography angiography (CTA) subjected to deep learning-based image restoration (DLR) method with images subjected to hybrid iterative reconstruction (IR). METHODS We enrolled 30 patients (22 men, 8 women) who underwent coronary CTA on a 320-slice CT scanner. The images were reconstructed with hybrid IR and with DLR. The image noise in the ascending aorta, left atrium, and septal wall of the ventricle was measured on all images and the contrast-to-noise ratio (CNR) in the proximal coronary arteries was calculated. We also generated CT attenuation profiles across the proximal coronary arteries and measured the width of the edge rise distance (ERD) and the edge rise slope (ERS). Two observers visually evaluated the overall image quality using a 4-point scale (1 = poor, 4 = excellent). RESULTS On DLR images, the mean image noise was lower than that on hybrid IR images (18.5 ± 2.8 HU vs. 23.0 ± 4.6 HU, p < 0.01) and the CNR was significantly higher (p < 0.01). The mean ERD was significantly shorter on DLR than on hybrid IR images, whereas the mean ERS was steeper on DLR than on hybrid IR images. The mean image quality score for hybrid IR and DLR images was 2.96 and 3.58, respectively (p < 0.01). CONCLUSIONS DLR reduces the image noise and improves the image quality at coronary CTA. KEY POINTS • Deep learning-based image restoration is a new technique that employs the deep convolutional neural network for image quality improvement. • Deep learning-based restoration reduces the image noise and improves image quality at coronary CT angiography. • This method may allow for a reduction in radiation exposure.
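The image-quality metrics used above are straightforward to compute from regions of interest: noise as the HU standard deviation in a homogeneous ROI, and CNR from vessel and background ROIs. The arrays below are synthetic stand-ins for real CT values.

```python
# Noise and contrast-to-noise ratio (CNR) from two regions of interest.
import numpy as np

rng = np.random.default_rng(0)
aorta_roi = rng.normal(400, 19, size=(20, 20))   # contrast-filled vessel (HU)
septum_roi = rng.normal(50, 19, size=(20, 20))   # background tissue (HU)

noise = septum_roi.std()                          # image noise in HU
cnr = (aorta_roi.mean() - septum_roi.mean()) / noise
print(f"noise = {noise:.1f} HU, CNR = {cnr:.1f}")
```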
Affiliation(s)
- Fuminari Tatsugami, Toru Higaki, Yuko Nakamura, Makoto Iida, Kazuo Awai: Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
- Zhou Yu, Jian Zhou, Yujie Lu: Canon Medical Research USA, Inc., 706 N Deerpath Drive, Vernon Hills, IL, 60061, USA
- Chikako Fujioka: Department of Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
- Toshiro Kitagawa, Yasuki Kihara: Department of Cardiovascular Medicine, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
|
386
|
Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence. J Clin Med 2019; 8:jcm8040462. [PMID: 30959798 PMCID: PMC6518303 DOI: 10.3390/jcm8040462] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2019] [Revised: 04/02/2019] [Accepted: 04/03/2019] [Indexed: 02/07/2023] Open
Abstract
Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. However, this problem can be resolved by exploring similar cases in an existing medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. In current practice, a physician often refers to several types of imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound of various organs, for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for a CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance for a massive collection of multimodal databases. Although there are a few previous studies on the use of deep features for classification, the number of classes is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various types of imaging modalities using an artificial intelligence technique, an enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1-score of our method are 81.51% and 82.42%, respectively, which are higher than those of the previous CBMIR method (accuracy of 69.71% and F1-score of 69.63%).
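A generic sketch of retrieval with a pre-trained ResNet backbone (not the enhanced ResNet of the study): embed images, then rank database entries by cosine similarity to the query. It assumes a recent torchvision with the weights enum API.

```python
# Embed images with a pre-trained ResNet-50 and retrieve nearest neighbours.
import torch
import torchvision.models as models

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
embedder = torch.nn.Sequential(*list(resnet.children())[:-1])  # drop final FC
embedder.eval()

images = torch.randn(8, 3, 224, 224)   # stand-in for CT/MRI/X-ray slices
query = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    db = embedder(images).flatten(1)   # (8, 2048) database embeddings
    q = embedder(query).flatten(1)     # (1, 2048) query embedding

sims = torch.nn.functional.cosine_similarity(q, db)
print("most similar database images:", sims.topk(3).indices.tolist())
```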
|
387
|
Zhang K, Zhang L, Wu Q. Identification of Cherry Leaf Disease Infected by Podosphaera Pannosa via Convolutional Neural Network. INTERNATIONAL JOURNAL OF AGRICULTURAL AND ENVIRONMENTAL INFORMATION SYSTEMS 2019. [DOI: 10.4018/ijaeis.2019040105] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Cherry leaves infected by Podosphaera pannosa develop powdery mildew, a serious disease threatening the cherry production industry. To identify diseased cherry leaves at an early stage, the authors formulate infected-leaf identification as a classification problem and propose a fully automatic identification method based on a convolutional neural network (CNN). GoogLeNet is used as the backbone of the CNN, and transfer learning is applied to fine-tune it from a GoogLeNet pre-trained on the ImageNet dataset. This article compares the proposed method against three traditional machine learning methods, i.e., support vector machine (SVM), k-nearest neighbor (KNN), and back-propagation (BP) neural network. Quantitative evaluation on a dataset of 1,200 smartphone-collected images demonstrates that the CNN achieves the best performance in identifying diseased cherry leaves, with a testing accuracy of 99.6%. Thus, a CNN can be used effectively to identify diseased cherry leaves.
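A hedged sketch of the transfer-learning recipe described: load an ImageNet-pretrained GoogLeNet and fine-tune it for the leaf classes. The class count, optimizer, and hyperparameters are assumptions; it also assumes a recent torchvision, whose builder drops the auxiliary classifiers when pretrained weights are loaded.

```python
# Fine-tune an ImageNet-pretrained GoogLeNet for a 2-class leaf problem.
import torch
import torchvision.models as models

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # healthy vs. diseased

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)   # stand-in for smartphone leaf photos
labels = torch.tensor([0, 1, 0, 1])

model.train()
loss = criterion(model(images), labels)  # one toy fine-tuning step
loss.backward()
optimizer.step()
print("loss:", loss.item())
```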
Affiliation(s)
- Keke Zhang: College of Engineering, Northeast Agricultural University, Harbin, China
- Lei Zhang: Department of Radiology, University of Pittsburgh, Pittsburgh, USA
- Qiufeng Wu: College of Science, Northeast Agricultural University, Harbin, China
|
388
|
Nida N, Irtaza A, Javed A, Yousaf MH, Mahmood MT. Melanoma lesion detection and segmentation using deep region based convolutional neural network and fuzzy C-means clustering. Int J Med Inform 2019; 124:37-48. [DOI: 10.1016/j.ijmedinf.2019.01.005] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2018] [Revised: 01/05/2019] [Accepted: 01/08/2019] [Indexed: 10/27/2022]
|
389
|
Tang B, Pan Z, Yin K, Khateeb A. Recent Advances of Deep Learning in Bioinformatics and Computational Biology. Front Genet 2019; 10:214. [PMID: 30972100 PMCID: PMC6443823 DOI: 10.3389/fgene.2019.00214] [Citation(s) in RCA: 89] [Impact Index Per Article: 14.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2018] [Accepted: 02/27/2019] [Indexed: 01/18/2023] Open
Abstract
Extracting inherent valuable knowledge from omics big data remains a daunting problem in bioinformatics and computational biology. Deep learning, an emerging branch of machine learning, has exhibited unprecedented performance in quite a few applications from academia and industry. We highlight the differences and similarities among widely utilized deep learning models by discussing their basic structures and reviewing their diverse applications and disadvantages. We anticipate that this work can serve as a meaningful perspective for further development of deep learning theory, algorithms, and applications in bioinformatics and computational biology.
Affiliation(s)
- Binhua Tang: Epigenetics & Function Group, Hohai University, Nanjing, China; School of Public Health, Shanghai Jiao Tong University, Shanghai, China
- Zixiang Pan: Epigenetics & Function Group, Hohai University, Nanjing, China
- Kang Yin: Epigenetics & Function Group, Hohai University, Nanjing, China
- Asif Khateeb: Epigenetics & Function Group, Hohai University, Nanjing, China
|
390
|
Classification of Pulmonary CT Images by Using Hybrid 3D-Deep Convolutional Neural Network Architecture. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9050940] [Citation(s) in RCA: 52] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Lung cancer is the most common cause of cancer-related deaths worldwide; hence, the survival rate of patients can be increased by early diagnosis. Recently, machine learning methods applied to Computed Tomography (CT) images have been used in the diagnosis of lung cancer to accelerate the diagnostic process and assist physicians. However, in conventional machine learning, handcrafted feature extraction from CT images is a complicated process. Deep learning, an effective branch of machine learning with automatic feature extraction, can minimize this step. In this study, two Convolutional Neural Network (CNN)-based models were proposed as deep learning methods to diagnose lung cancer on lung CT images. To investigate the performance of the two proposed models (straight 3D-CNN with conventional softmax and hybrid 3D-CNN with Radial Basis Function (RBF)-based SVM), altered models of two well-known CNN architectures (3D-AlexNet and 3D-GoogleNet) were considered. Experimental results showed that the performance of the two proposed models surpassed 3D-AlexNet and 3D-GoogleNet. Furthermore, the proposed hybrid 3D-CNN with SVM achieved more satisfying results (91.81%, 88.53% and 91.91% for accuracy rate, sensitivity and precision, respectively) compared to the straight 3D-CNN with softmax in the diagnosis of lung cancer.
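The hybrid head can be sketched as follows: deep features (simulated here) feed an RBF-kernel SVM in place of a softmax layer. The feature sizes, labels, and SVM hyperparameters are illustrative assumptions.

```python
# Replace a CNN's softmax head with an RBF-kernel SVM on extracted features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
cnn_features = rng.normal(size=(600, 128))   # per-scan deep features (toy)
labels = rng.integers(0, 2, size=600)        # 1 = cancerous (toy labels)

Xtr, Xte, ytr, yte = train_test_split(cnn_features, labels, random_state=0)
svm_head = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xtr, ytr)
print("accuracy of RBF-SVM head:", svm_head.score(Xte, yte))
```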
|
391
|
Porcu M, De Silva P, Solinas C, Battaglia A, Schena M, Scartozzi M, Bron D, Suri JS, Willard-Gallo K, Sangiolo D, Saba L. Immunotherapy Associated Pulmonary Toxicity: Biology Behind Clinical and Radiological Features. Cancers (Basel) 2019; 11:cancers11030305. [PMID: 30841554 PMCID: PMC6468855 DOI: 10.3390/cancers11030305] [Citation(s) in RCA: 51] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Revised: 02/17/2019] [Accepted: 02/26/2019] [Indexed: 12/22/2022] Open
Abstract
The broader use of immune checkpoint blockade in clinical routine challenges clinicians in the diagnosis and management of side effects caused by the inflammation that follows activation of the immune response. Nearly all organs can be affected by immune-related toxicities. However, the most frequently reported are: fatigue, rash, pruritus, diarrhea, nausea/vomiting, arthralgia, decreased appetite and abdominal pain. Although these adverse events are usually mild, reversible and infrequent, an early diagnosis is crucial. Immune-related pulmonary toxicity was most frequently observed in trials of lung cancer and melanoma patients treated with the combination of anti-cytotoxic T lymphocyte antigen (CTLA)-4 and anti-programmed cell death-1 (PD-1) antibodies. The most frequent immune-related adverse event in the lung is pneumonitis, caused by the development of infiltrates in the interstitium and in the alveoli. Clinical symptoms and radiological patterns are the key elements to consider for an early diagnosis, rendering the differential diagnosis crucial. Diagnosis of immune-related pneumonitis may imply the temporary or definitive suspension of immunotherapy, along with the start of immunosuppressive treatments. This work summarizes the biological bases and the clinical and radiological findings of lung toxicity under immune checkpoint blockade, underlining the importance of multidisciplinary teams for an optimal early diagnosis of this side effect and improved patient care.
Affiliation(s)
- Michele Porcu: Department of Radiology, University Hospital of Cagliari, 09042 Monserrato (Cagliari), Italy
- Pushpamali De Silva: Molecular Immunology Unit, Institut Jules Bordet, Université Libre de Bruxelles (ULB), 1000 Brussels, Belgium; Clinical and Experimental Hematology, Institut Jules Bordet, Université Libre de Bruxelles (ULB), 1000 Brussels, Belgium
- Cinzia Solinas: Molecular Immunology Unit, Institut Jules Bordet, Université Libre de Bruxelles (ULB), 1000 Brussels, Belgium; Department of Medical Oncology and Hematology, Regional Hospital of Aosta, 11100 Aosta, Italy
- Angelo Battaglia: Department of Medical Oncology and Hematology, Regional Hospital of Aosta, 11100 Aosta, Italy
- Marina Schena: Department of Medical Oncology and Hematology, Regional Hospital of Aosta, 11100 Aosta, Italy
- Mario Scartozzi: Department of Medical Oncology, University Hospital of Cagliari, 09042 Monserrato (Cagliari), Italy
- Dominique Bron: Clinical and Experimental Hematology, Institut Jules Bordet, Université Libre de Bruxelles (ULB), 1000 Brussels, Belgium
- Jasjit S Suri: Lung Diagnostic Division, Global Biomedical Technologies, Inc., Roseville, CA 95661, USA; AtheroPoint™ LLC, Roseville, CA 95661, USA
- Karen Willard-Gallo: Molecular Immunology Unit, Institut Jules Bordet, Université Libre de Bruxelles (ULB), 1000 Brussels, Belgium
- Dario Sangiolo: Department of Oncology, University of Torino, 10043 Orbassano (Torino), Italy; Division of Medical Oncology, Experimental Cell Therapy, Candiolo Cancer Institute FPO-IRCCS, 10060 Candiolo (Torino), Italy
- Luca Saba: Department of Radiology, University Hospital of Cagliari, 09042 Monserrato (Cagliari), Italy
|
392
|
The present and future of deep learning in radiology. Eur J Radiol 2019; 114:14-24. [PMID: 31005165 DOI: 10.1016/j.ejrad.2019.02.038] [Citation(s) in RCA: 182] [Impact Index Per Article: 30.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2018] [Revised: 02/17/2019] [Accepted: 02/26/2019] [Indexed: 12/18/2022]
Abstract
The advent of Deep Learning (DL) is poised to dramatically change the delivery of healthcare in the near future. Not only has DL profoundly affected the healthcare industry, it has also influenced global businesses. Within a span of very few years, advances such as self-driving cars, robots performing jobs that are hazardous to humans, and chatbots talking with human operators have proved that DL has already made a large impact on our lives. The open-source nature of DL and decreasing prices of computer hardware will further propel such changes. In healthcare, the potential is immense due to the need to automate processes and evolve error-free paradigms. The sheer volume of DL publications in healthcare has surpassed that of other domains and is growing at a very fast pace, particularly in radiology. It is therefore imperative for radiologists to learn about DL and how it differs from other approaches to Artificial Intelligence (AI). The next generation of radiology will see a significant role for DL, which will likely serve as the base for augmented radiology (AR). Better clinical judgement by AR will help in improving quality of life and in making life-saving decisions, while lowering healthcare costs. A comprehensive review of DL as well as its implications for healthcare is presented here. We analysed 150 DL articles in the healthcare domain from PubMed, Google Scholar, and IEEE Xplore, focused on medical imaging only. We further examined the ethical, moral and legal issues surrounding the use of DL in medical imaging.
|
393
|
Kim MC, Okada K, Ryner AM, Amza A, Tadesse Z, Cotter SY, Gaynor BD, Keenan JD, Lietman TM, Porco TC. Sensitivity and specificity of computer vision classification of eyelid photographs for programmatic trachoma assessment. PLoS One 2019; 14:e0210463. [PMID: 30742639 PMCID: PMC6370195 DOI: 10.1371/journal.pone.0210463] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2018] [Accepted: 12/24/2018] [Indexed: 11/27/2022] Open
Abstract
BACKGROUND/AIMS Trachoma programs base treatment decisions on the community prevalence of the clinical signs of trachoma, assessed by direct examination of the conjunctiva. Automated assessment could be more standardized and more cost-effective. We tested the hypothesis that an automated algorithm could classify eyelid photographs better than chance. METHODS A total of 1,656 field-collected conjunctival images were obtained from clinical trial participants in Niger and Ethiopia. Images were scored for trachomatous inflammation—follicular (TF) and trachomatous inflammation—intense (TI) according to the simplified World Health Organization grading system by expert raters. We developed an automated procedure for image enhancement followed by application of a convolutional neural net classifier for TF and separately for TI. One hundred images were selected for testing TF and TI, and these images were not used for training. RESULTS The agreement score for the TF and TI tasks for the automated algorithm relative to expert graders was κ = 0.44 (95% CI: 0.26 to 0.62, P < 0.001) and κ = 0.69 (95% CI: 0.55 to 0.84, P < 0.001), respectively. DISCUSSION For assessing the clinical signs of trachoma, a convolutional neural net performed well above chance when tested against expert consensus. Further improvements in specificity may render this method suitable for field use.
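The agreement statistic reported above is Cohen's κ; computing it from algorithm grades and an expert consensus takes one call. The grades below are fabricated purely to show the computation.

```python
# Cohen's kappa between expert consensus and automated grades.
from sklearn.metrics import cohen_kappa_score

expert = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]      # TF present / absent (toy grades)
algorithm = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]
print("kappa:", cohen_kappa_score(expert, algorithm))
```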
Affiliation(s)
- Matthew C. Kim: Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America; Department of Mathematics, San Francisco State University, San Francisco, CA, United States of America
- Kazunori Okada: Department of Computer Science, San Francisco State University, San Francisco, CA, United States of America
- Alexander M. Ryner: Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Abdou Amza: Programme FSS/Université Abdou Moumouni de Niamey, Programme National de Santé Oculaire, Niamey, Niger
- Sun Y. Cotter: Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Bruce D. Gaynor: Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Jeremy D. Keenan: Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America; Department of Ophthalmology, University of California San Francisco, San Francisco, CA, United States of America
- Thomas M. Lietman: Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America; Department of Ophthalmology, University of California San Francisco, San Francisco, CA, United States of America; Department of Epidemiology and Biostatistics, University of California San Francisco, San Francisco, CA, United States of America
- Travis C. Porco: Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America; Department of Ophthalmology, University of California San Francisco, San Francisco, CA, United States of America; Department of Epidemiology and Biostatistics, University of California San Francisco, San Francisco, CA, United States of America
|
394
|
Wang H, Li S, Song L, Cui L. A novel convolutional neural network based fault recognition method via image fusion of multi-vibration-signals. COMPUT IND 2019. [DOI: 10.1016/j.compind.2018.12.013] [Citation(s) in RCA: 184] [Impact Index Per Article: 30.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
395
|
Cai J, Xing F, Batra A, Liu F, Walter GA, Vandenborne K, Yang L. Texture Analysis for Muscular Dystrophy Classification in MRI with Improved Class Activation Mapping. PATTERN RECOGNITION 2019; 86:368-375. [PMID: 31105339 PMCID: PMC6521874 DOI: 10.1016/j.patcog.2018.08.012] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The muscular dystrophies (MDs) are a diverse group of rare genetic diseases characterized by progressive loss of muscle strength and muscle damage. Since there is no cure for muscular dystrophy and clinical outcome measures are limited, it is critical to assess disease progression objectively. Imaging muscle replacement by fibrofatty tissue has been shown to be a robust biomarker for monitoring disease progression in Duchenne muscular dystrophy (DMD). In magnetic resonance imaging (MRI) data, specific texture patterns have been found to correlate with certain MD subtypes and thus present a potential route to automatic assessment. In this paper, we first apply state-of-the-art convolutional neural networks (CNNs) to perform accurate MD image classification and then propose an effective visualization method to highlight the important image textures. On a dystrophic MRI dataset, the best CNN model delivers a 91.7% classification accuracy, which significantly outperforms non-deep-learning methods; for example, a >40% improvement was found over the traditional mean fat fraction (MFF) criterion for DMD and congenital muscular dystrophy (CMD) classification. After investigating every neuron in the top layer of the CNN model, we found that the CNN's superior classification ability can be explained by the fact that 91 and 118 of its neurons performed better than the MFF criterion under Euclidean and Chi-square distance measurements, respectively. To further interpret the CNN's predictions, we tested an improved class activation mapping (ICAM) method to visualize the important regions in the MRI images. With this ICAM, the CNN is able to locate the most discriminative texture patterns of DMD in the soleus, lateral gastrocnemius, and medial gastrocnemius; for CMD, the critical texture patterns are highlighted in the soleus, tibialis posterior, and peroneus.
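For reference, the original class activation mapping construction, which ICAM refines, weights the last convolutional feature maps by the classifier weights of the target class. A minimal sketch with random tensors standing in for a trained network:

```python
# Class activation map (CAM): project the target class's linear weights back
# onto the last convolutional feature maps to get a spatial heat map.
import torch

n_ch, n_classes = 32, 2
feature_maps = torch.randn(n_ch, 14, 14)   # last conv layer, one image
fc_weight = torch.randn(n_classes, n_ch)   # GAP -> linear classifier weights

target_class = 1
cam = torch.einsum("c,chw->hw", fc_weight[target_class], feature_maps)
cam = torch.relu(cam)
cam = cam / (cam.max() + 1e-8)             # normalize to a [0, 1] heat map
print(cam.shape)                           # -> torch.Size([14, 14])
```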
Affiliation(s)
- Jinzheng Cai: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida
- Fuyong Xing: Department of Biostatistics and Informatics, University of Colorado Denver
- Abhinandan Batra: Department of Physiology and Functional Genomics, University of Florida
- Fujun Liu: Department of Electrical and Computer Engineering, University of Florida
- Glenn A. Walter: Department of Physiology and Functional Genomics, University of Florida
- Lin Yang: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida; Department of Electrical and Computer Engineering, University of Florida
|
396
|
Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 2019; 290:590-606. [PMID: 30694159 DOI: 10.1148/radiol.2018180547] [Citation(s) in RCA: 308] [Impact Index Per Article: 51.3] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Deep learning has rapidly advanced in various fields within the past few years and has recently gained particular attention in the radiology community. This article provides an introduction to deep learning technology and presents the stages that are entailed in the design process of deep learning radiology research. In addition, the article details the results of a survey of the application of deep learning-specifically, the application of convolutional neural networks-to radiologic imaging that was focused on the following five major system organs: chest, breast, brain, musculoskeletal system, and abdomen and pelvis. The survey of the studies is followed by a discussion about current challenges and future trends and their potential implications for radiology. This article may be used as a guide for radiologists planning research in the field of radiologic image analysis using convolutional neural networks.
Affiliation(s)
- Shelly Soffer, Avi Ben-Cohen, Orit Shimon, Michal Marianne Amitai, Hayit Greenspan, Eyal Klang
- From the Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
|
397
|
van Royen FS, Moll SA, van Laar JM, van Montfrans JM, de Jong PA, Mohamed Hoesein FAA. Automated CT quantification methods for the assessment of interstitial lung disease in collagen vascular diseases: A systematic review. Eur J Radiol 2019; 112:200-206. [PMID: 30777211 DOI: 10.1016/j.ejrad.2019.01.024] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2018] [Revised: 12/17/2018] [Accepted: 01/21/2019] [Indexed: 02/01/2023]
Abstract
Interstitial lung disease (ILD) is highly prevalent in collagen vascular diseases, and reduction of ILD is an important therapeutic target. To that end, reliable quantification of pulmonary disease severity is of great significance. This study systematically reviewed the literature on automated computed tomography (CT) quantification methods for assessing ILD in collagen vascular diseases. PRISMA-DTA guidelines for systematic reviews were followed, and 19 original research articles up to January 2018 were included based on a MEDLINE/PubMed and Embase search. Quantitative CT methods were categorized as histogram assessment (12 studies) or pattern/texture recognition (7 studies). R2 for correlation with visual ILD scoring ranged from 0.143 (p < 0.01) to 0.687 (p < 0.0001); for forced vital capacity (FVC), from 0.048 (p < 0.0001) to 0.504 (p < 0.0001); and for diffusing capacity of the lung for carbon monoxide (DLCO), from 0.015 (p = 0.61) to 0.449 (p < 0.0001). Automated CT methods are independent of the reader's expertise and are a promising tool for quantifying ILD in patients with collagen vascular disease.
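To make the histogram-assessment category concrete, here is a minimal sketch of the kind of features such methods compute from the attenuation histogram of segmented lung tissue. The specific feature set and the -250 HU high-attenuation threshold are assumptions drawn from the broader ILD literature, not from any single reviewed study.

```python
# Histogram-based ILD features from lung voxel attenuation values (Hounsfield units).
import numpy as np
from scipy import stats

def histogram_features(lung_hu: np.ndarray) -> dict:
    """Summary statistics of the lung attenuation histogram."""
    return {
        "mean_hu": float(lung_hu.mean()),
        "skewness": float(stats.skew(lung_hu)),
        "kurtosis": float(stats.kurtosis(lung_hu)),
        # fraction of voxels denser than -250 HU, a common proxy for fibrotic change
        "high_attenuation_area": float((lung_hu > -250).mean()),
    }

# Stand-in for voxels inside a lung mask; real values would come from a CT volume.
voxels = np.random.normal(loc=-800, scale=120, size=100_000)
print(histogram_features(voxels))
```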
Affiliation(s)
- Florien S van Royen: Department of Radiology, Division of Imaging, University Medical Centre Utrecht and Utrecht University, Utrecht, the Netherlands
- Sofia A Moll: Department of Paediatric Immunology and Infectious Diseases, Wilhelmina Children's Hospital Utrecht, Utrecht, the Netherlands
- Jacob M van Laar: Department of Rheumatology and Clinical Immunology, University Medical Centre Utrecht and Utrecht University, Utrecht, the Netherlands
- Joris M van Montfrans: Department of Paediatric Immunology and Infectious Diseases, Wilhelmina Children's Hospital Utrecht, Utrecht, the Netherlands
- Pim A de Jong: Department of Radiology, Division of Imaging, University Medical Centre Utrecht and Utrecht University, Utrecht, the Netherlands
- Firdaus A A Mohamed Hoesein: Department of Radiology, Division of Imaging, University Medical Centre Utrecht and Utrecht University, Utrecht, the Netherlands
|
398
|
Shafique S, Tehsin S. Acute Lymphoblastic Leukemia Detection and Classification of Its Subtypes Using Pretrained Deep Convolutional Neural Networks. Technol Cancer Res Treat 2019; 17:1533033818802789. [PMID: 30261827 PMCID: PMC6161200 DOI: 10.1177/1533033818802789] [Citation(s) in RCA: 82] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023] Open
Abstract
Leukemia is a fatal disease of the white blood cells that affects the blood and bone marrow in the human body. We deployed a deep convolutional neural network for automated detection of acute lymphoblastic leukemia and classification of its subtypes into four classes (L1, L2, L3, and normal), which were mostly neglected in the previous literature. In contrast to training from scratch, we deployed a pretrained AlexNet that was fine-tuned on our dataset. The last layers of the pretrained network were replaced with new layers that classify the input images into the four classes. To reduce overfitting, data augmentation was used. We also compared datasets in different color models to check performance across different color representations. For acute lymphoblastic leukemia detection, we achieved a sensitivity of 100%, specificity of 98.11%, and accuracy of 99.50%; for acute lymphoblastic leukemia subtype classification, the sensitivity was 96.74%, specificity was 99.03%, and accuracy was 96.06%. Unlike standard methods, the proposed method achieved high accuracy without any need for microscopic image segmentation.
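The fine-tuning recipe described above, replacing the final classification layer of a pretrained AlexNet, can be sketched as follows. The layer index classifier[6] follows torchvision's AlexNet layout; the augmentation operations and the choice to freeze the feature extractor are illustrative assumptions, not details from the paper.

```python
# Swap AlexNet's ImageNet head for a 4-class head and fine-tune on a small dataset.
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 4  # L1, L2, L3, normal

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
# classifier[6] is the final 4096 -> 1000 ImageNet layer; replace it
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

# Optionally freeze the convolutional feature extractor so only the
# classifier head is fine-tuned on the small leukemia dataset.
for param in model.features.parameters():
    param.requires_grad = False

# Simple augmentation of the kind the paper mentions (exact ops are assumed).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.Resize((224, 224)),   # AlexNet's expected input size
    transforms.ToTensor(),
])
```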
Affiliation(s)
- Sarmad Shafique: Department of Computer Science, Bahria University, Islamabad, Pakistan
- Samabia Tehsin: Department of Computer Science, Bahria University, Islamabad, Pakistan
- Corresponding author: Samabia Tehsin, PhD, Department of Computer Science, Bahria University, Islamabad 46000, Pakistan
|
399
|
Xu M, Qi S, Yue Y, Teng Y, Xu L, Yao Y, Qian W. Segmentation of lung parenchyma in CT images using CNN trained with the clustering algorithm generated dataset. Biomed Eng Online 2019; 18:2. [PMID: 30602393 PMCID: PMC6317251 DOI: 10.1186/s12938-018-0619-9] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2018] [Accepted: 12/19/2018] [Indexed: 11/24/2022] Open
Abstract
Background: Lung segmentation is a critical procedure for any clinical decision support system aimed at improving the early diagnosis and treatment of lung diseases. Abnormal lungs mainly comprise lung parenchyma, which shows commonalities on CT images across subjects, diseases, and CT scanners, and lung lesions, which present various appearances. Segmentation of lung parenchyma can help locate and analyze neighboring lesions, but it is not well studied within the machine learning framework. Methods: We proposed to segment lung parenchyma using a convolutional neural network (CNN) model. To reduce the workload of manually preparing the dataset for training the CNN, a clustering-based method is first proposed. Specifically, after splitting CT slices into image patches, k-means clustering with two categories is performed twice, using the mean and the minimum intensity of each patch, respectively. A cross-shaped verification, a volume intersection, a connected component analysis, and a patch expansion are then applied to generate the final dataset. Second, we design a CNN architecture consisting of only one convolutional layer with six kernels, followed by one maximum pooling layer and two fully connected layers. Using the generated dataset, a variety of CNN models are trained and optimized, and their performance is evaluated by eightfold cross-validation. A separate validation experiment is further conducted using a dataset of 201 subjects (4.62 billion patches) with lung cancer or chronic obstructive pulmonary disease, scanned by CT or PET/CT. The segmentation results of our method are compared with those of manual segmentation and several available methods. Results: A total of 121,728 patches are generated to train and validate the CNN models. After parameter optimization, our CNN model achieves an average F-score of 0.9917 and an area under the curve of up to 0.9991 for classifying lung parenchyma versus non-lung-parenchyma. The obtained model segments the lung parenchyma accurately for the 201 subjects with heterogeneous lung diseases and CT scanners. The overlap ratio between manual segmentation and our method reaches 0.96. Conclusions: The results demonstrate that the proposed clustering-based method can generate the training dataset for CNN models. The obtained CNN model segments lung parenchyma with very satisfactory performance and has the potential to locate and analyze lung lesions.
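The compact patch classifier described in the Methods (one convolutional layer with six kernels, one max-pooling layer, and two fully connected layers) can be sketched as below. The patch size, kernel size, and hidden width are assumptions for illustration; the abstract does not specify them.

```python
# Minimal patch classifier in the spirit of the architecture above.
import torch
import torch.nn as nn

class LungPatchCNN(nn.Module):
    def __init__(self, patch=32):                  # 32x32 patches: an assumption
        super().__init__()
        self.conv = nn.Conv2d(1, 6, kernel_size=5) # one conv layer, six kernels
        self.pool = nn.MaxPool2d(2)                # one max-pooling layer
        flat = 6 * ((patch - 4) // 2) ** 2         # spatial size after conv + pool
        self.fc1 = nn.Linear(flat, 64)             # two fully connected layers
        self.fc2 = nn.Linear(64, 2)                # parenchyma vs. non-parenchyma

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))
        x = x.flatten(1)
        return self.fc2(torch.relu(self.fc1(x)))

net = LungPatchCNN()
logits = net(torch.randn(8, 1, 32, 32))            # (8, 2) class logits
```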
Affiliation(s)
- Mingjie Xu: Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China
- Shouliang Qi: Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China; Key Laboratory of Medical Image Computing of Northeastern University (Ministry of Education), Shenyang, China
- Yong Yue: Department of Radiology, Shengjing Hospital of China Medical University, No. 36 Sanhao Street, Shenyang, 110004, China
- Yueyang Teng: Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China
- Lisheng Xu: Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China
- Yudong Yao: Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China; Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, 07030, USA
- Wei Qian: Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China; College of Engineering, University of Texas at El Paso, 500 W University, El Paso, TX, 79902, USA
|
400
|
Gu Y, Shen M, Yang J, Yang GZ. Reliable Label-Efficient Learning for Biomedical Image Recognition. IEEE Trans Biomed Eng 2019; 66:2423-2432. [PMID: 30596566 DOI: 10.1109/tbme.2018.2889915] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The use of deep neural networks for biomedical image analysis requires a sufficient number of labeled datasets. To acquire accurate labels as the gold standard, multiple observers with specific expertise are required for both annotation and proofreading. This process can be time-consuming and labor-intensive, making high-quality, large annotated biomedical datasets difficult to obtain. To address this problem, we propose a deep active learning framework that enables the active selection of both informative queries and reliable experts. To measure the uncertainty of the unlabeled data, a dropout-based strategy is integrated with a similarity criterion for both data selection and random error elimination. To select reliable labelers, we adopt an expertise estimator that learns the expertise levels of labelers via offline testing and online consistency evaluation. The proposed method is applied to classification tasks on two types of medical images: confocal endomicroscopy images and gastrointestinal endoscopic images. The annotations are acquired from multiple labelers with diverse levels of expertise. The experiments demonstrate the efficiency and promising performance of the proposed method compared with a set of baseline methods.
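The dropout-based uncertainty strategy is commonly implemented as Monte Carlo dropout: dropout stays active at test time, several stochastic forward passes are averaged, and unlabeled images are ranked by predictive entropy. The sketch below shows that mechanism under assumed names (enable_dropout, predictive_entropy, the number of passes T); the paper's full query criterion also incorporates a similarity term not reproduced here.

```python
# Monte Carlo dropout for ranking unlabeled images by uncertainty.
import torch
import torch.nn as nn

def enable_dropout(model: nn.Module) -> None:
    """Switch only dropout layers to train mode so inference stays stochastic."""
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()

def predictive_entropy(model: nn.Module, x: torch.Tensor, T: int = 20) -> torch.Tensor:
    """Mean softmax over T stochastic passes, then per-sample entropy."""
    model.eval()                                   # freeze batchnorm statistics
    enable_dropout(model)                          # but keep dropout stochastic
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=1) for _ in range(T)]
        ).mean(dim=0)                              # (B, n_classes)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)  # (B,)

# Images with the highest predictive entropy are the most informative queries.
```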
|