1. Xu Y, Liang J, Zhuo Y, Liu L, Xiao Y, Zhou L. TDASD: Generating medically significant fine-grained lung adenocarcinoma nodule CT images based on stable diffusion models with limited sample size. Comput Methods Programs Biomed 2024; 248:108103. [PMID: 38484410] [DOI: 10.1016/j.cmpb.2024.108103]
Abstract
BACKGROUND AND OBJECTIVES Spread through air spaces (STAS) is an emerging pattern of lung cancer invasion, and predicting it from CT scans is crucial. However, the limited amount of STAS data makes this prediction task highly challenging. Stable diffusion is capable of generating more diverse and higher-quality images than traditional GAN models, which have dominated image synthesis in recent years. To alleviate the shortage of STAS data, we propose TDASD, a method based on stable diffusion that can generate high-resolution CT images of pulmonary nodules showing specific nodule signs requested by medical professionals. METHODS First, we fine-tune a stable diffusion model on publicly available lung datasets. Next, we extract nodules from our hospital's lung adenocarcinoma data, apply slight rotations to the original nodule CT slices within a reasonable range, and perform a second round of stable diffusion fine-tuning. Finally, employing the DDIM and KSampler sampling methods, we generate lung adenocarcinoma nodule CT images with specific signs from prompts provided by doctors. RESULTS Our TDASD method can generate medically meaningful images by optimizing input prompts based on medical descriptions provided by experts, and the generated images improve the classification accuracy of a downstream model. Furthermore, when a model is trained solely on data generated by our method, its accuracy on the original real test set closely matches the accuracy achieved by training on real data. CONCLUSIONS The proposed method not only safeguards patient privacy but also enhances the diversity of medical images under limited data conditions. Moreover, our image generation incorporates medical knowledge, so the generated images exhibit pertinent medical features and hold significant value in tumor discrimination diagnostics.
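For orientation only, the following is a minimal sketch of the sampling stage described in METHODS, written against the Hugging Face diffusers API; the checkpoint directory and the prompt are hypothetical placeholders rather than the authors' released artifacts, and the two-stage TDASD fine-tuning itself is not reproduced here.

```python
# Hedged sketch: DDIM sampling from a fine-tuned Stable Diffusion checkpoint.
# The checkpoint path and prompt below are placeholders, not the authors' artifacts.
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

model_dir = "./tdasd-finetuned"  # hypothetical fine-tuned weights
pipe = StableDiffusionPipeline.from_pretrained(model_dir, torch_dtype=torch.float16)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)  # swap in DDIM sampling
pipe = pipe.to("cuda")

prompt = "lung adenocarcinoma nodule, CT slice, spiculation sign"  # example expert-style prompt
images = pipe(prompt, num_inference_steps=50, guidance_scale=7.5,
              num_images_per_prompt=4).images
for i, img in enumerate(images):
    img.save(f"synthetic_nodule_{i}.png")
```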
Affiliation(s)
- Yidan Xu: Institutes of Biomedical Sciences, Fudan University, 138 Yi xue yuan Road, Shanghai, 200032, China.
- Jiaqing Liang: School of Data Science, Fudan University, 220 Handan Road, Shanghai, 200433, China.
- Yaoyao Zhuo: Department of Radiology, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai, 200032, China; Shanghai Institute of Medical Imaging, 180 Fenglin Road, Shanghai, 200032, China.
- Lei Liu: Institutes of Biomedical Sciences, Fudan University, 138 Yi xue yuan Road, Shanghai, 200032, China; Intelligent Medicine Institute, Fudan University, 131 Dongan Road, Shanghai, 200032, China; Shanghai Institute of Stem Cell Research and Clinical Translation, Shanghai, 200120, China.
- Yanghua Xiao: School of Computer Science, Fudan University, 2005 Songhu Road, Shanghai, 200438, China; Shanghai Key Laboratory of Data Science, Fudan University, 2005 Songhu Road, Shanghai, 200438, China.
- Lingxiao Zhou: Institute of Microscale Optoelectronics, Shenzhen University, 3688 Nanhai Avenue, Shenzhen, 518000, China.
2. He M, Cao Y, Chi C, Yang X, Ramin R, Wang S, Yang G, Mukhtorov O, Zhang L, Kazantsev A, Enikeev M, Hu K. Research progress on deep learning in magnetic resonance imaging-based diagnosis and treatment of prostate cancer: a review on the current status and perspectives. Front Oncol 2023; 13:1189370. [PMID: 37546423] [PMCID: PMC10400334] [DOI: 10.3389/fonc.2023.1189370]
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has emerged as a first-line screening and diagnostic tool for prostate cancer, aiding in treatment selection and noninvasive radiotherapy guidance. However, the manual interpretation of MRI data is challenging and time-consuming, which may impact sensitivity and specificity. With recent technological advances, artificial intelligence (AI) in the form of computer-aided diagnosis (CAD) based on MRI data has been applied to prostate cancer diagnosis and treatment. Among AI techniques, deep learning involving convolutional neural networks contributes to detection, segmentation, scoring, grading, and prognostic evaluation of prostate cancer. CAD systems have automatic operation, rapid processing, and accuracy, incorporating multiple sequences of multiparametric MRI data of the prostate gland into the deep learning model. Thus, they have become a research direction of great interest, especially in smart healthcare. This review highlights the current progress of deep learning technology in MRI-based diagnosis and treatment of prostate cancer. The key elements of deep learning-based MRI image processing in CAD systems and radiotherapy of prostate cancer are briefly described, making it understandable not only for radiologists but also for general physicians without specialized imaging interpretation training. Deep learning technology enables lesion identification, detection, and segmentation, grading and scoring of prostate cancer, and prediction of postoperative recurrence and prognostic outcomes. The diagnostic accuracy of deep learning can be improved by optimizing models and algorithms, expanding medical database resources, and combining multi-omics data and comprehensive analysis of various morphological data. Deep learning has the potential to become the key diagnostic method in prostate cancer diagnosis and treatment in the future.
Affiliation(s)
- Mingze He: Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Yu Cao: I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Changliang Chi: Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
- Xinyi Yang: I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Rzayev Ramin: Department of Radiology, The Second University Clinic, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Shuowen Wang: I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Guodong Yang: I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Otabek Mukhtorov: Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Liqun Zhang: School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, Liaoning, China
- Anton Kazantsev: Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Mikhail Enikeev: Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Kebang Hu: Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
3. Shargh AK, Abdolrahim N. An interpretable deep learning approach for designing nanoporous silicon nitride membranes with tunable mechanical properties. NPJ Comput Mater 2023; 9:82. [PMID: 37273663] [PMCID: PMC10221757] [DOI: 10.1038/s41524-023-01037-0]
Abstract
The high permeability and strong selectivity of nanoporous silicon nitride (NPN) membranes make them attractive in a broad range of applications. Despite their growing use, the strength of NPN membranes needs to be improved to further extend their biomedical applications. In this work, we implement a deep learning framework to design NPN membranes with improved or prescribed strength values. We examine the predictions of our framework using physics-based simulations. Our results confirm that the proposed framework is not only able to predict the strength of NPN membranes across a wide range of microstructures but can also design NPN membranes with prescribed or improved strength. Our simulations further demonstrate that the microstructural heterogeneity our framework suggests for the optimized design lowers the stress concentration around the pores and improves the strength of NPN membranes compared with conventional membranes with homogeneous microstructures.
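As a loose illustration of the forward (structure-to-property) half of such a framework, the sketch below regresses a strength value from a binary pore map with a small CNN; the architecture, input size, and synthetic inputs are assumptions made for illustration, not the authors' network.

```python
# Hedged sketch: a small CNN regressing a strength value from a pore-map image.
# Layer sizes and the 64x64 input are assumptions, not the published architecture.
import torch
import torch.nn as nn

class StrengthRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x):
        return self.head(self.features(x))

model = StrengthRegressor()
pore_maps = torch.rand(8, 1, 64, 64)      # batch of synthetic microstructure maps
predicted_strength = model(pore_maps)     # shape: (8, 1)
print(predicted_strength.shape)
```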
Affiliation(s)
- Ali K. Shargh: Department of Mechanical Engineering, University of Rochester, Rochester, NY 14627, USA
- Niaz Abdolrahim: Department of Mechanical Engineering, University of Rochester, Rochester, NY 14627, USA; Materials Science Program, University of Rochester, Rochester, NY 14627, USA; Laboratory for Laser Energetics, University of Rochester, Rochester, NY 14627, USA
4. Kengkard P, Choovuthayakorn J, Mahakkanukrauh C, Chitapanarux N, Intasuwan P, Malatong Y, Sinthubua A, Palee P, Lampang SN, Mahakkanukrauh P. Convolutional neural network of age-related trends digital radiographs of medial clavicle in a Thai population: a preliminary study. Anat Cell Biol 2023; 56:86-93. [PMID: 36655305] [PMCID: PMC9989796] [DOI: 10.5115/acb.22.205]
Abstract
Age-at-death estimation has always been a crucial yet challenging part of the identification process in forensic work. Human skeletal remains have long been studied for this purpose, based on the principle that macro- and micro-architecture change in correlation with increasing age. The clavicle is recommended as the best candidate for accurate age estimation because of its accessibility, its late maturation, and the minimal effect of body weight on it. Our study applies a pre-trained convolutional neural network in order to build an accurate and cost-effective age estimation model from the clavicle. A total of 988 clavicles from a Thai population with known age and sex were radiographed using the Kodak 9000 Extra-oral Imaging System. The radiographs then went through a preprocessing protocol that included region-of-interest selection and quality assessment. Additional samples were generated using a generative adversarial network. The 3,999 clavicular images used in this study were separated into training and test sets, and the test set was subsequently categorized into 7 age groups. GoogLeNet was modified at two layers and its parameters were fine-tuned. The highest validation accuracy was 89.02%, but the test set achieved only 30% accuracy. Our results show that medial clavicular radiographs have potential in the field of age-at-death estimation, and further study is recommended.
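A minimal sketch of the transfer-learning setup described here is shown below, assuming torchvision's pre-trained GoogLeNet and replacing only the final classifier with a 7-class head; the paper modifies two layers, so this is a simplification rather than the authors' exact configuration.

```python
# Hedged sketch: fine-tuning a pre-trained GoogLeNet for 7 age-group classes.
import torch
import torch.nn as nn
from torchvision import models

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 7)  # 7 age groups

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.rand(4, 3, 224, 224)     # toy radiograph batch
y = torch.randint(0, 7, (4,))      # toy age-group labels
out = model(x)
logits = out.logits if hasattr(out, "logits") else out  # GoogLeNet may return a namedtuple in train mode
loss = criterion(logits, y)
loss.backward()
optimizer.step()
```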
Affiliation(s)
- Pittayarat Intasuwan: Department of Anatomy, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Yanumart Malatong: Department of Anatomy, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Apichat Sinthubua: Department of Anatomy, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand; Excellence in Osteology Research and Training Center, Chiang Mai University, Chiang Mai, Thailand
- Patison Palee: College of Arts, Media and Technology, Chiang Mai University, Chiang Mai, Thailand
- Sakarat Na Lampang: Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai, Thailand
- Pasuk Mahakkanukrauh: Department of Anatomy, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand; Excellence in Osteology Research and Training Center, Chiang Mai University, Chiang Mai, Thailand
5. Xiao Z, Cai H, Wang Y, Cui R, Huo L, Lee EYP, Liang Y, Li X, Hu Z, Chen L, Zhang N. Deep learning for predicting epidermal growth factor receptor mutations of non-small cell lung cancer on PET/CT images. Quant Imaging Med Surg 2023; 13:1286-1299. [PMID: 36915325] [PMCID: PMC10006109] [DOI: 10.21037/qims-22-760]
Abstract
Background Predicting the mutation status of the epidermal growth factor receptor (EGFR) gene from integrated positron emission tomography/computed tomography (PET/CT) images of non-small cell lung cancer (NSCLC) is a noninvasive, low-cost approach that is valuable for targeted therapy. Although deep learning has been very successful in robotic vision, predicting gene mutations from PET/CT studies remains challenging because of the small amount of medical data and the differing parameters of PET/CT devices. Methods We used the advanced EfficientNet-V2 model to predict EGFR mutation status from fused PET/CT images. First, we extracted 3-dimensional (3D) pulmonary nodules from PET and CT as regions of interest (ROIs). We then fused each pair of PET and CT images. The network model predicted the mutation status of lung nodules from the fused data, with the fusion weighted adaptively, and the EfficientNet-V2 model used multiple channels to represent each nodule comprehensively. Results We trained the EfficientNet-V2 model with our PET/CT fusion algorithm on a dataset of 150 patients. The prediction accuracy for EGFR versus non-EGFR mutations was 86.25% on the training dataset and 81.92% on the validation set. Conclusions In our experiments, the proposed PET/CT fusion algorithm outperformed radiomics methods in predicting EGFR and non-EGFR mutations in NSCLC.
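The sketch below illustrates the general idea, with a naive channel-stacking fusion standing in for the authors' adaptive fusion and torchvision's EfficientNet-V2-S fitted with a 2-class head; it is not the published pipeline.

```python
# Hedged sketch: feeding fused PET/CT nodule patches to EfficientNet-V2.
# The simple channel stack below is a placeholder for the authors' fusion algorithm.
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_v2_s(weights=models.EfficientNet_V2_S_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # EGFR-mutant vs wild type

pet = torch.rand(4, 1, 224, 224)   # toy PET nodule patches
ct = torch.rand(4, 1, 224, 224)    # toy CT nodule patches
fused = torch.cat([pet, ct, 0.5 * (pet + ct)], dim=1)  # naive 3-channel fusion
logits = model(fused)              # shape: (4, 2)
print(logits.shape)
```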
Affiliation(s)
- Zhenghui Xiao: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Southern University of Science and Technology, Shenzhen, China
- Haihua Cai: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yue Wang: Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Ruixue Cui: Nuclear Medicine Department, State Key Laboratory of Complex Severe and Rare Diseases, Center for Rare Diseases Research, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Li Huo: Nuclear Medicine Department, State Key Laboratory of Complex Severe and Rare Diseases, Center for Rare Diseases Research, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Elaine Yuen-Phin Lee: Department of Diagnostic Radiology, Clinical School of Medicine, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Ying Liang: Department of Nuclear Medicine, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Xiaomeng Li: Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Zhanli Hu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Long Chen: Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Na Zhang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
6. Thirumagal E, Saruladha K. Lung cancer diagnosis using Hessian adaptive learning optimization in generative adversarial networks. Soft Comput 2023. [DOI: 10.1007/s00500-023-07877-8]
7. Osuala R, Kushibar K, Garrucho L, Linardos A, Szafranowska Z, Klein S, Glocker B, Diaz O, Lekadir K. Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging. Med Image Anal 2023; 84:102704. [PMID: 36473414] [DOI: 10.1016/j.media.2022.102704]
Abstract
Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
Affiliation(s)
- Richard Osuala: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Kaisar Kushibar: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Lidia Garrucho: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Akis Linardos: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Zuzanna Szafranowska: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Stefan Klein: Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Ben Glocker: Biomedical Image Analysis Group, Department of Computing, Imperial College London, UK
- Oliver Diaz: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Karim Lekadir: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
8. Zhang C, Fan L, Zhang S, Zhao J, Gu Y. Deep learning based dental implant failure prediction from periapical and panoramic films. Quant Imaging Med Surg 2023; 13:935-945. [PMID: 36819274] [PMCID: PMC9929426] [DOI: 10.21037/qims-22-457]
Abstract
Background Dental implant failure is a critical condition that can seriously compromise therapeutic efficacy. Insufficient bone volume, unfavorable bone quality, periodontal bone loss, and systemic conditions, including osteopenia/osteoporosis and diabetes mellitus, have been associated with implant failure. Early indicators of potential implant failure could help mitigate the risk of severe complications. This study aimed to develop an effective implant outcome prediction model using dental periapical and panoramic films. Methods A total of 248 patients (89 with failed implants and 159 with successful implants) were examined. A total of 529 periapical images and 551 panoramic images were collected from the patients for a deep learning-based model. Based on radiographic peri-implant alveolar bone pattern, implant outcome was divided into three categories: implant failure with marginal bone loss, implant failure without marginal bone loss, and implant success. We extracted features using a deep convolutional neural network (CNN) and built a hybrid model to combine periapical and panoramic images. A comparison among three categories of receiver operating characteristic (ROC) curves was performed. The diagnostic accuracy, precision, recall and F1-score of the dataset were assessed. Results Our model achieved an AUC (area under the ROC curve) of 0.972 for failure with marginal bone loss, 0.947 for failure without marginal bone loss and 0.975 for success. In all conditions, for periapical images alone, the diagnostic accuracy was 78.6%; the precision was 0.84, recall was 0.73, and F1-score was 0.75. For panoramic images alone, the diagnostic accuracy was 78.7%; the precision was 0.87, recall was 0.63, and F1-score was 0.66. Both periapical and panoramic images were used in our novel method, and the prediction accuracy was 87%. The precision was 0.85, recall was 0.88, and F1-score was 0.85. Conclusions The deep learning model used features from periapical and panoramic images to effectively predict the occurrence of implant failure and might facilitate early clinical intervention for potential dental implant failures.
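A minimal sketch of a two-branch fusion model in the spirit of the described hybrid approach is given below; the ResNet-18 backbones and concatenation fusion are assumptions made for brevity, not the authors' exact architecture.

```python
# Hedged sketch: two CNN branches (periapical and panoramic crops) fused by
# feature concatenation for a 3-class prediction (failure with bone loss,
# failure without bone loss, success).
import torch
import torch.nn as nn
from torchvision import models

class HybridImplantModel(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.periapical_branch = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        self.panoramic_branch = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        feat_dim = self.periapical_branch.fc.in_features
        self.periapical_branch.fc = nn.Identity()   # keep 512-d features
        self.panoramic_branch.fc = nn.Identity()
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, periapical, panoramic):
        fused = torch.cat([self.periapical_branch(periapical),
                           self.panoramic_branch(panoramic)], dim=1)
        return self.classifier(fused)

model = HybridImplantModel()
logits = model(torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224))  # (2, 3)
print(logits.shape)
```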
Affiliation(s)
- Chunan Zhang: Department of Implant Dentistry, Shanghai Ninth People’s Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai Key Laboratory of Stomatology, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai, China
- Linfeng Fan: Department of Radiology, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jun Zhao: School of Biomedical Engineering, Shanghai Jiaotong University, Shanghai, China
- Yingxin Gu: Department of Implant Dentistry, Shanghai Ninth People’s Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai Key Laboratory of Stomatology, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai, China
9. Li J, Zhou L, Zhan Y, Xu H, Zhang C, Shan F, Liu L. How does the artificial intelligence-based image-assisted technique help physicians in diagnosis of pulmonary adenocarcinoma? A randomized controlled experiment of multicenter physicians in China. J Am Med Inform Assoc 2022; 29:2041-2049. [PMID: 36228127] [PMCID: PMC9667181] [DOI: 10.1093/jamia/ocac179]
Abstract
OBJECTIVE Although artificial intelligence (AI) has achieved high levels of accuracy in the diagnosis of various diseases, its impact on physicians' decision-making performance in clinical practice is uncertain. This study aims to assess the impact of AI on the diagnostic performance of physicians with differing levels of self-efficacy under working conditions involving different time pressures. MATERIALS AND METHODS A 2 (independent diagnosis vs AI-assisted diagnosis) × 2 (no time pressure vs 2-minute time limit) randomized controlled experiment of multicenter physicians was conducted. Participants diagnosed 10 pulmonary adenocarcinoma cases, and their diagnostic accuracy, sensitivity, and specificity were evaluated. Data analysis was performed using multilevel logistic regression. RESULTS One hundred and four radiologists from 102 hospitals completed the experiment. The results reveal that (1) AI greatly increases physicians' diagnostic accuracy, with or without time pressure; (2) without time pressure, AI significantly improves physicians' diagnostic sensitivity with no significant change in specificity, whereas under time pressure both sensitivity and specificity improve with the aid of AI; and (3) without time pressure, physicians with low self-efficacy benefit from AI assistance and improve their diagnostic accuracy while those with high self-efficacy do not, whereas under time pressure physicians with both low and high self-efficacy benefit from AI. DISCUSSION This study is one of the first to provide real-world evidence regarding the impact of AI on physicians' decision-making performance, taking into account 2 boundary factors: clinical time pressure and physicians' self-efficacy. CONCLUSION AI-assisted diagnosis should be prioritized for physicians working under time pressure or with low self-efficacy.
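For reference, the sketch below shows how the reported per-arm metrics (accuracy, sensitivity, specificity) can be computed from case-level reads; the data are invented placeholders and the paper's multilevel logistic regression is not reproduced.

```python
# Hedged sketch: accuracy, sensitivity and specificity for one experimental arm.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])  # 1 = adenocarcinoma case (toy labels)
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])  # physician's call per case (toy reads)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"accuracy={accuracy:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```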
Affiliation(s)
- Jiaoyang Li: School of Business Administration, Faculty of Business Administration, Southwestern University of Finance and Economics, Chengdu 611130, China
- Lingxiao Zhou: Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, China
- Yi Zhan: Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai 201508, China
- Haifeng Xu: Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai 200030, China
- Cheng Zhang: School of Management, Fudan University, Shanghai 200433, China
- Fei Shan: Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai 201508, China
- Lei Liu: Intelligent Medicine Institute, Fudan University, Shanghai 200030, China
10. Intasuwan P, Malatong Y, Palee P, Sinthubua A, Mahakkanukrauh P. Applying general adversarial networks in convolutional neural networks of the 2D whole os coxae image classification for sex estimation in a Thai population. Aust J Forensic Sci 2022. [DOI: 10.1080/00450618.2022.2131909]
Affiliation(s)
- Pittayarat Intasuwan: Department of Anatomy, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Yanumart Malatong: Department of Anatomy, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Patison Palee: College of Arts, Media and Technology, Chiang Mai University, Chiang Mai, Thailand
- Apichat Sinthubua: Department of Anatomy, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Pasuk Mahakkanukrauh: Department of Anatomy, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand; Excellence Center in Osteology Research and Training Center (ORTC), Chiang Mai University, Chiang Mai, Thailand
11. Duan YY, Qin J, Qiu WQ, Li SY, Li C, Liu AS, Chen X, Zhang CX. Performance of a generative adversarial network using ultrasound images to stage liver fibrosis and predict cirrhosis based on a deep-learning radiomics nomogram. Clin Radiol 2022; 77:e723-e731. [PMID: 35811157] [DOI: 10.1016/j.crad.2022.06.003]
Abstract
AIM To investigate the performance of a generative adversarial network (GAN) model for staging liver fibrosis and of its radiomics-based nomogram for predicting cirrhosis. MATERIALS AND METHODS This two-centre retrospective study included 434 patients whose ultrasound images and histopathological data (obtained within 1 month of the ultrasound examinations) were assigned to the training cohort (249 patients), the internal cohort (92 patients), and the external cohort (93 patients). A data augmentation method based on a GAN model was used. Discriminative performance was evaluated for classifying fibrosis of S4 and ≥S3. Deep-learning radiomics features were extracted for the prediction of cirrhosis (S4). To perform feature reduction and selection, the least absolute shrinkage and selection operator (LASSO) algorithm was applied. Radiomics scores, along with clinical factors, were incorporated into a nomogram using multivariable logistic regression analysis. The performance of the models was estimated with respect to discrimination, calibration, and clinical benefit. RESULTS The areas under the receiver operating characteristic curve (AUCs) of the GAN were 0.832/0.762 (≥S3) and 0.867/0.835 (S4) for the internal/external test sets, respectively. The radiomics nomogram that integrated radiomics scores and clinical factors showed good calibration and discrimination, with an AUC of 0.922 in the training dataset, 0.896 in the internal dataset, and 0.861 in the external dataset. Decision curve analysis (DCA) demonstrated that the nomogram provided more clinical benefit than radiologist assessment and haematological indices. CONCLUSIONS The GAN model could be applied to discriminate fibrosis stages, and favourable predictive accuracy for diagnosing cirrhosis was achieved using a deep-learning radiomics nomogram.
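A schematic sketch of the LASSO selection and nomogram-style logistic model described in the methods is shown below, using random placeholder features rather than the study's deep-learning radiomics features or clinical variables.

```python
# Hedged sketch: LASSO feature selection followed by a logistic model that
# combines the resulting radiomics score with clinical factors (schematic only).
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression

rng = np.random.default_rng(0)
X_radiomics = rng.normal(size=(249, 200))    # placeholder deep-learning radiomics features
y = rng.integers(0, 2, size=249)             # 1 = cirrhosis (S4), toy labels

lasso = LassoCV(cv=5).fit(X_radiomics, y)    # shrinks most coefficients to zero
selected = np.flatnonzero(lasso.coef_)
rad_score = X_radiomics[:, selected] @ lasso.coef_[selected]  # per-patient radiomics score

clinical = rng.normal(size=(249, 3))         # placeholder clinical factors
X_nomogram = np.column_stack([rad_score, clinical])
nomogram = LogisticRegression().fit(X_nomogram, y)
print("selected features:", selected.size)
print("predicted cirrhosis probabilities:", nomogram.predict_proba(X_nomogram)[:3, 1])
```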
Affiliation(s)
- Y-Y Duan: Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, No. 218 Jixi Road, Shushan District, Hefei 230022, Anhui Province, China
- J Qin: Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, No. 218 Jixi Road, Shushan District, Hefei 230022, Anhui Province, China
- W-Q Qiu: Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, No. 218 Jixi Road, Shushan District, Hefei 230022, Anhui Province, China
- S-Y Li: Department of Ultrasound, The Affiliated Yantai Yuhuangding Hospital of Qingdao University, No. 20 Yuhuangdingdong Road, Zhifu District, Yantai 264099, Shandong Province, China
- C Li: Department of Biomedical Engineering, Hefei University of Technology, No. 193 Tunxi Road, Baohe District, Hefei 230009, Anhui Province, China
- A-S Liu: Department of Ultrasound, The First Affiliated Hospital of Anhui University of Chinese Medicine, No. 117 Meishan Road, Shushan District, Hefei 230022, Anhui Province, China
- X Chen: Department of Electronic Engineering and Information Science, University of Science and Technology of China, No. 93 Jinzhai Road, Baohe District, Hefei 230026, Anhui Province, China
- C-X Zhang: Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, No. 218 Jixi Road, Shushan District, Hefei 230022, Anhui Province, China
12. Sun J, Du Y, Li C, Wu TH, Yang B, Mok GSP. Pix2Pix generative adversarial network for low dose myocardial perfusion SPECT denoising. Quant Imaging Med Surg 2022; 12:3539-3555. [PMID: 35782241] [PMCID: PMC9246746] [DOI: 10.21037/qims-21-1042]
Abstract
BACKGROUND Myocardial perfusion (MP) SPECT is a well-established method for diagnosing cardiac disease, yet its radiation risk poses a safety concern. This study aims to apply and evaluate the Pix2Pix generative adversarial network (Pix2Pix GAN) for denoising low dose MP SPECT images. METHODS One hundred male and female patients with different 99mTc-sestamibi activity distributions, organ sizes and body sizes were simulated with a population of digital 4D Extended Cardiac Torso (XCAT) phantoms. Realistic noisy SPECT projections at a full dose of 987 MBq injection and 16 min acquisition, and at low doses ranging from 1/20 to 1/2 of the full dose, were generated by an analytical projector from the right anterior oblique (RAO) to the left posterior oblique (LPO) positions. Additionally, twenty patients who underwent a ~1,184 MBq 99mTc-sestamibi stress SPECT/CT scan were retrospectively recruited for the study. For each patient, low dose SPECT images (7/10 to 1/10 of the full dose) were generated from the full dose list-mode data. Our Pix2Pix GAN model was trained with pairs of full dose and low dose reconstructed SPECT images. Normalized mean square error (NMSE), structural similarity index (SSIM), coefficient of variation (CV), full-width-at-half-maximum (FWHM) and relative defect size difference (RSD) of the Pix2Pix GAN-processed images were evaluated along with a reference convolutional autoencoder (CAE) network and post-reconstruction filters. RESULTS NMSE values of 0.0233±0.004 vs. 0.0249±0.004 and 0.0313±0.007 vs. 0.0579±0.016 were obtained at the 1/2 and 1/20 dose levels for Pix2Pix GAN and CAE in the simulation study, while they were 0.0376±0.010 vs. 0.0433±0.010 and 0.0907±0.020 vs. 0.1186±0.025 at the 7/10 and 1/10 dose levels in the clinical study. Similar results were obtained for the SSIM, CV, FWHM and RSD values. Overall, Pix2Pix GAN was superior to the other denoising methods on all physical indices, particularly at the lower dose levels, in both the simulation and clinical studies. CONCLUSIONS The Pix2Pix GAN method is effective in reducing the noise level of low dose MP SPECT. Further studies on clinical performance are warranted to demonstrate its full clinical effectiveness.
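The two headline image-quality metrics can be computed as sketched below, using one common NMSE definition and scikit-image's SSIM; the arrays are synthetic stand-ins for full dose and denoised reconstructions, not study data.

```python
# Hedged sketch: NMSE and SSIM between a reference (full dose) slice and a denoised slice.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(1)
full_dose = rng.random((64, 64)).astype(np.float32)                      # reference slice
denoised = full_dose + 0.05 * rng.standard_normal((64, 64)).astype(np.float32)

nmse = np.sum((denoised - full_dose) ** 2) / np.sum(full_dose ** 2)       # one common NMSE definition
ssim = structural_similarity(full_dose, denoised,
                             data_range=float(denoised.max() - denoised.min()))
print(f"NMSE={nmse:.4f}, SSIM={ssim:.4f}")
```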
Affiliation(s)
- Jingzhang Sun: Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, China
- Yu Du: Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, China; Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau, China
- ChienYing Li: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei; Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei
- Tung-Hsin Wu: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei
- BangHung Yang: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei; Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei
- Greta S. P. Mok: Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, China; Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau, China
13. Design and Implementation of Obstetric Central Monitoring System Based on Medical Image Segmentation Algorithm. J Healthc Eng 2022; 2022:3545831. [PMID: 35529540] [PMCID: PMC9072048] [DOI: 10.1155/2022/3545831]
Abstract
At present, the incidence of emergencies in the obstetric care environment is gradually increasing, and obstetric wards frequently face a variety of situations, so providing early warning and planning responses for different scenarios can be of great help in clinical medicine. This paper presents an obstetric central monitoring system based on a medical image segmentation algorithm. Images obtained by central obstetric monitoring are segmented and magnified in detail, and image features are extracted, collated, and used for training. The normal distribution rule is used to classify the features, which are included in the feature library of the obstetric central monitoring system. In the grey-level space of the medical image, the statistical distribution of grey-level features is described by a mixture of Rayleigh and Gaussian distributions, and a Taylor series expansion is used to describe the local linear geometric structure. The eigenvalues of the Hessian matrix are introduced to obtain high-order multiscale features, and a multiscale feature energy term is added to the Markov random field energy objective function to realize medical image segmentation. Compared with other segmentation algorithms, the accuracy and sensitivity of the proposed algorithm are 87.98% and 86.58%, respectively, and it can clearly segment small medical features.
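A small sketch of the multiscale Hessian-eigenvalue features the abstract refers to, computed with scikit-image, is shown below; the mixture-distribution modelling and Markov random field optimization are not reproduced, and the input image is a random placeholder.

```python
# Hedged sketch: multiscale Hessian-eigenvalue features of a grey-scale image.
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

rng = np.random.default_rng(2)
image = rng.random((128, 128))                 # placeholder grey-scale medical image

multiscale_features = []
for sigma in (1.0, 2.0, 4.0):                  # three analysis scales
    H = hessian_matrix(image, sigma=sigma, order="rc")
    eigvals = hessian_matrix_eigvals(H)        # shape: (2, 128, 128)
    multiscale_features.append(eigvals)

features = np.stack(multiscale_features)       # (scales, 2, H, W) feature volume
print(features.shape)
```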
14. [Chinese Experts Consensus on Artificial Intelligence Assisted Management for Pulmonary Nodule (2022 Version)]. Zhongguo Fei Ai Za Zhi (Chinese Journal of Lung Cancer) 2022; 25:219-225. [PMID: 35340198] [PMCID: PMC9051301] [DOI: 10.3779/j.issn.1009-3419.2022.102.08]
Abstract
Low-dose computed tomography (CT) screening for lung cancer has been proven to reduce lung cancer deaths in the screening group compared with the control group. The increasing number of pulmonary nodules detected by CT scans significantly increases the workload of radiologists for scan interpretation. Artificial intelligence (AI) has the potential to increase the efficiency of pulmonary nodule discrimination and has been tested in preliminary studies of nodule management. As more and more AI products are commercialized, this consensus statement has been organized, in a collaborative effort by the Thoracic Surgery Committee, Department of Simulated Medicine, Wu Jieping Medical Foundation, to aid clinicians in the application of AI-assisted management for pulmonary nodules.
15. Chen S. Models of Artificial Intelligence-Assisted Diagnosis of Lung Cancer Pathology Based on Deep Learning Algorithms. J Healthc Eng 2022; 2022:3972298. [PMID: 35378943] [PMCID: PMC8976635] [DOI: 10.1155/2022/3972298]
Abstract
In this article, to explore the application of a diagnostic system for lung cancer, we use an auxiliary diagnostic system to predict the benign or malignant nature of pulmonary nodules on chest CT. This research improves on existing diagnosis methods by combining a convolutional neural network (CNN) and a recurrent neural network (RNN), using the joint effect of the two algorithms to classify benign and malignant nodules. H&E-stained pathological slices of lung lesions from 652 patients were collected from two hospitals between January 2018 and January 2019, and the outputs of the improved 3D U-Net system were compared with the consensus results of double reading. This article analyzes the sensitivity, specificity, positive flammability rate, and negative flammability rate of different lung nodule detection methods. In addition, the judgments of benign and malignant pulmonary nodules made by the artificial intelligence system and by radiologists are used to draw ROC curves for further analysis. The improved model has an accuracy rate of 92.3% for predicting malignant lung nodules and an accuracy rate of 82.8% for benign lung nodules. The new diagnostic method combining a convolutional neural network and a recurrent neural network can be very effective in improving the accuracy of lung cancer diagnosis. It can play a very effective role in disease prediction for lung cancer patients, thereby improving treatment outcomes.
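The ROC analysis mentioned above can be reproduced schematically as below with scikit-learn and matplotlib; the labels and scores are illustrative placeholders, not study data.

```python
# Hedged sketch: ROC curve and AUC for a benign/malignant nodule classifier.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, size=200)                           # 1 = malignant nodule (toy)
y_score = np.clip(y_true * 0.6 + rng.random(200) * 0.5, 0, 1)   # toy model probabilities

fpr, tpr, _ = roc_curve(y_true, y_score)
plt.plot(fpr, tpr, label=f"model (AUC = {auc(fpr, tpr):.2f})")
plt.plot([0, 1], [0, 1], linestyle="--", label="chance")
plt.xlabel("1 - specificity")
plt.ylabel("sensitivity")
plt.legend()
plt.savefig("roc_curve.png")
```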
Affiliation(s)
- Su Chen: The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510030, Guangdong, China
16. Kumar A, Dhara AK, Thakur SB, Sadhu A, Nandi D. Special Convolutional Neural Network for Identification and Positioning of Interstitial Lung Disease Patterns in Computed Tomography Images. Pattern Recognit Image Anal 2021. [PMCID: PMC8711684] [DOI: 10.1134/s1054661821040027]
Abstract
In this paper, automated detection of interstitial lung disease patterns in high-resolution computed tomography images is achieved by developing a faster region-based convolutional network detector with GoogLeNet as the backbone. GoogLeNet is simplified by removing a few inception modules and used as the backbone of the detector network. The proposed framework is developed to detect several interstitial lung disease patterns without performing lung field segmentation. It is able to detect the five most prevalent interstitial lung disease patterns: fibrosis, emphysema, consolidation, micronodules and ground-glass opacity, as well as normal tissue. Five-fold cross-validation has been used to avoid bias and reduce over-fitting. The framework's performance is measured in terms of F-score on the publicly available MedGIFT database, where it outperforms state-of-the-art techniques. Detection is performed at the slice level and could be used for screening and differential diagnosis of interstitial lung disease patterns on high-resolution computed tomography images.
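A minimal detector sketch in the same family is given below using torchvision's Faster R-CNN; note that the stock ResNet-50 FPN backbone is substituted for the paper's simplified GoogLeNet backbone purely for brevity, so this is not the authors' network.

```python
# Hedged sketch: a Faster R-CNN detector with a 7-class head
# (six pattern classes plus background); backbone differs from the paper.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 7  # background + fibrosis, emphysema, consolidation, micronodules, ground-glass, normal
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])   # one HRCT slice as a 3-channel tensor
print(detections[0].keys())                          # boxes, labels, scores
```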
Affiliation(s)
- Abhishek Kumar: School of Computer and Information Sciences, University of Hyderabad, 500046 Hyderabad, India
- Ashis Kumar Dhara: Electrical Engineering, National Institute of Technology, 713209 Durgapur, India
- Sumitra Basu Thakur: Department of Chest and Respiratory Care Medicine, Medical College, 700073 Kolkata, India
- Anup Sadhu: EKO Diagnostic, Medical College, 700073 Kolkata, India
- Debashis Nandi: Computer Science and Engineering, National Institute of Technology, 713209 Durgapur, India
17. Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A. A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol 2021; 65:545-563. [PMID: 34145766] [DOI: 10.1111/1754-9485.13261]
Abstract
Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on deep learning-based algorithms. While the models these algorithms produce can significantly outperform more traditional machine learning methods, they rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets are not typically available, which is often the case when working with medical images. Data augmentation aims to generate additional training data and has been shown to improve performance when models are validated on a separate unseen dataset. Because this approach has become commonplace, we conducted a systematic review of the literature in which data augmentation was utilised on medical images (limited to CT and MRI) to train a deep learning model, in order to characterise the types of data augmentation techniques used in state-of-the-art deep learning models. Articles were categorised into basic, deformable, deep learning, or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give insight into these techniques and confidence in the validity of the models produced.
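An example of the "basic" augmentation category surveyed in the review, applied to a CT slice tensor with torchvision transforms, is sketched below; deformable and deep learning-based augmentation are not shown, and the input is a random placeholder.

```python
# Hedged sketch: basic geometric augmentation of a single-channel CT slice tensor.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05), scale=(0.95, 1.05)),
])

ct_slice = torch.rand(1, 256, 256)   # placeholder single-channel CT slice
augmented = augment(ct_slice)        # a new, randomly transformed training sample
print(augmented.shape)
```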
Affiliation(s)
- Phillip Chlap: South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia
- Hang Min: South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Nym Vandenberg: Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
- Jason Dowling: South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Lois Holloway: South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia; Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- Annette Haworth: Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
18. Dong Y, Hou L, Yang W, Han J, Wang J, Qiang Y, Zhao J, Hou J, Song K, Ma Y, Kazihise NGF, Cui Y, Yang X. Multi-channel multi-task deep learning for predicting EGFR and KRAS mutations of non-small cell lung cancer on CT images. Quant Imaging Med Surg 2021; 11:2354-2375. [PMID: 34079707] [PMCID: PMC8107307] [DOI: 10.21037/qims-20-600]
Abstract
BACKGROUND Predicting the mutation statuses of two essential pathogenic genes [epidermal growth factor receptor (EGFR) and Kirsten rat sarcoma (KRAS)] in non-small cell lung cancer (NSCLC) from CT is valuable for targeted therapy because it is a noninvasive and less costly method. Although deep learning technology has achieved substantial success in computer vision, using CT imaging to predict gene mutations remains challenging because of limited dataset size. METHODS We propose a multi-channel, multi-task deep learning (MMDL) model for the simultaneous prediction of EGFR and KRAS mutation statuses from CT images. First, we decomposed each 3D lung nodule into 9 views. Then, we used a pre-trained inception-attention-resnet model for each view to learn the features of the nodule. By combining 9 inception-attention-resnet models to predict the types of gene mutations in lung nodules, the models were adaptively weighted, and the proposed MMDL model could be trained end-to-end. The MMDL model utilized multiple channels to characterize each nodule more comprehensively and integrated patient personal information into the learning process. RESULTS We trained the proposed MMDL model on a dataset of 363 patients collected by our partner hospital and conducted a multi-center validation on 162 patients from The Cancer Imaging Archive (TCIA) public dataset. The accuracies for the prediction of EGFR and KRAS mutations were, respectively, 79.43% and 72.25% on the training dataset and 75.06% and 69.64% on the validation dataset. CONCLUSIONS The experimental results demonstrated that the proposed MMDL model outperformed the latest methods in predicting EGFR and KRAS mutations in NSCLC.
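A stripped-down sketch of the multi-task idea (one shared encoder, one head per gene, summed cross-entropy losses) is given below; the ResNet-18 backbone and single-view input are simplifications relative to the 9-view inception-attention-resnet ensemble described above.

```python
# Hedged sketch: a shared encoder with two classification heads (EGFR, KRAS).
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskMutationNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.egfr_head = nn.Linear(feat_dim, 2)   # EGFR mutant vs wild type
        self.kras_head = nn.Linear(feat_dim, 2)   # KRAS mutant vs wild type

    def forward(self, x):
        f = self.backbone(x)
        return self.egfr_head(f), self.kras_head(f)

model = MultiTaskMutationNet()
x = torch.rand(4, 3, 224, 224)                    # toy nodule view patches
egfr_logits, kras_logits = model(x)
loss = (nn.functional.cross_entropy(egfr_logits, torch.randint(0, 2, (4,)))
        + nn.functional.cross_entropy(kras_logits, torch.randint(0, 2, (4,))))
loss.backward()
```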
Affiliation(s)
- Yunyun Dong: School of Software, Taiyuan University of Technology, Taiyuan, China; School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Lina Hou: Department of Radiology, Shanxi Province Cancer Hospital, Taiyuan, China
- Wenkai Yang: School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Jiahao Han: School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Jiawen Wang: School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Yan Qiang: School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Juanjuan Zhao: School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Jiaxin Hou: School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Kai Song: School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Yulan Ma: School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Yanfen Cui: Department of Radiology, Shanxi Province Cancer Hospital, Taiyuan, China
- Xiaotang Yang: Department of Radiology, Shanxi Province Cancer Hospital, Taiyuan, China