1
Wu Q, Huang Y, Wang S, Qi L, Zhang Z, Hou D, Li H, Zhao S. Artificial intelligence in lung cancer screening: Detection, classification, prediction, and prognosis. Cancer Med 2024; 13:e7140. [PMID: 38581113] [PMCID: PMC10997848] [DOI: 10.1002/cam4.7140]
Abstract
BACKGROUND The exceptional capabilities of artificial intelligence (AI) in extracting image information and processing complex models have led to its recognition across various medical fields. With the continuous evolution of AI technologies based on deep learning, particularly the advent of convolutional neural networks (CNNs), AI presents an expanded horizon of applications in lung cancer screening, including lung segmentation, nodule detection, false-positive reduction, nodule classification, and prognosis. METHODOLOGY This review initially analyzes the current status of AI technologies. It then explores the applications of AI in lung cancer screening, including lung segmentation, nodule detection, and classification, and assesses the potential of AI in enhancing the sensitivity of nodule detection and reducing false-positive rates. Finally, it addresses the challenges and future directions of AI in lung cancer screening. RESULTS AI holds substantial prospects in lung cancer screening. It demonstrates significant potential in improving nodule detection sensitivity, reducing false-positive rates, and classifying nodules, while also showing value in predicting nodule growth and pathological/genetic typing. CONCLUSIONS AI offers a promising supportive approach to lung cancer screening, presenting considerable potential in enhancing nodule detection sensitivity, reducing false-positive rates, and classifying nodules. However, the universality and interpretability of AI results need further enhancement. Future research should focus on the large-scale validation of new deep learning-based algorithms and multi-center studies to improve the efficacy of AI in lung cancer screening.
Affiliation(s)
- Wu Quanyang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Huang Yao
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wang Sicong
- Magnetic Resonance Imaging Research, General Electric Healthcare (China), Beijing, China
- Qi Linlin
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhang Zewei
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Hou Donghui
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Li Hongjia
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhao Shijun
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
2
Zhang H, Meng Z, Ru J, Meng Y, Wang K. Application and prospects of AI-based radiomics in ultrasound diagnosis. Vis Comput Ind Biomed Art 2023; 6:20. [PMID: 37828411] [PMCID: PMC10570254] [DOI: 10.1186/s42492-023-00147-2]
Abstract
Artificial intelligence (AI)-based radiomics has attracted considerable research attention in the field of medical imaging, including ultrasound diagnosis. Ultrasound imaging has unique advantages such as high temporal resolution, low cost, and no radiation exposure. This renders it a preferred imaging modality for several clinical scenarios. This review includes a detailed introduction to imaging modalities, including Brightness-mode ultrasound, color Doppler flow imaging, ultrasound elastography, contrast-enhanced ultrasound, and multi-modal fusion analysis. It provides an overview of the current status and prospects of AI-based radiomics in ultrasound diagnosis, highlighting the application of AI-based radiomics to static ultrasound images, dynamic ultrasound videos, and multi-modal ultrasound fusion analysis.
Affiliation(s)
- Haoyan Zhang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100190, China
- Zheling Meng
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100190, China
- Jinyu Ru
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100190, China
- Yaqing Meng
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100190, China
- Kun Wang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100190, China
3
Mulé S, Lawrance L, Belkouchi Y, Vilgrain V, Lewin M, Trillaud H, Hoeffel C, Laurent V, Ammari S, Morand E, Faucoz O, Tenenhaus A, Cotten A, Meder JF, Talbot H, Luciani A, Lassau N. Generative adversarial networks (GAN)-based data augmentation of rare liver cancers: The SFR 2021 Artificial Intelligence Data Challenge. Diagn Interv Imaging 2023; 104:43-48. [PMID: 36207277] [DOI: 10.1016/j.diii.2022.09.005]
Abstract
PURPOSE The 2021 edition of the Artificial Intelligence Data Challenge was organized by the French Society of Radiology together with the Centre National d'Études Spatiales and CentraleSupélec with the aim to implement generative adversarial networks (GANs) techniques to provide 1000 magnetic resonance imaging (MRI) cases of macrotrabecular-massive (MTM) hepatocellular carcinoma (HCC), a rare and aggressive subtype of HCC, generated from a limited number of real cases from multiple French centers. MATERIALS AND METHODS A dedicated platform was used by the seven inclusion centers to securely upload their anonymized MRI examinations including all three cross-sectional images (one late arterial and one portal-venous phase T1-weighted images and one fat-saturated T2-weighted image) in compliance with general data protection regulation. The quality of the database was checked by experts and manual delineation of the lesions was performed by the expert radiologists involved in each center. Multidisciplinary teams competed between October 11th, 2021 and February 13th, 2022. RESULTS A total of 91 MTM-HCC datasets of three images each were collected from seven French academic centers. Six teams with a total of 28 individuals participated in this challenge. Each participating team was asked to generate one thousand 3-image cases. The qualitative evaluation was performed by three radiologists using the Likert scale on ten randomly selected cases generated by each participant. A quantitative evaluation was also performed using two metrics, the Frechet inception distance and a leave-one-out accuracy of a 1-Nearest Neighbor algorithm. CONCLUSION This data challenge demonstrates the ability of GANs techniques to generate a large number of images from a small sample of imaging examinations of a rare malignant tumor.
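One of the quantitative metrics above, the leave-one-out accuracy of a 1-nearest-neighbor classifier, can be sketched compactly: real and generated feature vectors are pooled, each sample is classified by the label of its nearest neighbor among all the others, and an accuracy near 50% means the generated distribution is hard to tell apart from the real one, while an accuracy near 100% means the two are easily separated. The sketch below uses illustrative toy 2-D vectors, not the challenge's actual evaluation pipeline.

```python
# Leave-one-out 1-NN accuracy over pooled real/generated samples.
def loo_1nn_accuracy(samples, labels):
    """For each sample, predict the label of its nearest neighbor
    (squared Euclidean distance) among all other samples."""
    correct = 0
    for i, x in enumerate(samples):
        best_j, best_d = None, float("inf")
        for j, y in enumerate(samples):
            if i == j:
                continue
            d = sum((a - b) ** 2 for a, b in zip(x, y))
            if d < best_d:
                best_j, best_d = j, d
        if labels[best_j] == labels[i]:
            correct += 1
    return correct / len(samples)

# Toy data: two well-separated clusters, so the "generated" samples are
# trivially distinguishable from the "real" ones (accuracy 1.0).
real = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
fake = [(5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
acc = loo_1nn_accuracy(real + fake, [0, 0, 0, 1, 1, 1])
```

The other metric, the Fréchet inception distance, instead compares aggregate feature statistics (means and covariances) of the two sets rather than individual neighbors.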
Affiliation(s)
- Sébastien Mulé
- Medical Imaging Department, AP-HP, Henri Mondor University Hospital, Créteil 94000, France; INSERM, U955, Team 18, Créteil 94000, France.
- Littisha Lawrance
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, Inserm, CNRS, CEA, BIOMAPS, UMR 1281, Université Paris-Saclay, Villejuif 94800, France
- Younes Belkouchi
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, Inserm, CNRS, CEA, BIOMAPS, UMR 1281, Université Paris-Saclay, Villejuif 94800, France; OPIS-Optimisation Imagerie et Santé, Inria, CentraleSupélec, CVN-Centre de Vision Numérique, Université Paris-Saclay, Gif-sur-Yvette 91190, France
- Valérie Vilgrain
- Department of Radiology, APHP, University Hospitals Paris Nord Val de Seine, Hôpital Beaujon, Clichy 92110, France; CRI INSERM, Université Paris Cité, Paris 75018, France
- Maité Lewin
- Department of Radiology, AP-HP Hôpital Paul Brousse, Villejuif 94800, France; Faculté de Médecine, Université Paris-Saclay, Le Kremlin-Bicêtre 94270, France
- Hervé Trillaud
- Department of Radiology, CHU de Bordeaux, Université de Bordeaux, Bordeaux 33000, France
- Christine Hoeffel
- Department of Radiology, Reims University Hospital, Reims 51092, France; CRESTIC, University of Reims Champagne-Ardenne, Reims 51100, France
- Valérie Laurent
- Department of Radiology, Nancy University Hospital, University of Lorraine, Vandoeuvre-lès-Nancy 54500, France
- Samy Ammari
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, Inserm, CNRS, CEA, BIOMAPS, UMR 1281, Université Paris-Saclay, Villejuif 94800, France; Department of Imaging, Institut Gustave Roussy, Université Paris-Saclay, Villejuif 94800, France
- Eric Morand
- Centre National d'Études Spatiales (CNES), Centre Spatial de Toulouse, Toulouse 31401 CEDEX 9, France
- Orphée Faucoz
- Centre National d'Études Spatiales (CNES), Centre Spatial de Toulouse, Toulouse 31401 CEDEX 9, France
- Arthur Tenenhaus
- CentraleSupélec, Laboratoire des Signaux et Systèmes, Université Paris-Saclay, Gif-sur-Yvette 91190, France
- Anne Cotten
- Department of Musculoskeletal Radiology, Centre de Consultations et d'Imagerie de l'Appareil Locomoteur, Lille 59037, France; Lille University School of Medicine, Lille, France
- Jean-François Meder
- Department of Neuroimaging, Sainte-Anne Hospital, Paris 75013, France; Université Paris Cité, Paris 75006, France
- Hugues Talbot
- OPIS-Optimisation Imagerie et Santé, Inria, CentraleSupélec, CVN-Centre de Vision Numérique, Université Paris-Saclay, Gif-sur-Yvette 91190, France
- Alain Luciani
- Medical Imaging Department, AP-HP, Henri Mondor University Hospital, Créteil 94000, France; INSERM, U955, Team 18, Créteil 94000, France
- Nathalie Lassau
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, Inserm, CNRS, CEA, BIOMAPS, UMR 1281, Université Paris-Saclay, Villejuif 94800, France; Department of Imaging, Institut Gustave Roussy, Université Paris-Saclay, Villejuif 94800, France
4
Artificial intelligence in lung cancer: current applications and perspectives. Jpn J Radiol 2023; 41:235-244. [PMID: 36350524] [PMCID: PMC9643917] [DOI: 10.1007/s11604-022-01359-x]
Abstract
Artificial intelligence (AI) has been a very active research topic over the last few years, and thoracic imaging has particularly benefited from the development of AI, in particular deep learning. We have now entered a phase of adopting AI into clinical practice. The objective of this article was to review the current applications and perspectives of AI in thoracic oncology. For pulmonary nodule detection, computer-aided detection (CADe) tools have been commercially available since the early 2000s. The more recent rise of deep learning and the availability of large annotated lung nodule datasets have allowed the development of new CADe tools with fewer false-positive results per examination. Classical machine learning and deep learning methods were also used for pulmonary nodule segmentation, allowing nodule volumetry and pulmonary nodule characterization. For pulmonary nodule characterization, radiomics and deep learning approaches were used. Data from the National Lung Screening Trial (NLST) allowed the development of several computer-aided diagnosis (CADx) tools for diagnosing lung cancer on chest computed tomography. Finally, AI has been used as a means to perform virtual biopsies and to predict response to treatment or survival. Thus, many detection, characterization, and stratification tools have been proposed, some of which are commercially available.
5
de Margerie-Mellon C, Chassagnon G. Artificial intelligence: A critical review of applications for lung nodule and lung cancer. Diagn Interv Imaging 2023; 104:11-17. [PMID: 36513593] [DOI: 10.1016/j.diii.2022.11.007]
Abstract
Artificial intelligence (AI) is a broad concept that usually refers to computer programs that can learn from data and perform certain specific tasks. In recent years, the growth of deep learning, a successful technique for computer vision tasks that does not require explicit programming, coupled with the availability of large imaging databases, has fostered the development of multiple applications in the medical imaging field, especially for lung nodules and lung cancer, mostly through convolutional neural networks (CNN). Some of the first applications of AI in this field were dedicated to the automated detection of lung nodules on X-ray and computed tomography (CT) examinations, with performances now reaching or exceeding those of radiologists. For lung nodule segmentation, CNN-based algorithms applied to CT images show an excellent spatial overlap index with manual segmentation, even for irregular and ground-glass nodules. A third application of AI is the classification of lung nodules as malignant or benign, which could limit the number of follow-up CT examinations for less suspicious lesions. Several algorithms have demonstrated excellent capabilities for predicting the malignancy risk when a nodule is discovered. These different applications of AI for lung nodules are particularly appealing in the context of lung cancer screening. In the field of lung cancer, AI tools applied to lung imaging have been investigated for distinct aims. First, they could play a role in the non-invasive characterization of tumors, especially for histological subtype and somatic mutation predictions, with a potential therapeutic impact. Additionally, they could help predict patient prognosis, in combination with clinical data.
Despite these encouraging perspectives, clinical implementation of AI tools is only beginning because of the lack of generalizability of published studies, their opaque inner workings, and limited data about the impact of such tools on radiologists' decisions and on patient outcomes. Radiologists must be active participants in the process of evaluating AI tools, as such tools could support their daily work and offer them more time for high added-value tasks.
Affiliation(s)
- Constance de Margerie-Mellon
- Université Paris Cité, Laboratory of Imaging Biomarkers, Center for Research on Inflammation, UMR 1149, INSERM, 75018 Paris, France; Department of Radiology, Hôpital Saint-Louis APHP, 75010 Paris, France
- Guillaume Chassagnon
- Université Paris Cité, Faculté de Médecine, 75006 Paris, France; Department of Radiology, Hôpital Cochin APHP, 75014 Paris, France
6
Cellina M, Cè M, Khenkina N, Sinichich P, Cervelli M, Poggi V, Boemi S, Ierardi AM, Carrafiello G. Artificial Intelligence in the Era of Precision Oncological Imaging. Technol Cancer Res Treat 2022; 21:15330338221141793. [PMID: 36426565] [PMCID: PMC9703524] [DOI: 10.1177/15330338221141793]
Abstract
Rapid-paced development and adaptability of artificial intelligence algorithms have secured their almost ubiquitous presence in the field of oncological imaging. Artificial intelligence models have been created for a variety of tasks, including risk stratification, automated detection and segmentation of lesions, characterization, grading and staging, prediction of prognosis, and treatment response. Soon, artificial intelligence could become an essential part of every step of oncological workup and patient management. The integration of neural networks and deep learning into radiological artificial intelligence algorithms allows for extrapolating imaging features otherwise inaccessible to human operators and paves the way to truly personalized management of oncological patients. Although a significant proportion of currently available artificial intelligence solutions belong to basic and translational cancer imaging research, their progressive transfer to clinical routine is imminent, contributing to the development of a personalized approach in oncology. We thereby review the main applications of artificial intelligence in oncological imaging, describe examples of their successful integration into research and clinical practice, and highlight the challenges and future perspectives that will shape the field of oncological radiology.
Affiliation(s)
- Michaela Cellina
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milano, Italy
- Maurizio Cè
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Natallia Khenkina
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Polina Sinichich
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Marco Cervelli
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Vittoria Poggi
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Sara Boemi
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Gianpaolo Carrafiello
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy; Radiology Department, Fondazione IRCCS Cà Granda, Milan, Italy
7
Karrar A, Mabrouk MS, Abdel Wahed M, Sayed AY. Auto diagnostic system for detecting solitary and juxtapleural pulmonary nodules in computed tomography images using machine learning. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07844-8]
Abstract
Lung cancer is one of the most serious cancers in the world, with among the lowest survival rates after diagnosis as it appears on computed tomography (CT) scans. Lung nodules may be isolated from (solitary) or attached to (juxtapleural) other structures such as blood vessels or the pleura. Diagnosis of lung nodules according to their location increases the survival rate as it achieves diagnostic and therapeutic quality assurance. In this paper, a computer-aided diagnosis (CADx) system is proposed to classify solitary and juxtapleural nodules inside the lungs. Two main auto-diagnostic schemes of supervised learning for lung nodule classification are presented. In the first scheme, (bounding box + maximum intensity projection) and (thresholding + K-means clustering) segmentation approaches are proposed, then first- and second-order features are extracted. Fisher score ranking is also used in the first scheme as a feature selection method, with the top five, ten, and fifteen ranks of the feature set selected, and a support vector machine (SVM) classifier is used. In the second scheme, the same segmentation approaches are used with deep convolutional neural networks (DCNN), a successful tool for deep learning classification. Because of the limited data sample and imbalanced data, tenfold cross-validation and random oversampling are used in both schemes. For diagnosis of the solitary nodule, the first scheme with SVM achieved its highest accuracy and sensitivity, 91.4% and 89.3% respectively, with a radial basis function kernel, the (thresholding + K-means clustering) segmentation approach, and the top 15 ranks of the feature set. In the second scheme, DCNN achieved its highest accuracy and sensitivity, 96% and 95% respectively, in detecting the solitary nodule when applying the bounding box and maximum intensity projection segmentation approach. The receiver operating characteristic curve is used to evaluate the classifiers' performance; the maximum AUC of 90.3% is achieved with the DCNN classifier for detecting solitary nodules. This CAD system acts as a second opinion for the radiologist to help in the early diagnosis of lung cancer. The accuracy, sensitivity, and specificity of scheme I (SVM) and scheme II (DCNN) showed promising results in comparison with other published studies.
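The Fisher score ranking used in scheme I has a simple closed form for two classes: each feature is scored by the squared difference of the class means divided by the sum of the class variances, and the top-ranked features feed the classifier. The sketch below runs on an illustrative toy feature matrix, not the study's data or exact implementation.

```python
# Fisher score per feature for a binary classification problem:
# F(f) = (mu_pos - mu_neg)^2 / (var_pos + var_neg)
def fisher_scores(X, y):
    """Return one Fisher score per feature column of X, given 0/1 labels y."""
    scores = []
    for f in range(len(X[0])):
        pos = [row[f] for row, label in zip(X, y) if label == 1]
        neg = [row[f] for row, label in zip(X, y) if label == 0]
        mu_p = sum(pos) / len(pos)
        mu_n = sum(neg) / len(neg)
        var_p = sum((v - mu_p) ** 2 for v in pos) / len(pos)
        var_n = sum((v - mu_n) ** 2 for v in neg) / len(neg)
        denom = var_p + var_n
        scores.append((mu_p - mu_n) ** 2 / denom if denom > 0 else 0.0)
    return scores

# Toy data: feature 0 separates the classes cleanly, feature 1 is noise,
# so feature 0 gets the higher score and would be ranked first.
X = [[1.0, 0.3], [1.1, 0.9], [0.9, 0.5], [5.0, 0.4], [5.2, 0.8], [4.8, 0.6]]
y = [0, 0, 0, 1, 1, 1]
s = fisher_scores(X, y)
ranked = sorted(range(len(s)), key=lambda i: -s[i])
```

Keeping only the top-k ranked features (k = 5, 10, or 15 in the study) then reduces the dimensionality of the input handed to the SVM.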
8
Tao J, Liang C, Yin K, Fang J, Chen B, Wang Z, Lan X, Zhang J. 3D convolutional neural network model from contrast-enhanced CT to predict spread through air spaces in non-small cell lung cancer. Diagn Interv Imaging 2022; 103:535-544. [PMID: 35773100] [DOI: 10.1016/j.diii.2022.06.002]
Abstract
PURPOSE The purpose of this study was to compare the efficacy of five non-invasive models, including a three-dimensional (3D) convolutional neural network (CNN) model, to predict the spread through air spaces (STAS) status of non-small cell lung cancer (NSCLC), and to obtain the best prediction model to provide a basis for clinical surgery planning. MATERIALS AND METHODS A total of 203 patients (112 men, 91 women; mean age, 60 years; age range: 22-80 years) with NSCLC were retrospectively included. Of these, 153 were used for the training cohort and 50 for the validation cohort. According to the image biomarker standardization initiative reference manual, image processing and feature extraction were standardized using PyRadiomics. A logistic regression classifier was used to build the models. Five models (clinicopathological/CT model, conventional radiomics model, computer vision (CV) model, 3D CNN model, and combined model) were constructed to predict STAS in NSCLC. Areas under the receiver operating characteristic curve (AUC) were used to validate the capability of the five models to predict STAS. RESULTS For predicting STAS, the 3D CNN model was superior to the clinicopathological/CT, conventional radiomics, CV, and combined models and achieved satisfactory discrimination performance, with an AUC of 0.93 (95% CI: 0.70-0.82) in the training cohort and 0.80 (95% CI: 0.65-0.86) in the validation cohort. Decision curve analysis indicated that, when the threshold probability was over 10%, the 3D CNN model was beneficial for predicting STAS status compared with either treating all or treating none of the patients within certain ranges of risk threshold. CONCLUSION The 3D CNN model can be used for the preoperative prediction of STAS in patients with NSCLC and was superior to the other four models in predicting patients' risk of developing STAS.
Affiliation(s)
- Junli Tao
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, PR China; Key Laboratory for Biorheological Science and Technology of Ministry of Education (Chongqing University), Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400044, PR China
- Changyu Liang
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, PR China; Key Laboratory for Biorheological Science and Technology of Ministry of Education (Chongqing University), Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400044, PR China
- Ke Yin
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, PR China; Key Laboratory for Biorheological Science and Technology of Ministry of Education (Chongqing University), Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400044, PR China
- Jiayang Fang
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, PR China; Key Laboratory for Biorheological Science and Technology of Ministry of Education (Chongqing University), Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400044, PR China
- Bohui Chen
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, PR China; Key Laboratory for Biorheological Science and Technology of Ministry of Education (Chongqing University), Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400044, PR China
- Zhenyu Wang
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, PR China; Key Laboratory for Biorheological Science and Technology of Ministry of Education (Chongqing University), Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400044, PR China
- Xiaosong Lan
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, PR China; Key Laboratory for Biorheological Science and Technology of Ministry of Education (Chongqing University), Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400044, PR China
- Jiuquan Zhang
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, PR China; Key Laboratory for Biorheological Science and Technology of Ministry of Education (Chongqing University), Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400044, PR China
9
Schwyzer M, Messerli M, Eberhard M, Skawran S, Martini K, Frauenfelder T. Impact of dose reduction and iterative reconstruction algorithm on the detectability of pulmonary nodules by artificial intelligence. Diagn Interv Imaging 2022; 103:273-280. [PMID: 34991993] [DOI: 10.1016/j.diii.2021.12.002]
Abstract
PURPOSE The purpose of this study was to assess whether the performance of automated software for lung nodule detection with computed tomography (CT) is affected by radiation dose and the use of an iterative reconstruction algorithm. MATERIALS AND METHODS A chest phantom (Multipurpose Chest Phantom N1; Kyoto Kagaku Co. Ltd, Kyoto, Japan) with 15 pulmonary nodules was scanned with a total of five CT protocol settings with up to 20-fold dose reduction. All CT examinations were reconstructed with iterative reconstruction algorithms ADMIRE 3 and ADMIRE 5 and were then analyzed for the presence of pulmonary nodules with a fully automated computer-aided detection software system (InferRead CT Lung, Infervision) based on deep neural networks. RESULTS The sensitivity of fully automated pulmonary nodule detection for ground-glass nodules at standard-dose CT was greater (70.0%; 14/20; 95% CI: 51.6-88.4%) than at 10-fold and 20-fold dose reduction (30.0%; 6/20; 95% CI: 0.0-62.5%). There were fewer false-positive findings when ADMIRE 5 reconstruction was used (4.0 ± 2.8 [SD]; range: 2-6) instead of ADMIRE 3 reconstruction (25.0 ± 15.6 [SD]; range: 14-36). There was no difference in the sensitivity of detection of solid and subsolid nodules between standard-dose (100%; 95% CI: 100-100%) and 10- and 20-fold reduced-dose CT (92.5%; 95% CI: 83.8-100.0%). Image noise was significantly greater with ADMIRE 3 (81 ± 2 [SD] [range: 79-84]; 104 ± 3 [SD] [range: 101-107]; 114 ± 5 [SD] [range: 110-119]; 193 ± 10 [SD] [range: 183-203]; 220 ± 16 [SD] [range: 210-238]) than with ADMIRE 5 (44 ± 2 [SD] [range: 42-46]; 60 ± 2 [SD] [range: 57-61]; 66 ± 1 [SD] [range: 65-67]; 103 ± 4 [SD] [range: 98-106]; 110 ± 1 [SD] [range: 109-111]), respectively, in each of the five CT protocols.
CONCLUSION This phantom study suggests that dose reduction and iterative reconstruction settings have an impact on the detectability of pulmonary nodules by artificial intelligence software. We therefore encourage adaptation of dose levels and reconstruction methods prior to widespread implementation of fully automatic nodule detection software for lung cancer screening purposes.
Affiliation(s)
- Moritz Schwyzer
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland; Health Sciences and Technology, Institute of Food, Nutrition and Health, ETH Zurich, 8603 Schwerzenbach, Switzerland; University of Zurich, 8006 Zurich, Switzerland; School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Michael Messerli
- University of Zurich, 8006 Zurich, Switzerland; Department of Nuclear Medicine, University Hospital Zurich, 8091 Zurich, Switzerland
- Matthias Eberhard
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland; University of Zurich, 8006 Zurich, Switzerland
- Stephan Skawran
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland; University of Zurich, 8006 Zurich, Switzerland; Department of Nuclear Medicine, University Hospital Zurich, 8091 Zurich, Switzerland
- Katharina Martini
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland; University of Zurich, 8006 Zurich, Switzerland
- Thomas Frauenfelder
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland; University of Zurich, 8006 Zurich, Switzerland
10
Dupuis M, Delbos L, Veil R, Adamsbaum C. External validation of a commercially available deep learning algorithm for fracture detection in children: Fracture detection with a deep learning algorithm. Diagn Interv Imaging 2021; 103:151-159. [PMID: 34810137] [DOI: 10.1016/j.diii.2021.10.007]
Abstract
PURPOSE The purpose of this study was to conduct an external validation of a fracture assessment deep learning algorithm (Rayvolve®) using digital radiographs from a real-life cohort of children presenting routinely to the emergency room. MATERIALS AND METHODS This retrospective study was conducted on 2634 radiography sets (5865 images) from 2549 children (1459 boys, 1090 girls; mean age, 8.5 ± 4.5 [SD] years; age range: 0-17 years) referred by the pediatric emergency room for trauma. For each set, the findings of the senior radiologists and of the algorithm were recorded: whether one or more fractures were present, the number of fractures, and their location. Using the senior radiologist diagnosis as the standard of reference, the diagnostic performance of the deep learning algorithm was calculated via three approaches: a detection approach (presence/absence of a fracture as a binary variable), an enumeration approach (exact number of fractures detected), and a localization approach (whether the detected fractures were correctly localized). Subgroup analyses were performed according to the presence of a cast or not, age category (0-4 vs. 5-18 years), and anatomical region. RESULTS Regarding the detection approach, the deep learning algorithm yielded 95.7% sensitivity (95% CI: 94.0-96.9), 91.2% specificity (95% CI: 89.8-92.5) and 92.6% accuracy (95% CI: 91.5-93.6). Regarding the enumeration and localization approaches, the deep learning algorithm yielded 94.1% sensitivity (95% CI: 92.1-95.6), 88.8% specificity (95% CI: 87.3-90.2) and 90.4% accuracy (95% CI: 89.2-91.5) for both approaches. Regarding age-related subgroup analyses, the deep learning algorithm yielded greater sensitivity and negative predictive value in the 5-18-years age group than in the 0-4-years age group for the detection approach (P < 0.001 and P = 0.002) and for the enumeration and localization approaches (P = 0.012 and P = 0.028). The high negative predictive value was robust, persisting in all subgroup analyses except for patients with casts (P = 0.001 for the detection approach and P < 0.001 for the enumeration and localization approaches). CONCLUSION The Rayvolve® deep learning algorithm is very reliable for detecting fractures in children, especially in those older than 4 years and without a cast.
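For readers wanting to relate the detection-approach metrics to a 2x2 confusion table: the abstract reports only rates, so the counts below are hypothetical, chosen merely to mirror the reported percentages.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts (NOT from the paper), picked to mirror the rates.
sens, spec, acc = diagnostic_metrics(tp=957, fp=88, tn=912, fn=43)
```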
Affiliation(s)
- Michel Dupuis
- AP-HP, Bicêtre Hospital, Pediatric Imaging Department, 94270 Le Kremlin Bicêtre, France
- Léo Delbos
- AP-HP, Bicêtre Hospital, Epidemiology and Public Health Department, 94270 Le Kremlin Bicêtre, France
- Raphael Veil
- AP-HP, Bicêtre Hospital, Epidemiology and Public Health Department, 94270 Le Kremlin Bicêtre, France
- Catherine Adamsbaum
- AP-HP, Bicêtre Hospital, Pediatric Imaging Department, 94270 Le Kremlin Bicêtre, France; Paris Saclay University, Faculty of Medicine, 94270 Le Kremlin Bicêtre, France
11
Hoang-Thi TN, Vakalopoulou M, Christodoulidis S, Paragios N, Revel MP, Chassagnon G. Deep learning for lung disease segmentation on CT: Which reconstruction kernel should be used? Diagn Interv Imaging 2021; 102:691-695. [PMID: 34686464] [DOI: 10.1016/j.diii.2021.10.001]
Abstract
PURPOSE The purpose of this study was to determine whether a single reconstruction kernel or both high- and low-frequency kernels should be used for training deep learning models for the segmentation of diffuse lung disease on chest computed tomography (CT). MATERIALS AND METHODS Two annotated datasets of COVID-19 pneumonia (323,960 slices) and interstitial lung disease (ILD) (4,284 slices) were used. Annotated CT images were used to train a U-Net architecture to segment disease. All CT slices were reconstructed using both a lung kernel (LK) and a mediastinal kernel (MK). Three trainings, resulting in three models, were compared for each disease: training on LK images only, MK images only, or LK+MK images. Dice similarity scores (DSC) were compared using the Wilcoxon signed-rank test. RESULTS Models trained only on LK images performed better on LK images than on MK images (median DSC = 0.62 [interquartile range (IQR): 0.54, 0.69] vs. 0.60 [IQR: 0.50, 0.70], P < 0.001 for COVID-19 and median DSC = 0.62 [IQR: 0.56, 0.69] vs. 0.50 [IQR: 0.43, 0.57], P < 0.001 for ILD). Similarly, models trained only on MK images performed better on MK images (median DSC = 0.62 [IQR: 0.53, 0.68] vs. 0.54 [IQR: 0.47, 0.63], P < 0.001 for COVID-19 and 0.69 [IQR: 0.61, 0.73] vs. 0.63 [IQR: 0.53, 0.70], P < 0.001 for ILD). Models trained on both kernels performed better than or similarly to those trained on only one kernel. For COVID-19, median DSC was 0.67 (IQR: 0.59, 0.73) when applied on LK images and 0.67 (IQR: 0.60, 0.74) when applied on MK images (P < 0.001 for both). For ILD, median DSC was 0.69 (IQR: 0.63, 0.73) when applied on LK images (P = 0.006) and 0.68 (IQR: 0.62, 0.72) when applied on MK images (P > 0.99). CONCLUSION Reconstruction kernels impact the performance of deep learning-based models for lung disease segmentation. Training on both LK and MK images improves performance.
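For readers unfamiliar with the metric, the Dice similarity score compares a predicted mask with a reference mask; a minimal sketch on flattened binary masks (toy data, not from the study):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * intersection / total if total else 1.0

# Toy 1-D masks standing in for flattened CT segmentation slices.
score = dice([1, 1, 1, 0, 0, 0], [0, 1, 1, 1, 0, 0])
```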
Affiliation(s)
- Trieu-Nghi Hoang-Thi
- Université de Paris, Faculté de Médecine, 75006 Paris, France; Department of Radiology, Hôpital Cochin, AP-HP.centre, 75014 Paris, France
- Maria Vakalopoulou
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, 91190 Gif-sur-Yvette, France
- Stergios Christodoulidis
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, 91190 Gif-sur-Yvette, France
- Nikos Paragios
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, 91190 Gif-sur-Yvette, France; TheraPanacea, 75014 Paris, France
- Marie-Pierre Revel
- Université de Paris, Faculté de Médecine, 75006 Paris, France; Department of Radiology, Hôpital Cochin, AP-HP.centre, 75014 Paris, France
- Guillaume Chassagnon
- Université de Paris, Faculté de Médecine, 75006 Paris, France; Department of Radiology, Hôpital Cochin, AP-HP.centre, 75014 Paris, France
12
Breast nodule classification with two-dimensional ultrasound using Mask-RCNN ensemble aggregation. Diagn Interv Imaging 2021; 102:653-658. [PMID: 34600861] [DOI: 10.1016/j.diii.2021.09.002]
Abstract
PURPOSE The purpose of this study was to create a deep learning algorithm to infer the benign or malignant nature of breast nodules using two-dimensional B-mode ultrasound data initially marked as BI-RADS 3 and 4. MATERIALS AND METHODS An ensemble of mask region-based convolutional neural networks (Mask-RCNN) combining nodule segmentation and classification was trained to explicitly localize the nodule and generate a probability of the nodule being malignant on two-dimensional B-mode ultrasound. These probabilities were aggregated at test time to produce the final results. Resulting inferences were assessed using the area under the curve (AUC). RESULTS A total of 460 ultrasound images of breast nodules classified as BI-RADS 3 or 4 were included. There were 295 benign and 165 malignant breast nodules used for training and validation, and another 137 breast nodule images used for testing. As part of the challenge, the distribution of benign and malignant breast nodules in the test database remained unknown. The obtained AUC was 0.69 (95% CI: 0.57-0.82) on the training set and 0.67 on the test set. CONCLUSION The proposed deep learning solution helps classify benign and malignant breast nodules based solely on two-dimensional ultrasound images initially marked as BI-RADS 3 and 4.
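The abstract says per-model malignancy probabilities are aggregated at test time but does not specify the rule. Simple averaging, sketched below, is one common choice; treat it as an assumption, not the authors' method:

```python
from statistics import mean

def aggregate_malignancy(model_probs, threshold=0.5):
    """Average the malignancy probabilities emitted by the ensemble
    members and binarize at a decision threshold."""
    p = mean(model_probs)
    return p, p >= threshold

# Hypothetical outputs of three Mask-RCNN ensemble members for one nodule.
prob, is_malignant = aggregate_malignancy([0.62, 0.55, 0.71])
```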
13
MTU-COVNet: A hybrid methodology for diagnosing the COVID-19 pneumonia with optimized features from multi-net. Clin Imaging 2021; 81:1-8. [PMID: 34592696] [PMCID: PMC8473071] [DOI: 10.1016/j.clinimag.2021.09.007]
Abstract
PURPOSE The aim of this study was to establish and evaluate a fully automatic deep learning system for the diagnosis of COVID-19 using thoracic computed tomography (CT). MATERIALS AND METHODS In this retrospective study, a novel hybrid model (MTU-COVNet) was developed to extract visual features from volumetric thoracic CT scans for the detection of COVID-19. The collected dataset consisted of 3210 CT scans from 953 patients. Of the 3210 scans in the final dataset, 1327 (41%) were obtained from the COVID-19 group, 929 (29%) from the community-acquired pneumonia (CAP) group, and 954 (30%) from the normal CT group. Diagnostic performance was assessed with the area under the receiver operating characteristic (ROC) curve, sensitivity, and specificity. RESULTS The proposed approach with the optimized features from concatenated layers reached an overall accuracy of 97.7% for the CT-MTU dataset. The remaining performance metrics (specificity, sensitivity, precision, F1 score, and Matthews correlation coefficient) were 98.8%, 97.6%, 97.8%, 97.7%, and 96.5%, respectively. The model showed high diagnostic performance in detecting COVID-19 pneumonia (specificity: 98.0%; sensitivity: 98.2%) and CAP (specificity: 99.1%; sensitivity: 97.1%). The areas under the ROC curves for COVID-19 and CAP were 0.997 and 0.996, respectively. CONCLUSION A deep learning-based AI system built on CT imaging can detect COVID-19 pneumonia with high diagnostic efficiency and distinguish it from CAP and normal CT. AI applications can have beneficial effects in the fight against COVID-19.
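Of the metrics listed, the Matthews correlation coefficient is the least familiar; for a binary split it is computed as below (the counts are invented for illustration, not taken from the study):

```python
import math

def matthews_cc(tp, fp, tn, fn):
    """Matthews correlation coefficient for a binary confusion table:
    (tp*tn - fp*fn) / sqrt((tp+fp)(tp+fn)(tn+fp)(tn+fn))."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts for a single one-vs-rest split (illustration only).
mcc = matthews_cc(tp=98, fp=2, tn=96, fn=4)
```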
14
Greffier J, Frandon J, Si-Mohamed S, Dabli D, Hamard A, Belaouni A, Akessoul P, Besse F, Guiu B, Beregi JP. Comparison of two deep learning image reconstruction algorithms in chest CT images: A task-based image quality assessment on phantom data. Diagn Interv Imaging 2021; 103:21-30. [PMID: 34493475] [DOI: 10.1016/j.diii.2021.08.001]
Abstract
PURPOSE The purpose of this study was to compare the effect of two deep learning image reconstruction (DLR) algorithms in chest computed tomography (CT) for different clinical indications. MATERIALS AND METHODS Acquisitions on image quality and anthropomorphic phantoms were performed at six dose levels (CTDIvol: 10/7.5/5/2.5/1/0.5 mGy) on two CT scanners equipped with two different DLR algorithms (TrueFidelity™ and AiCE). Raw data were reconstructed using filtered back-projection (FBP) and the lowest/intermediate/highest DLR levels (L-DLR/M-DLR/H-DLR) of each algorithm. The noise power spectrum (NPS), task-based transfer function (TTF) and detectability index (d') were computed: d' modelled the detection of a soft-tissue mediastinal nodule, a ground-glass opacity, or a high-contrast pulmonary lesion. Subjective image quality of anthropomorphic phantom images was analyzed by two radiologists. RESULTS For the L-DLR/M-DLR levels, noise magnitude was lower with TrueFidelity™ than with AiCE from 2.5 to 10 mGy. For H-DLR, noise magnitude was lower with AiCE. For L-DLR and M-DLR, the average NPS spatial frequency (fav) values were greater for AiCE, except at 0.5 mGy. For H-DLR levels, fav was greater for TrueFidelity™ than for AiCE. TTF50% values were greater with AiCE than with TrueFidelity™ for the air insert, and lower for the polyethylene insert. From 2.5 to 10 mGy, d' was greater for AiCE than for TrueFidelity™ at H-DLR for all lesions, but similar at L-DLR and M-DLR. Image quality was rated clinically appropriate for all levels of both algorithms for doses from 2.5 to 10 mGy, except for the L-DLR of AiCE. CONCLUSION DLR algorithms reduce image noise and improve lesion detectability. Their operations and properties impacted both noise texture and spatial resolution.
Affiliation(s)
- Joël Greffier
- Department of Medical Imaging, CHU Nimes, Univ Montpellier, Medical Imaging Group Nimes, EA 2992, 30029 Nîmes, France
- Julien Frandon
- Department of Medical Imaging, CHU Nimes, Univ Montpellier, Medical Imaging Group Nimes, EA 2992, 30029 Nîmes, France
- Salim Si-Mohamed
- Department of Radiology, Hospices Civils de Lyon, 69500 Lyon, France
- Djamel Dabli
- Department of Medical Imaging, CHU Nimes, Univ Montpellier, Medical Imaging Group Nimes, EA 2992, 30029 Nîmes, France
- Aymeric Hamard
- Department of Medical Imaging, CHU Nimes, Univ Montpellier, Medical Imaging Group Nimes, EA 2992, 30029 Nîmes, France
- Asmaa Belaouni
- Department of Medical Imaging, CHU Nimes, Univ Montpellier, Medical Imaging Group Nimes, EA 2992, 30029 Nîmes, France
- Philippe Akessoul
- Department of Medical Imaging, CHU Nimes, Univ Montpellier, Medical Imaging Group Nimes, EA 2992, 30029 Nîmes, France
- Francis Besse
- Department of Radiology, Centre Cardiologique Nord, 93200 Saint Denis, France
- Boris Guiu
- Department of Radiology, Saint-Eloi University Hospital, 34295 Montpellier, France
- Jean-Paul Beregi
- Department of Medical Imaging, CHU Nimes, Univ Montpellier, Medical Imaging Group Nimes, EA 2992, 30029 Nîmes, France
15
Lassau N, Bousaid I, Chouzenoux E, Verdon A, Balleyguier C, Bidault F, Mousseaux E, Harguem-Zayani S, Gaillandre L, Bensalah Z, Doutriaux-Dumoulin I, Monroc M, Haquin A, Ceugnart L, Bachelle F, Charlot M, Thomassin-Naggara I, Fourquet T, Dapvril H, Orabona J, Chamming's F, El Haik M, Zhang-Yin J, Guillot MS, Ohana M, Caramella T, Diascorn Y, Airaud JY, Cuingnet P, Gencer U, Lawrance L, Luciani A, Cotten A, Meder JF. Three artificial intelligence data challenges based on CT and ultrasound. Diagn Interv Imaging 2021; 102:669-674. [PMID: 34312111] [DOI: 10.1016/j.diii.2021.06.005]
Abstract
PURPOSE The 2020 edition of these data challenges was organized by the French Society of Radiology (SFR) from September 28 to September 30, 2020. The goals were to propose innovative artificial intelligence solutions for currently relevant problems in radiology and to build a large database of multimodal ultrasound and computed tomography (CT) medical images on these subjects from several French radiology centers. MATERIALS AND METHODS This year, the aim was to create data challenge objectives in line with the clinical routine of radiologists, with less preprocessing of data and annotation, leaving a large part of the preprocessing task to the participating teams. The objectives were proposed by the different organizations depending on their core areas of expertise. A dedicated platform was used to upload the medical image data and to automatically anonymize it. RESULTS Three challenges were proposed: classification of benign versus malignant breast nodules on ultrasound examinations, detection and contouring of pathological neck lymph nodes on cervical CT examinations, and classification of the calcium score from coronary calcifications on thoracic CT examinations. A total of 2076 medical examinations from 18 different centers were included in the database for the three challenges over three months; 12% of examinations were excluded. The 39 participants were divided into six multidisciplinary teams; the coronary calcification score challenge was solved with a concordance index > 95%, and the other two with scores of 67% (breast nodule classification) and 63% (neck lymph node segmentation).
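The calcium-score challenge was judged by a concordance index, i.e. the fraction of comparable pairs whose ordering the predictions reproduce. A minimal sketch (the values below are invented for illustration; the challenge's exact tie-handling rule is not stated in the abstract):

```python
def concordance_index(truth, pred):
    """Fraction of comparable pairs (different true values) whose
    ordering is reproduced by the predictions; prediction ties count 0.5."""
    concordant, comparable = 0.0, 0
    for i in range(len(truth)):
        for j in range(i + 1, len(truth)):
            if truth[i] == truth[j]:
                continue  # tied truth values are not comparable
            comparable += 1
            if (pred[i] - pred[j]) * (truth[i] - truth[j]) > 0:
                concordant += 1
            elif pred[i] == pred[j]:
                concordant += 0.5
    return concordant / comparable

# Invented calcium-score-style targets and predictions.
c = concordance_index([0, 1, 2, 3], [0.1, 0.9, 0.8, 2.0])
```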
Affiliation(s)
- Nathalie Lassau
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- Imad Bousaid
- Direction de la Transformation Numérique et des Systèmes d'Information, Institut Gustave Roussy, 94800 Villejuif, France
- Antoine Verdon
- Direction de la Transformation Numérique et des Systèmes d'Information, Institut Gustave Roussy, 94800 Villejuif, France
- Corinne Balleyguier
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- François Bidault
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- Elie Mousseaux
- Unité Fonctionnelle d'Imagerie Cardiovasculaire Non Invasive, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
- Sana Harguem-Zayani
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- Loic Gaillandre
- Centre Libéral d'Imagerie Médicale Agglomération Lille, 59800 Lille, France
- Zoubir Bensalah
- Department of Radiology, Centre Hospitalier St Jean, 66000 Perpignan, France
- Michèle Monroc
- Department of Radiology, Clinique Saint Antoine, 76230 Bois-Guillaume, France
- Audrey Haquin
- Department of Radiology, Hôpital de la Croix-Rousse - HCL, 69004 Lyon, France
- Luc Ceugnart
- Department of Radiology, Centre Oscar Lambret, 59000 Lille, France
- Mathilde Charlot
- Department of Radiology, Hôpital Lyon Sud - HCL, 69310 Pierre-Bénite, France
- Tiphaine Fourquet
- Department of Radiology, Centre Hospitalier Universitaire de Lille, 59000 Lille, France
- Héloise Dapvril
- Service d'Imagerie de la Femme, Centre Hospitalier de Valenciennes, 59300 Valenciennes, France
- Joseph Orabona
- Department of Radiology, Centre Hospitalier de Bastia, 20600 Bastia, France
- Mickael El Haik
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- Jules Zhang-Yin
- Department of Radiology, Hôpital Tenon, AP-HP, 75020 Paris, France
- Marc-Samir Guillot
- Unité Fonctionnelle d'Imagerie Cardiovasculaire Non Invasive, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
- Mickaël Ohana
- Department of Radiology, Centre Hospitalier Universitaire de Strasbourg, 67200 Strasbourg, France
- Thomas Caramella
- Department of Radiology, Institut Arnault Tzanck, 06700 Saint-Laurent du Var, France
- Yann Diascorn
- Department of Radiology, Institut Arnault Tzanck, 06700 Saint-Laurent du Var, France
- Philippe Cuingnet
- Department of Radiology, Centre Hospitalier de Douai, 59507 Douai, France
- Umit Gencer
- Unité Fonctionnelle d'Imagerie Cardiovasculaire Non Invasive, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
- Littisha Lawrance
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France
- Alain Luciani
- Collège des Enseignants de Radiologie de France, 75013 Paris, France; Department of Radiology, Centre Hospitalier Henri Mondor, 94000 Créteil, France
- Anne Cotten
- Musculoskeletal Imaging Department, Lille Regional University Hospital, 59000 Lille, France
- Jean-François Meder
- Department of Neuroradiology, Centre Hospitalier Sainte-Anne, 75014 Paris, France; Université de Paris, Faculté de Médecine, 75006 Paris, France
16
Alelyani M, Alamri S, Alqahtani MS, Musa A, Almater H, Alqahtani N, Alshahrani F, Alelyani S. Radiology Community Attitude in Saudi Arabia about the Applications of Artificial Intelligence in Radiology. Healthcare (Basel) 2021; 9:834. [PMID: 34356212] [PMCID: PMC8307220] [DOI: 10.3390/healthcare9070834]
Abstract
Artificial intelligence (AI) is a broad umbrella term that encompasses the theory and development of computer systems able to perform tasks normally requiring human intelligence. The aim of this study was to assess the attitude of the radiology community in Saudi Arabia toward the applications of AI. Methods: Data for this study were collected using electronic questionnaires in 2019 and 2020. The study included a total of 714 participants. Data analysis was performed using SPSS Statistics (version 25). Results: The majority of the participants (61.2%) had read or heard about the role of AI in radiology. We also found that radiologists' responses differed statistically from those of all other specialists, and that radiologists tended to read more about AI. In addition, 82% of the participants thought that AI must be included in the curriculum of medical and allied health colleges, and 86% agreed that AI would be essential in the future. Even though human-machine interaction was considered to be one of the most important skills of the future, 89% of the participants thought that AI would never replace radiologists. Conclusion: Because AI plays a vital role in radiology, it is important to ensure that radiologists and radiographers have at least a minimum understanding of the technology. Our findings show an acceptable level of knowledge regarding AI technology and support including AI applications in the curricula of medical and health sciences colleges.
Affiliation(s)
- Magbool Alelyani
- Department Radiological Sciences, King Khalid University, Abha 61421, Saudi Arabia
- Sultan Alamri
- Department Radiological Sciences, Taif University, Taif 21944, Saudi Arabia
- Mohammed S. Alqahtani
- Department Radiological Sciences, King Khalid University, Abha 61421, Saudi Arabia
- Alamin Musa
- Department Radiological Sciences, King Khalid University, Abha 61421, Saudi Arabia
- Hajar Almater
- Department Radiological Sciences, King Khalid University, Abha 61421, Saudi Arabia
- Nada Alqahtani
- Department Radiological Sciences, King Khalid University, Abha 61421, Saudi Arabia
- Fay Alshahrani
- Department Radiological Sciences, King Khalid University, Abha 61421, Saudi Arabia
- Salem Alelyani
- Center for Artificial Intelligence (CAI), King Khalid University, Abha 61421, Saudi Arabia; College of Computer Science, King Khalid University, Abha 61421, Saudi Arabia
17
Iguchi T, Hiraki T, Matsui Y, Tomita K, Uka M, Tanaka T, Munetomo K, Gobara H, Kanazawa S. CT-guided biopsy of lung nodules with pleural contact: Comparison of two puncture routes. Diagn Interv Imaging 2021; 102:539-544. [PMID: 34099434] [DOI: 10.1016/j.diii.2021.05.005]
Abstract
PURPOSE The purpose of this study was to retrospectively compare two puncture routes (transpleural vs. transpulmonary) for computed tomography (CT) fluoroscopy-guided cutting needle biopsy of lung nodules with pleural contact. PATIENTS AND METHODS A total of 102 patients (72 men; mean age, 71.1 ± 9.5 [SD] years) were included, and 102 biopsies of 102 lung nodules (mean size, 16.7 ± 5.9 [SD] mm; range: 6.0-29.4 mm; mean length of pleural contact, 10.1 ± 4.2 [SD] mm; range: 2.8-19.6 mm) were analyzed. All procedures were classified as biopsies via the direct transpleural route or the transpulmonary route. Patient-, lesion-, and biopsy-related variables, diagnostic yields, and the incidence of complications were compared between the two routes. RESULTS Biopsy was performed via the direct transpleural route in 59 patients (57.8%) and via the transpulmonary route in 43 (42.2%). In the transpulmonary route group, the mean length of the intrapulmonary pathway was 17.7 ± 9.4 [SD] mm (range: 4.1-47.6 mm; P < 0.001) and an introducer needle trajectory angle of <45° was significantly more frequent (8.5% [5/59] vs. 60.5% [26/43]; P < 0.001). There was no significant difference in diagnostic accuracy between the direct transpleural and transpulmonary routes (93.2% [55/59] vs. 90.7% [39/43]; P = 0.718). The frequencies of all complications (64.4% [38/59] vs. 97.7% [42/43]; P < 0.001), pneumothorax (33.9% [20/59] vs. 65.1% [28/43]; P = 0.003), pneumothorax with chest tube placement (3.4% [2/59] vs. 18.6% [8/43]; P = 0.016), and pulmonary hemorrhage (47.5% [28/59] vs. 76.7% [33/43]; P = 0.004) were significantly lower in the direct transpleural group. CONCLUSION The direct transpleural route is recommended for CT fluoroscopy-guided biopsy of lung nodules with pleural contact because it is safer and yields diagnostic accuracy similar to that of the transpulmonary route.
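The complication-rate comparisons above can be checked with a 2x2 test. The abstract does not state which test was used; the sketch below uses an uncorrected Pearson chi-square, so its p-value may differ slightly from the reported P = 0.003.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for the 2x2 table [[a, b], [c, d]], with its two-sided p-value."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # survival function of chi-square(1 df)
    return chi2, p

# Pneumothorax: 20/59 transpleural vs. 28/43 transpulmonary (from the abstract).
chi2, p = chi2_2x2(20, 39, 28, 15)
```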
Affiliation(s)
- Toshihiro Iguchi
- Department of Radiology, Okayama University Medical School, 2-5-1 Shikata-cho kita-ku, 700-8558 Okayama, Japan
- Takao Hiraki
- Department of Radiology, Okayama University Medical School, 2-5-1 Shikata-cho kita-ku, 700-8558 Okayama, Japan
- Yusuke Matsui
- Department of Radiology, Okayama University Medical School, 2-5-1 Shikata-cho kita-ku, 700-8558 Okayama, Japan
- Koji Tomita
- Department of Radiology, Okayama University Medical School, 2-5-1 Shikata-cho kita-ku, 700-8558 Okayama, Japan
- Mayu Uka
- Department of Radiology, Okayama University Medical School, 2-5-1 Shikata-cho kita-ku, 700-8558 Okayama, Japan
- Takashi Tanaka
- Department of Radiology, Okayama University Medical School, 2-5-1 Shikata-cho kita-ku, 700-8558 Okayama, Japan
- Kazuaki Munetomo
- Department of Radiology, Okayama University Medical School, 2-5-1 Shikata-cho kita-ku, 700-8558 Okayama, Japan
- Hideo Gobara
- Department of Radiology, Okayama University Medical School, 2-5-1 Shikata-cho kita-ku, 700-8558 Okayama, Japan
- Susumu Kanazawa
- Department of Radiology, Okayama University Medical School, 2-5-1 Shikata-cho kita-ku, 700-8558 Okayama, Japan
18
Courot A, Cabrera DLF, Gogin N, Gaillandre L, Rico G, Zhang-Yin J, Elhaik M, Bidault F, Bousaid I, Lassau N. Automatic cervical lymphadenopathy segmentation from CT data using deep learning. Diagn Interv Imaging 2021; 102:675-681. [PMID: 34023232] [DOI: 10.1016/j.diii.2021.04.009]
Abstract
PURPOSE The purpose of this study was to develop a fast and automatic algorithm to detect and segment lymphadenopathy on head and neck computed tomography (CT) examinations. MATERIALS AND METHODS An ensemble of three convolutional neural networks (CNNs) based on a U-Net architecture was trained to segment the lymphadenopathies in a fully supervised framework. The resulting predictions were assessed using the Dice similarity coefficient (DSC) on examinations presenting one or more adenopathies. On examinations without adenopathies, the score was given by the formula M/(M+A), where M was the mean adenopathy volume per patient and A the volume segmented by the algorithm. The networks were trained on 117 annotated CT acquisitions. RESULTS The test set included 150 additional CT acquisitions unseen during training. The performance on the test set yielded a mean score of 0.63. CONCLUSION Despite limited available data and partial annotations, our CNN-based approach achieved promising results in the task of cervical lymphadenopathy segmentation. It has the potential to bring precise quantification to the clinical workflow and to assist the clinician in the detection task.
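The M/(M+A) formula quoted in the abstract rewards a small false-positive volume A on adenopathy-free examinations, relative to the cohort's mean adenopathy volume M; the volumes below are illustrative:

```python
def empty_exam_score(mean_adenopathy_volume, segmented_volume):
    """Score M / (M + A) for examinations without adenopathies:
    M is the mean adenopathy volume per patient, A the volume the
    algorithm (wrongly) segmented on the empty examination."""
    m, a = mean_adenopathy_volume, segmented_volume
    return m / (m + a)

perfect = empty_exam_score(10.0, 0.0)   # nothing segmented -> best score
halved = empty_exam_score(10.0, 10.0)   # false-positive volume equal to M
```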
Affiliation(s)
- Diana L F Cabrera
- General Electric Healthcare, 78530 Buc, France; Université de Reims Champagne Ardenne, CReSTIC EA 3804, 51097 Reims, France
- Loic Gaillandre
- Centre Libéral d'Imagerie Médicale de l'Agglomération Lilloise, 59000 Lille, France
- François Bidault
- Department of Radiology, Institut Gustave Roussy, 94800 Villejuif, France; Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France
- Imad Bousaid
- Institut Gustave Roussy, 94800 Villejuif, France
- Nathalie Lassau
- Department of Radiology, Institut Gustave Roussy, 94800 Villejuif, France; Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France
19
Khorrami M, Bera K, Thawani R, Rajiah P, Gupta A, Fu P, Linden P, Pennell N, Jacono F, Gilkeson RC, Velcheti V, Madabhushi A. Distinguishing granulomas from adenocarcinomas by integrating stable and discriminating radiomic features on non-contrast computed tomography scans. Eur J Cancer 2021; 148:146-158. [PMID: 33743483] [DOI: 10.1016/j.ejca.2021.02.008]
Abstract
OBJECTIVE To identify stable and discriminating radiomic features on non-contrast CT scans in order to develop more generalisable radiomic classifiers for distinguishing granulomas from adenocarcinomas. METHODS In total, 412 patients with adenocarcinomas and granulomas from three institutions were retrospectively included. Segmentations of the lung nodules were performed manually by an expert radiologist in a 2D axial view. Radiomic features were extracted from intra- and perinodular regions. A total of 145 patients were used as the training set (Str), whereas 205 patients were used as test set I (Ste1) and 62 patients were used as independent test set II (Ste2). To mitigate the variation of CT acquisition parameters, we defined 'stable' radiomic features as those whose expression remains relatively unchanged between different sites, as assessed using a Wilcoxon rank-sum test. These stable features were used to develop more generalisable radiomic classifiers that were more resilient to variations in lung CT scans. Features were ranked based on two criteria: first on discriminability alone (i.e. maximising AUC), and subsequently on maximising both feature stability and discriminability. Different machine-learning classifiers (linear discriminant analysis, quadratic discriminant analysis, support vector machine and random forest) were trained with features selected using the two criteria and then compared on the two independent test sets for distinguishing granulomas from adenocarcinomas, in terms of area under the receiver operating characteristic curve (AUC). RESULTS In the test sets, classifiers constructed using the criterion that maximised feature stability and discriminability simultaneously achieved higher AUCs than those using the discriminability-alone criterion (Ste1 [n = 205]: maximum AUCs of 0.85 versus 0.80; p-value = 0.047, and Ste2 [n = 62]: maximum AUCs of 0.87 versus 0.79; p-value = 0.021). These differences held for features extracted from scans with <3 mm slice thickness (AUC = 0.88 versus 0.80; p-value = 0.039, n = 100) and for the ≥3 mm cases (AUC = 0.81 versus 0.76; p-value = 0.034, n = 105). In both experiments, shape and peritumoural texture features had higher stability than intratumoural texture features. CONCLUSIONS Our study suggests that explicitly accounting for both stability and discriminability results in more generalisable radiomic classifiers for distinguishing adenocarcinomas from granulomas on non-contrast CT scans. Our results also showed that peritumoural texture and shape features were less affected by scanner parameters than intratumoural texture features; however, they were also less discriminating than intratumoural features.
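The stability-plus-discriminability idea in this abstract can be illustrated with a minimal sketch: stability is assessed with a standardized Wilcoxon rank-sum statistic comparing a feature's values across two sites (|z| near 0 means the distribution barely shifts), and discriminability with the Mann-Whitney formulation of AUC. All names are illustrative assumptions; the study itself used several classifiers and p-value thresholds not reproduced here, and this sketch handles ties only crudely.

```python
import math


def rank_sum_z(x, y):
    """Standardized Wilcoxon rank-sum statistic comparing a feature's values
    between two sites. Small |z| ~ 'stable' feature (expression relatively
    unchanged across scanners/sites). Ties get sequential ranks for brevity."""
    combined = sorted((v, i) for i, v in enumerate(list(x) + list(y)))
    ranks = {idx: r for r, (_, idx) in enumerate(combined, start=1)}
    n1, n2 = len(x), len(y)
    w = sum(ranks[i] for i in range(n1))          # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2                   # expected rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (w - mu) / sigma


def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    probability that a random positive outscores a random negative."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

A feature-ranking step in the spirit of the paper would then keep only features whose cross-site |z| falls below a stability cutoff and sort the survivors by AUC, rather than ranking by AUC alone.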
Affiliation(s)
- Mohammadhadi Khorrami
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Kaustav Bera
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Rajat Thawani
- OHSU Knight Cancer Institute, Oregon Health & Science University, Oregon, USA
- Prabhakar Rajiah
- Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
- Amit Gupta
- Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Pingfu Fu
- Department of Population and Quantitative Health Sciences, CWRU, Cleveland, OH, USA
- Philip Linden
- Thoracic and Esophageal Surgery Department, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Nathan Pennell
- Department of Hematology and Medical Oncology, Cleveland Clinic, Cleveland, OH, USA
- Frank Jacono
- Pulmonary Section, Cleveland Veterans Affairs Medical Center, Cleveland, OH, USA
- Robert C Gilkeson
- Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
20
Lederlin M, de Margerie-Mellon C, Boussouar S, Bommart S, Caramella C. Lung cancer screening: French radiologists should prepare for it. Diagn Interv Imaging 2021; 102:197-198. [PMID: 33642220 DOI: 10.1016/j.diii.2021.02.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2021] [Accepted: 02/03/2021] [Indexed: 11/17/2022]
Affiliation(s)
- Mathieu Lederlin
- Department of Radiology, University Hospital of Rennes, University of Rennes, 35033 Rennes, France.
- Constance de Margerie-Mellon
- Department of Radiology, Hôpital Saint-Louis, Assistance Publique-Hôpitaux de Paris, 75010 Paris, France; Université de Paris, 75010 Paris, France
- Samia Boussouar
- Department of Radiology, Hôpital de la Pitié-Salpêtrière, Assistance Publique-Hôpitaux de Paris, Sorbonne University, 75651 Paris, France
- Sébastien Bommart
- Department of Medical Imaging, University Hospital of Montpellier, University of Montpellier, 34295 Montpellier, France
- Caroline Caramella
- Department of Radiology, Hôpital Marie Lannelongue, Institut d'Oncologie Thoracique, Paris-Saclay University, 92350 Le Plessis-Robinson, France
21
Chassagnon G, Dohan A. Artificial intelligence: from challenges to clinical implementation. Diagn Interv Imaging 2020; 101:763-764. [DOI: 10.1016/j.diii.2020.10.007] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
22
Hamamoto R, Suvarna K, Yamada M, Kobayashi K, Shinkai N, Miyake M, Takahashi M, Jinnai S, Shimoyama R, Sakai A, Takasawa K, Bolatkan A, Shozu K, Dozen A, Machino H, Takahashi S, Asada K, Komatsu M, Sese J, Kaneko S. Application of Artificial Intelligence Technology in Oncology: Towards the Establishment of Precision Medicine. Cancers (Basel) 2020; 12:E3532. [PMID: 33256107 PMCID: PMC7760590 DOI: 10.3390/cancers12123532] [Citation(s) in RCA: 64] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2020] [Revised: 11/21/2020] [Accepted: 11/24/2020] [Indexed: 02/07/2023] Open
Abstract
In recent years, advances in artificial intelligence (AI) technology have led to the rapid clinical implementation of devices with AI technology in the medical field. More than 60 AI-equipped medical devices have already been approved by the Food and Drug Administration (FDA) in the United States, and the active introduction of AI technology is considered to be an inevitable trend in the future of medicine. In the field of oncology, clinical applications of medical devices using AI technology are already underway, mainly in radiology, and AI technology is expected to be positioned as an important core technology. In particular, "precision medicine," a medical treatment that selects the most appropriate treatment for each patient based on a vast amount of medical data such as genome information, has become a worldwide trend; AI technology is expected to be utilized in the process of extracting truly useful information from a large amount of medical data and applying it to diagnosis and treatment. In this review, we would like to introduce the history of AI technology and the current state of medical AI, especially in the oncology field, as well as discuss the possibilities and challenges of AI technology in the medical field.
Affiliation(s)
- Ryuji Hamamoto
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Kruthi Suvarna
- Indian Institute of Technology Bombay, Powai, Mumbai 400 076, India
- Masayoshi Yamada
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Department of Endoscopy, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kazuma Kobayashi
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Norio Shinkai
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Mototaka Miyake
- Department of Diagnostic Radiology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Masamichi Takahashi
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Department of Neurosurgery and Neuro-Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Shunichi Jinnai
- Department of Dermatologic Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryo Shimoyama
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Akira Sakai
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Ken Takasawa
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Amina Bolatkan
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Kanto Shozu
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ai Dozen
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Hidenori Machino
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Satoshi Takahashi
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Ken Asada
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Masaaki Komatsu
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Jun Sese
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Humanome Lab, 2-4-10 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Syuzo Kaneko
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan