1
Nam Y, Kim SY, Kim KA, Kwon E, Lee YH, Jang J, Lee MK, Kim J, Choi Y. Development and Validation of Deep Learning-Based Automated Detection of Cervical Lymphadenopathy in Patients with Lymphoma for Treatment Response Assessment: A Bi-institutional Feasibility Study. Journal of Imaging Informatics in Medicine 2024; 37:734-743. PMID: 38316667; PMCID: PMC11031526; DOI: 10.1007/s10278-024-00966-6.
Abstract
The purpose of this study was to train and evaluate a deep learning (DL) model for the accurate detection and segmentation of abnormal cervical lymph nodes (LN) on head and neck contrast-enhanced CT scans in patients diagnosed with lymphoma, and to evaluate the clinical utility of the DL model in response assessment. This retrospective study included patients who underwent CT for abnormal cervical LN and lymphoma assessment between January 2021 and July 2022. Patients were grouped into the development (n = 76), internal test 1 (n = 27), internal test 2 (n = 87), and external test (n = 26) cohorts. A 3D SegResNet model was trained on the CT images. The volume change rates of cervical LN across longitudinal CT scans were compared among patients with different treatment outcomes (stable, response, and progression). Dice similarity coefficient (DSC) and the Bland-Altman plot were used to assess the model's segmentation performance and reliability, respectively. No significant differences in baseline clinical characteristics were found across cohorts (age, P = 0.55; sex, P = 0.13; diagnoses, P = 0.06). The mean DSC was 0.39 ± 0.2, with a precision and recall of 60.9% and 57.0%, respectively. Most LN volumes were within the limits of agreement on the Bland-Altman plot. The volume change rates among the three groups differed significantly (progression (n = 74), 342.2%; response (n = 8), -79.2%; stable (n = 5), -8.1%; all P < 0.01). Our proposed DL segmentation model showed modest performance in quantifying the cervical LN burden on CT in patients with lymphoma. Longitudinal changes in cervical LN volume, as predicted by the DL model, were useful for treatment response assessment.
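The two quantities this study reports, the Dice similarity coefficient and the lymph-node volume change rate, can be computed directly from binary segmentation masks. A minimal illustrative sketch (the function names and toy masks below are not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def volume_change_rate(baseline_voxels: int, followup_voxels: int) -> float:
    """Percentage change in segmented lymph-node volume across scans."""
    return 100.0 * (followup_voxels - baseline_voxels) / baseline_voxels

# Toy 3D masks: two 8-voxel cubes shifted by one voxel along z.
truth = np.zeros((4, 4, 4), dtype=bool); truth[0:2, 0:2, 0:2] = True
pred = np.zeros((4, 4, 4), dtype=bool); pred[0:2, 0:2, 1:3] = True
print(round(dice_coefficient(pred, truth), 2))  # overlap = 4 voxels -> DSC 0.5
print(volume_change_rate(100, 442))             # +342.0 %, a progression-like change
```

A change rate of +342% on this toy input mirrors the magnitude reported for the progression group.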
Affiliation(s)
- Yoonho Nam
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin-Si, Gyeonggi-do, Republic of Korea
- Su-Youn Kim
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin-Si, Gyeonggi-do, Republic of Korea
- Kyu-Ah Kim
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin-Si, Gyeonggi-do, Republic of Korea
- Euna Kwon
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin-Si, Gyeonggi-do, Republic of Korea
- Yoo Hyun Lee
- College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Jinhee Jang
- Department of Radiology, College of Medicine, Seoul St. Mary's Hospital, The Catholic University of Korea, Seoul, Republic of Korea
- Min Kyoung Lee
- Department of Radiology, College of Medicine, Yeouido St. Mary's Hospital, The Catholic University of Korea, Seoul, Republic of Korea
- Jiwoong Kim
- Department of Mathematics and Statistics, University of South Florida, Tampa, FL, USA
- Yangsean Choi
- Department of Radiology, College of Medicine, Seoul St. Mary's Hospital, The Catholic University of Korea, Seoul, Republic of Korea.
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Centre, 43 Olympic-Ro 88, Songpa-Gu, Seoul, 05505, Republic of Korea.
2
Djahnine A, Lazarus C, Lederlin M, Mulé S, Wiemker R, Si-Mohamed S, Jupin-Delevaux E, Nempont O, Skandarani Y, De Craene M, Goubalan S, Raynaud C, Belkouchi Y, Afia AB, Fabre C, Ferretti G, De Margerie C, Berge P, Liberge R, Elbaz N, Blain M, Brillet PY, Chassagnon G, Cadour F, Caramella C, Hajjam ME, Boussouar S, Hadchiti J, Fablet X, Khalil A, Talbot H, Luciani A, Lassau N, Boussel L. Detection and severity quantification of pulmonary embolism with 3D CT data using an automated deep learning-based artificial solution. Diagn Interv Imaging 2024; 105:97-103. PMID: 38261553; DOI: 10.1016/j.diii.2023.09.006.
Abstract
PURPOSE The purpose of this study was to propose a deep learning-based approach to detect pulmonary embolism (PE) and quantify its severity using the Qanadli score and the right-to-left ventricle diameter (RV/LV) ratio on three-dimensional (3D) computed tomography pulmonary angiography (CTPA) examinations with limited annotations. MATERIALS AND METHODS Using a database of 3D CTPA examinations of 1268 patients with image-level annotations, and two other public datasets of CTPA examinations from 91 (CAD-PE) and 35 (FUME-PE) patients with pixel-level annotations, a pipeline was followed consisting of (i) detecting blood clots, (ii) performing PE-positive versus PE-negative classification, (iii) estimating the Qanadli score, and (iv) predicting the RV/LV diameter ratio. The method was evaluated on a test set including 378 patients. The performance of PE classification and severity quantification was quantitatively assessed using an area under the curve (AUC) analysis for PE classification and a coefficient of determination (R²) for the Qanadli score and the RV/LV diameter ratio. RESULTS Quantitative evaluation led to an overall AUC of 0.870 (95% confidence interval [CI]: 0.850-0.900) for the PE classification task on the training set and an AUC of 0.852 (95% CI: 0.810-0.890) on the test set. Regression analysis yielded R² values of 0.717 (95% CI: 0.668-0.760) and 0.723 (95% CI: 0.668-0.766) for the Qanadli score and the RV/LV diameter ratio estimation, respectively, on the test set. CONCLUSION This study shows the feasibility of utilizing AI-based assistance tools, leveraging blood clot and cardiac segmentations, to detect blood clots and estimate PE severity scores on 3D CTPA examinations. Further studies are needed to assess the effectiveness of these tools in clinical practice.
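The evaluation metrics above are standard ones. As an illustration (not the authors' code), the AUC can be computed with the Mann-Whitney pair-counting formulation and R² from the residual and total sums of squares:

```python
import numpy as np

def auc_score(y_true, y_score) -> float:
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    y_true = np.asarray(y_true); y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]; neg = y_score[y_true == 0]
    # Fraction of (positive, negative) pairs where the positive exam scores higher;
    # ties count as half a win.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def r_squared(y_true, y_pred) -> float:
    """Coefficient of determination for severity-score regression."""
    y_true = np.asarray(y_true, float); y_pred = np.asarray(y_pred, float)
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

# Toy labels/scores, not study data.
print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
print(round(r_squared([10, 20, 30], [12, 19, 29]), 3))
```

The same pair-counting AUC is what library routines such as scikit-learn's `roc_auc_score` compute.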
Affiliation(s)
- Aissam Djahnine
- Philips Research France, 92150 Suresnes, France; CREATIS, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, France.
- Sébastien Mulé
- Medical Imaging Department, Henri Mondor University Hospital, AP-HP, Créteil, France, Inserm, U955, Team 18, 94000 Créteil, France
- Salim Si-Mohamed
- Department of Radiology, Hospices Civils de Lyon, 69500 Lyon, France
- Younes Belkouchi
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; OPIS - Optimisation Imagerie et Santé, Université Paris-Saclay, Inria, CentraleSupélec, CVN - Centre de vision numérique, 91190 Gif-Sur-Yvette, France
- Amira Ben Afia
- Department of Radiology, APHP Nord, Hôpital Bichat, 75018 Paris, France
- Clement Fabre
- Department of Radiology, Centre Hospitalier de Laval, 53000 Laval, France
- Gilbert Ferretti
- Universite Grenobles Alpes, Service de Radiologie et Imagerie Médicale, CHU Grenoble-Alpes, 38000 Grenoble, France
- Constance De Margerie
- Université Paris Cité, 75006 Paris, France, Department of Radiology, Hôpital Saint-Louis, Assistance Publique-Hôpitaux de Paris, 75010 Paris, France
- Pierre Berge
- Department of Radiology, CHU Angers, 49000 Angers, France
- Renan Liberge
- Department of Radiology, CHU Nantes, 44000 Nantes, France
- Nicolas Elbaz
- Department of Radiology, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
- Maxime Blain
- Department of Radiology, Hopital Henri Mondor, AP-HP, 94000 Créteil, France
- Pierre-Yves Brillet
- Department of Radiology, Hôpital Avicenne, Paris 13 University, 93000 Bobigny, France
- Guillaume Chassagnon
- Department of Radiology, Hopital Cochin, APHP, 75014 Paris, France; Université Paris Cité, 75006 Paris, France
- Farah Cadour
- APHM, Hôpital Universitaire Timone, CEMEREM, 13005 Marseille, France
- Caroline Caramella
- Department of Radiology, Groupe Hospitalier Paris Saint-Joseph, 75015 Paris, France
- Mostafa El Hajjam
- Department of Radiology, Hôpital Ambroise Paré, UMR 1179 INSERM/UVSQ, Team 3, 92100 Boulogne-Billancourt, France
- Samia Boussouar
- Sorbonne Université, Hôpital La Pitié-Salpêtrière, APHP, Unité d'Imagerie Cardiovasculaire et Thoracique (ICT), 75013 Paris, France
- Joya Hadchiti
- Department of Imaging, Institut Gustave Roussy, Université Paris-Saclay, 94800 Villejuif, France
- Xavier Fablet
- Department of Radiology, CHU Rennes, 35000 Rennes, France
- Antoine Khalil
- Department of Radiology, APHP Nord, Hôpital Bichat, 75018 Paris, France
- Hugues Talbot
- OPIS - Optimisation Imagerie et Santé, Université Paris-Saclay, Inria, CentraleSupélec, CVN - Centre de vision numérique, 91190 Gif-Sur-Yvette, France
- Alain Luciani
- Medical Imaging Department, Henri Mondor University Hospital, AP-HP, Créteil, France, Inserm, U955, Team 18, 94000 Créteil, France
- Nathalie Lassau
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, Université Paris-Saclay, 94800 Villejuif, France
- Loic Boussel
- CREATIS, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, France; Department of Radiology, Hospices Civils de Lyon, 69500 Lyon, France
3
Molière S, Hamzaoui D, Granger B, Montagne S, Allera A, Ezziane M, Luzurier A, Quint R, Kalai M, Ayache N, Delingette H, Renard-Penna R. Reference standard for the evaluation of automatic segmentation algorithms: Quantification of inter observer variability of manual delineation of prostate contour on MRI. Diagn Interv Imaging 2024; 105:65-73. PMID: 37822196; DOI: 10.1016/j.diii.2023.08.001.
Abstract
PURPOSE The purpose of this study was to investigate inter-reader variability in manual prostate contour segmentation on magnetic resonance imaging (MRI) examinations and to determine the optimal number of readers required to establish a reliable reference standard. MATERIALS AND METHODS Seven radiologists with various levels of experience independently performed manual segmentation of the prostate contour (whole-gland [WG] and transition zone [TZ]) on 40 prostate MRI examinations obtained in 40 patients. Inter-reader variability in prostate contour delineations was estimated using standard metrics (Dice similarity coefficient [DSC], Hausdorff distance, and volume-based metrics). The impact of the number of readers (from two to seven) on segmentation variability was assessed using pairwise metrics (consistency) and metrics with respect to a reference segmentation (conformity), obtained either with majority voting or with the simultaneous truth and performance level estimation (STAPLE) algorithm. RESULTS The average segmentation DSC for two readers in pairwise comparison was 0.919 for WG and 0.876 for TZ. Variability decreased with the number of readers: the interquartile ranges of the DSC were 0.076 (WG) / 0.021 (TZ) for configurations with two readers, 0.005 (WG) / 0.012 (TZ) for configurations with three readers, and 0.002 (WG) / 0.0037 (TZ) for configurations with six readers. The interquartile range decreased slightly faster between two and three readers than between three and six readers. When using consensus methods, variability often reached its minimum with three readers (with STAPLE, DSC = 0.96 [range: 0.945-0.971] for WG and DSC = 0.94 [range: 0.912-0.957] for TZ), and the interquartile range was minimal for configurations with three readers. CONCLUSION The number of readers affects inter-reader variability, in terms of both consistency and conformity to a reference. Variability is minimal with three readers, which represent a tipping point in its evolution for both pairwise metrics and metrics computed with respect to a reference. Accordingly, three readers may represent an optimal number for establishing reference standards for artificial intelligence applications.
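The two notions of variability used here, pairwise consistency and conformity to a consensus, can be sketched as follows. This is a toy illustration using 1D masks and a simple majority-vote consensus rather than STAPLE; the reader masks are invented:

```python
import numpy as np
from itertools import combinations

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    s = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / s if s else 1.0

def pairwise_consistency(masks) -> float:
    """Mean pairwise DSC across all reader pairs (consistency)."""
    return float(np.mean([dice(m1, m2) for m1, m2 in combinations(masks, 2)]))

def majority_vote(masks) -> np.ndarray:
    """Consensus mask: a voxel is foreground if more than half the readers mark it."""
    stack = np.stack([m.astype(int) for m in masks])
    return stack.sum(axis=0) * 2 > len(masks)

# Three simulated reader delineations of a 1D "contour" for brevity.
r1 = np.array([1, 1, 1, 0, 0], bool)
r2 = np.array([1, 1, 0, 0, 0], bool)
r3 = np.array([1, 1, 1, 1, 0], bool)
consensus = majority_vote([r1, r2, r3])
print(consensus.astype(int))                       # [1 1 1 0 0]
print(round(pairwise_consistency([r1, r2, r3]), 3))
```

Conformity would then be each reader's DSC against `consensus` (or a STAPLE estimate) rather than against the other readers.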
Affiliation(s)
- Sébastien Molière
- Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France; Breast and Thyroid Imaging Unit, Institut de Cancérologie Strasbourg Europe, 67200, Strasbourg, France; IGBMC, Institut de Génétique et de Biologie Moléculaire et Cellulaire, 67400, Illkirch, France.
- Dimitri Hamzaoui
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, 06902, Nice, France
- Benjamin Granger
- Sorbonne Université, INSERM, Institut Pierre Louis d'Epidémiologie et de Santé Publique, IPLESP, AP-HP, Hôpital Pitié Salpêtrière, Département de Santé Publique, 75013, Paris, France
- Sarah Montagne
- Department of Radiology, Hôpital Tenon, Assistance Publique-Hôpitaux de Paris, 75020, Paris, France; Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, 75020, Paris, France
- Alexandre Allera
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Malek Ezziane
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Anna Luzurier
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Raphaelle Quint
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Mehdi Kalai
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Nicholas Ayache
- Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France
- Hervé Delingette
- Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France
- Raphaële Renard-Penna
- Department of Radiology, Hôpital Tenon, Assistance Publique-Hôpitaux de Paris, 75020, Paris, France; Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, 75020, Paris, France
4
Can S, Türk Ö, Ayral M, Kozan G, Arı H, Akdağ M, Baylan MY. Can deep learning replace histopathological examinations in the differential diagnosis of cervical lymphadenopathy? Eur Arch Otorhinolaryngol 2024; 281:359-367. PMID: 37578497; DOI: 10.1007/s00405-023-08181-9.
Abstract
INTRODUCTION We aimed to develop a diagnostic deep learning model using contrast-enhanced CT images and to investigate whether cervical lymphadenopathies can be diagnosed with these deep learning methods without radiologist interpretation and histopathological examination. MATERIALS AND METHODS A total of 400 patients who underwent surgery for lymphadenopathy in the neck between 2010 and 2022 were retrospectively analyzed. They were examined in four groups of 100 patients each: the granulomatous diseases group, the lymphoma group, the squamous cell tumor group, and the reactive hyperplasia group. The diagnoses of the patients were confirmed histopathologically. Two CT images from each patient in every group were used in the study. The CT images were classified using the ResNet50, NASNetMobile, and DenseNet121 architectures. RESULTS The classification accuracies obtained with ResNet50, DenseNet121, and NASNetMobile were 92.5%, 90.62%, and 87.5%, respectively. CONCLUSION Deep learning is a useful diagnostic tool for cervical lymphadenopathy. In the near future, many diseases could be diagnosed with deep learning models without radiologist interpretation and invasive procedures such as histopathological examination. However, further studies with much larger case series are needed to develop accurate deep learning models.
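Once a network outputs a class for each image, the reported accuracies reduce to confusion-matrix arithmetic (rows = true class, columns = predicted class). A hedged sketch: the confusion matrix below is invented so that its overall accuracy matches the 92.5% figure, and is not data from the paper:

```python
import numpy as np

def overall_accuracy(confusion: np.ndarray) -> float:
    """Fraction of correctly classified images: diagonal over total."""
    return confusion.trace() / confusion.sum()

def per_class_recall(confusion: np.ndarray) -> np.ndarray:
    """Recall for each class: diagonal divided by the row sums."""
    return confusion.diagonal() / confusion.sum(axis=1)

# Hypothetical 4-class matrix (granulomatous, lymphoma, SCC, reactive hyperplasia),
# 40 held-out images per class -- illustrative numbers only.
cm = np.array([[37, 1, 1, 1],
               [2, 36, 1, 1],
               [1, 1, 38, 0],
               [1, 2, 0, 37]])
print(round(overall_accuracy(cm) * 100, 2))  # 92.5 (% of 160 images)
print(per_class_recall(cm))
```

Per-class recall is worth reporting alongside overall accuracy, since a model can score well overall while systematically confusing two of the four diagnoses.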
Affiliation(s)
- Sermin Can
- Department of Otorhinolaryngology and Head and Neck Surgery Clinic, Dicle University Faculty of Medicine, 21010, Diyarbakir, Turkey.
- Ömer Türk
- Department of Computer Programming, Mardin Artuklu University Vocational School, Mardin, Turkey
- Muhammed Ayral
- Department of Otorhinolaryngology and Head and Neck Surgery Clinic, Dicle University Faculty of Medicine, 21010, Diyarbakir, Turkey
- Günay Kozan
- Department of Otorhinolaryngology and Head and Neck Surgery Clinic, Dicle University Faculty of Medicine, 21010, Diyarbakir, Turkey
- Hamza Arı
- Department of Otorhinolaryngology and Head and Neck Surgery Clinic, Dicle University Faculty of Medicine, 21010, Diyarbakir, Turkey
- Mehmet Akdağ
- Department of Otorhinolaryngology and Head and Neck Surgery Clinic, Dicle University Faculty of Medicine, 21010, Diyarbakir, Turkey
- Müzeyyen Yıldırım Baylan
- Department of Otorhinolaryngology and Head and Neck Surgery Clinic, Dicle University Faculty of Medicine, 21010, Diyarbakir, Turkey
5
Homps M, Soyer P, Coriat R, Dermine S, Pellat A, Fuks D, Marchese U, Terris B, Groussin L, Dohan A, Barat M. A preoperative computed tomography radiomics model to predict disease-free survival in patients with pancreatic neuroendocrine tumors. Eur J Endocrinol 2023; 189:476-484. PMID: 37787635; DOI: 10.1093/ejendo/lvad130.
Abstract
IMPORTANCE Imaging has demonstrated capabilities in the diagnosis of pancreatic neuroendocrine tumors (pNETs), but its utility for prognostic prediction has not yet been elucidated. OBJECTIVE The aim of this study was to build a radiomics model using preoperative computed tomography (CT) data that may help predict recurrence-free survival (RFS) or overall survival (OS) in patients with pNET. DESIGN We performed a retrospective observational study in a cohort of French patients with pNETs. PARTICIPANTS Patients with surgically resected pNET and available CT examinations were included. INTERVENTIONS Radiomics features of preoperative CT data were extracted using 3D-Slicer® software with manual segmentation. Discriminant features were selected with penalized regression using the least absolute shrinkage and selection operator (LASSO) method, with training on the tumor Ki67 rate (≤2 or >2). Selected features were used to build a radiomics index ranging from 0 to 1. OUTCOME AND MEASURE A receiver operating characteristic (ROC) curve was built to select an optimal cutoff value of the radiomics index to predict patient RFS and OS. Recurrence-free survival and OS were assessed using Kaplan-Meier analysis. RESULTS Thirty-seven patients (median age, 61 years; 20 men) with 37 pNETs (grade 1, 21/37 [57%]; grade 2, 12/37 [32%]; grade 3, 4/37 [11%]) were included. Patients with a radiomics index >0.4 had a shorter median RFS (36 months; range: 1-133) than those with a radiomics index ≤0.4 (84 months; range: 9-148; P = .013). No associations were found between the radiomics index and OS (P = .86).
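A radiomics index of this kind is a 0-1 score obtained by passing a penalized linear combination of the selected features through a link function, then thresholding it for risk stratification. A sketch under stated assumptions: the feature values, coefficients, and logistic link below are illustrative, and only the 0.4 cutoff comes from the abstract:

```python
import numpy as np

def radiomics_index(features: np.ndarray, coefs: np.ndarray, intercept: float) -> np.ndarray:
    """Map the penalized-regression linear predictor to a 0-1 index (logistic link)."""
    z = features @ coefs + intercept
    return 1.0 / (1.0 + np.exp(-z))

def stratify(index: np.ndarray, cutoff: float = 0.4) -> np.ndarray:
    """Split patients into high-risk (True) vs low-risk groups at the ROC-derived cutoff."""
    return index > cutoff

# Hypothetical: 4 patients x 3 LASSO-selected, standardized CT texture features.
X = np.array([[ 1.2, -0.3,  0.8],
              [-0.5,  0.1, -1.1],
              [ 0.9,  1.4,  0.2],
              [-1.3, -0.8, -0.4]])
coefs = np.array([0.9, 0.5, 0.7])   # illustrative penalized coefficients
idx = radiomics_index(X, coefs, intercept=-0.2)
print(np.round(idx, 2))
print(stratify(idx))                # high-risk patients (index > 0.4)
```

In the study the two resulting groups would then be compared with Kaplan-Meier curves for RFS.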
Affiliation(s)
- Margaux Homps
- Department of Diagnostic and Interventional Imaging, APHP, Hôpital Cochin, Paris F-75014, France
- Faculté de Médecine, Université Paris Cité, Paris F-75006, France
- Philippe Soyer
- Department of Diagnostic and Interventional Imaging, APHP, Hôpital Cochin, Paris F-75014, France
- Faculté de Médecine, Université Paris Cité, Paris F-75006, France
- Romain Coriat
- Faculté de Médecine, Université Paris Cité, Paris F-75006, France
- Department of Gastroenterology and Digestive Oncology, AP-HP, Hôpital Cochin, Paris F-75014, France
- Solène Dermine
- Faculté de Médecine, Université Paris Cité, Paris F-75006, France
- Department of Gastroenterology and Digestive Oncology, AP-HP, Hôpital Cochin, Paris F-75014, France
- Anna Pellat
- Faculté de Médecine, Université Paris Cité, Paris F-75006, France
- Department of Gastroenterology and Digestive Oncology, AP-HP, Hôpital Cochin, Paris F-75014, France
- David Fuks
- Faculté de Médecine, Université Paris Cité, Paris F-75006, France
- Department of Surgery, Hôpital Cochin, APHP, Paris F-75014, France
- Ugo Marchese
- Faculté de Médecine, Université Paris Cité, Paris F-75006, France
- Department of Surgery, Hôpital Cochin, APHP, Paris F-75014, France
- Benoit Terris
- Faculté de Médecine, Université Paris Cité, Paris F-75006, France
- Department of Pathology, Center for Rare Adrenal Diseases, AP-HP, Hôpital Cochin, Paris F-75014, France
- Lionel Groussin
- Faculté de Médecine, Université Paris Cité, Paris F-75006, France
- Department of Endocrinology, Center for Rare Adrenal Diseases, AP-HP, Hôpital Cochin, Paris F-75014, France
- Anthony Dohan
- Department of Diagnostic and Interventional Imaging, APHP, Hôpital Cochin, Paris F-75014, France
- Faculté de Médecine, Université Paris Cité, Paris F-75006, France
- Maxime Barat
- Department of Diagnostic and Interventional Imaging, APHP, Hôpital Cochin, Paris F-75014, France
- Faculté de Médecine, Université Paris Cité, Paris F-75006, France
6
Zhong NN, Wang HQ, Huang XY, Li ZZ, Cao LM, Huo FY, Liu B, Bu LL. Enhancing head and neck tumor management with artificial intelligence: Integration and perspectives. Semin Cancer Biol 2023; 95:52-74. PMID: 37473825; DOI: 10.1016/j.semcancer.2023.07.002.
Abstract
Head and neck tumors (HNTs) constitute a multifaceted ensemble of pathologies that primarily involve regions such as the oral cavity, pharynx, and nasal cavity. The intricate anatomical structure of these regions poses considerable challenges to efficacious treatment strategies. Despite the availability of myriad treatment modalities, the overall therapeutic efficacy for HNTs remains limited. In recent years, the deployment of artificial intelligence (AI) in healthcare practices has garnered noteworthy attention. AI modalities, inclusive of machine learning (ML), neural networks (NNs), and deep learning (DL), when integrated into the holistic management of HNTs, promise to augment the precision, safety, and efficacy of treatment regimens. The integration of AI within HNT management is intricately intertwined with domains such as medical imaging, bioinformatics, and medical robotics. This article intends to scrutinize the cutting-edge advancements and prospective applications of AI in the realm of HNTs, elucidating AI's role in prevention, diagnosis, treatment, prognostication, research, and inter-sectoral integration. The overarching objective is to stimulate scholarly discourse among medical practitioners and researchers to propel further exploration, thereby facilitating superior therapeutic alternatives for patients.
Affiliation(s)
- Nian-Nian Zhong
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Han-Qi Wang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Xin-Yue Huang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Zi-Zhan Li
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Lei-Ming Cao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Fang-Yi Huo
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Bing Liu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China.
- Lin-Lin Bu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China.
7
Rinneburger M, Carolus H, Iuga AI, Weisthoff M, Lennartz S, Hokamp NG, Caldeira L, Shahzad R, Maintz D, Laqua FC, Baeßler B, Klinder T, Persigehl T. Automated localization and segmentation of cervical lymph nodes on contrast-enhanced CT using a 3D foveal fully convolutional neural network. Eur Radiol Exp 2023; 7:45. PMID: 37505296; PMCID: PMC10382409; DOI: 10.1186/s41747-023-00360-x.
Abstract
BACKGROUND In the management of cancer patients, determination of TNM status is essential for treatment decision-making and is therefore closely linked to clinical outcome and survival. Here, we developed a tool for automatic three-dimensional (3D) localization and segmentation of cervical lymph nodes (LNs) on contrast-enhanced computed tomography (CECT) examinations. METHODS In this IRB-approved retrospective single-center study, 187 CECT examinations of the head and neck region from patients with various primary diseases were collected from our local database, and 3656 LNs (19.5 ± 14.9 LNs/CECT, mean ± standard deviation) with a short-axis diameter (SAD) ≥ 5 mm were segmented manually by expert physicians. With these data, we trained an independent fully convolutional neural network based on 3D foveal patches. Testing was performed on 30 independent CECTs with 925 segmented LNs with an SAD ≥ 5 mm. RESULTS In total, 4581 LNs were segmented in 217 CECTs. The model achieved an average localization rate (LR), i.e., percentage of localized LNs/CECT, of 78.0% in the validation dataset. In the test dataset, the average LR was 81.1% with a mean Dice coefficient of 0.71. For enlarged LNs with an SAD ≥ 10 mm, the LR was 96.2%. In the test dataset, the false-positive rate was 2.4 LNs/CECT. CONCLUSIONS Our trained AI model demonstrated good overall performance in the consistent automatic localization and 3D segmentation of physiological and metastatic cervical LNs with an SAD ≥ 5 mm on CECTs. This could aid clinical localization and automatic 3D segmentation, which can benefit clinical care and radiomics research. RELEVANCE STATEMENT Our AI model is a time-saving tool for 3D segmentation of cervical lymph nodes on contrast-enhanced CT scans and serves as a solid base for N staging in clinical practice and further radiomics research. KEY POINTS
• Determination of N status in TNM staging is essential for therapy planning in oncology.
• Segmenting cervical lymph nodes manually is highly time-consuming in clinical practice.
• Our model provides a robust, automated 3D segmentation of cervical lymph nodes.
• It achieves a high accuracy for localization, especially of enlarged lymph nodes.
• These segmentations should assist clinical care and radiomics research.
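The localization rate and false-positive rate reported above are simple ratios. In this sketch the node counts (750 localized, 72 false positives) are back-derived from the reported 81.1% and 2.4 LNs/CECT for illustration; the abstract does not state them directly:

```python
def localization_rate(localized_ln: int, total_ln: int) -> float:
    """Percentage of ground-truth lymph nodes hit by at least one prediction."""
    return 100.0 * localized_ln / total_ln

def false_positives_per_exam(false_positive_ln: int, n_exams: int) -> float:
    """Average number of spurious lymph-node detections per CECT examination."""
    return false_positive_ln / n_exams

# 925 annotated test nodes and 30 test CECTs are taken from the abstract;
# 750 and 72 are illustrative counts consistent with the reported rates.
print(round(localization_rate(750, 925), 1))  # 81.1 (%)
print(false_positives_per_exam(72, 30))       # 2.4 FPs per CECT
```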
Affiliation(s)
- Miriam Rinneburger
- Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany.
- Andra-Iza Iuga
- Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Mathilda Weisthoff
- Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Simon Lennartz
- Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Nils Große Hokamp
- Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Liliana Caldeira
- Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Rahil Shahzad
- Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Innovative Technologies, Philips Healthcare, Aachen, Germany
- David Maintz
- Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Fabian Christopher Laqua
- Institute of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Bettina Baeßler
- Institute of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Thorsten Persigehl
- Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
8
Mulé S, Lawrance L, Belkouchi Y, Vilgrain V, Lewin M, Trillaud H, Hoeffel C, Laurent V, Ammari S, Morand E, Faucoz O, Tenenhaus A, Cotten A, Meder JF, Talbot H, Luciani A, Lassau N. Generative adversarial networks (GAN)-based data augmentation of rare liver cancers: The SFR 2021 Artificial Intelligence Data Challenge. Diagn Interv Imaging 2023; 104:43-48. PMID: 36207277; DOI: 10.1016/j.diii.2022.09.005.
Abstract
PURPOSE The 2021 edition of the Artificial Intelligence Data Challenge was organized by the French Society of Radiology together with the Centre National d'Études Spatiales and CentraleSupélec, with the aim of using generative adversarial network (GAN) techniques to provide 1000 magnetic resonance imaging (MRI) cases of macrotrabecular-massive (MTM) hepatocellular carcinoma (HCC), a rare and aggressive subtype of HCC, generated from a limited number of real cases from multiple French centers. MATERIALS AND METHODS A dedicated platform was used by the seven inclusion centers to securely upload their anonymized MRI examinations, each including three cross-sectional images (one late arterial phase T1-weighted image, one portal venous phase T1-weighted image, and one fat-saturated T2-weighted image), in compliance with the General Data Protection Regulation. The quality of the database was checked by experts, and manual delineation of the lesions was performed by the expert radiologists involved at each center. Multidisciplinary teams competed between October 11th, 2021 and February 13th, 2022. RESULTS A total of 91 MTM-HCC datasets of three images each were collected from seven French academic centers. Six teams with a total of 28 individuals participated in this challenge. Each participating team was asked to generate one thousand 3-image cases. The qualitative evaluation was performed by three radiologists using a Likert scale on ten randomly selected cases generated by each participant. A quantitative evaluation was also performed using two metrics: the Fréchet inception distance and the leave-one-out accuracy of a 1-nearest-neighbor classifier. CONCLUSION This data challenge demonstrates the ability of GAN techniques to generate a large number of images from a small sample of imaging examinations of a rare malignant tumor.
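The challenge's second quantitative metric, the leave-one-out accuracy of a 1-nearest-neighbor classifier, has a simple operational meaning: pool real and generated samples, classify each sample by its nearest other sample, and check separability (≈0.5 means generated data are indistinguishable from real, ≈1.0 means trivially separable). A minimal sketch on synthetic feature vectors (the arrays, dimensions, and sample counts here are illustrative stand-ins, not the challenge data or its exact implementation):

```python
import numpy as np

def leave_one_out_1nn_accuracy(real, fake):
    """Leave-one-out 1-NN accuracy over pooled real/generated samples.

    ~0.5: generated samples indistinguishable from real ones;
    ~1.0: the two sets are trivially separable (a poor generator).
    """
    X = np.concatenate([real, fake])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    # pairwise squared Euclidean distances, excluding each sample itself
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)  # index of each sample's nearest neighbor
    return float((y[nn] == y).mean())

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(100, 16))
fake = rng.normal(0.0, 1.0, size=(100, 16))  # "ideal" generator: same distribution
acc = leave_one_out_1nn_accuracy(real, fake)  # close to 0.5
```

In practice the metric is computed on image features (e.g., deep network embeddings, as for the Fréchet inception distance) rather than raw pixels.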
Affiliation(s)
- Sébastien Mulé
  - Medical Imaging Department, AP-HP, Henri Mondor University Hospital, Créteil 94000, France; INSERM, U955, Team 18, Créteil 94000, France
- Littisha Lawrance
  - Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, Inserm, CNRS, CEA, BIOMAPS, UMR 1281, Université Paris-Saclay, Villejuif 94800, France
- Younes Belkouchi
  - Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, Inserm, CNRS, CEA, BIOMAPS, UMR 1281, Université Paris-Saclay, Villejuif 94800, France; OPIS-Optimisation Imagerie et Santé, Inria, CentraleSupélec, CVN-Centre de Vision Numérique, Université Paris-Saclay, Gif-sur-Yvette 91190, France
- Valérie Vilgrain
  - Department of Radiology, APHP, University Hospitals Paris Nord Val de Seine, Hôpital Beaujon, Clichy 92110, France; CRI INSERM, Université Paris Cité, Paris 75018, France
- Maité Lewin
  - Department of Radiology, AP-HP Hôpital Paul Brousse, Villejuif 94800, France; Faculté de Médecine, Université Paris-Saclay, Le Kremlin-Bicêtre 94270, France
- Hervé Trillaud
  - Department of Radiology, CHU de Bordeaux, Université de Bordeaux, Bordeaux 33000, France
- Christine Hoeffel
  - Department of Radiology, Reims University Hospital, Reims 51092, France; CRESTIC, University of Reims Champagne-Ardenne, Reims 51100, France
- Valérie Laurent
  - Department of Radiology, Nancy University Hospital, University of Lorraine, Vandoeuvre-lès-Nancy 54500, France
- Samy Ammari
  - Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, Inserm, CNRS, CEA, BIOMAPS, UMR 1281, Université Paris-Saclay, Villejuif 94800, France; Department of Imaging, Institut Gustave Roussy, Université Paris-Saclay, Villejuif 94800, France
- Eric Morand
  - Centre National d'Etudes Spatiales-CNES, Centre Spatial de Toulouse, Toulouse 31401 CEDEX 9, France
- Orphée Faucoz
  - Centre National d'Etudes Spatiales-CNES, Centre Spatial de Toulouse, Toulouse 31401 CEDEX 9, France
- Arthur Tenenhaus
  - CentraleSupélec, Laboratoire des Signaux et Systèmes, Université Paris-Saclay, Gif-sur-Yvette 91190, France
- Anne Cotten
  - Department of Musculoskeletal Radiology, Centre de Consultations et d'Imagerie de l'Appareil Locomoteur, Lille 59037, France; Lille University School of Medicine, Lille, France
- Jean-François Meder
  - Department of Neuroimaging, Sainte-Anne Hospital, Paris 75013, France; Université Paris Cité, Paris 75006, France
- Hugues Talbot
  - OPIS-Optimisation Imagerie et Santé, Inria, CentraleSupélec, CVN-Centre de Vision Numérique, Université Paris-Saclay, Gif-sur-Yvette 91190, France
- Alain Luciani
  - Medical Imaging Department, AP-HP, Henri Mondor University Hospital, Créteil 94000, France; INSERM, U955, Team 18, Créteil 94000, France
- Nathalie Lassau
  - Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, Inserm, CNRS, CEA, BIOMAPS, UMR 1281, Université Paris-Saclay, Villejuif 94800, France; Department of Imaging, Institut Gustave Roussy, Université Paris-Saclay, Villejuif 94800, France
9
Lacroix M, Aouad T, Feydy J, Biau D, Larousserie F, Fournier L, Feydy A. Artificial intelligence in musculoskeletal oncology imaging: A critical review of current applications. Diagn Interv Imaging 2023; 104:18-23. [PMID: 36270953 DOI: 10.1016/j.diii.2022.10.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 10/02/2022] [Accepted: 10/05/2022] [Indexed: 01/10/2023]
Abstract
Artificial intelligence (AI) is increasingly being studied in musculoskeletal oncology imaging. AI has been applied to both primary and secondary bone tumors and assessed for various predictive tasks that include detection, segmentation, classification, and prognosis. Still, in the field of clinical research, further efforts are needed to improve AI reproducibility and reach an acceptable level of evidence in musculoskeletal oncology. This review describes the basic principles of the most common AI techniques, including machine learning, deep learning and radiomics. Then, recent developments and current results of AI in the field of musculoskeletal oncology are presented. Finally, limitations and future perspectives of AI in this field are discussed.
Affiliation(s)
- Maxime Lacroix
  - Department of Radiology, Hôpital Européen Georges Pompidou, Assistance Publique-Hôpitaux de Paris, Paris 75015, France; Université Paris Cité, Faculté de Médecine, Paris 75006, France; PARCC UMRS 970, INSERM, Paris 75015, France
- Theodore Aouad
  - Université Paris-Saclay, CentraleSupélec, Inria, Centre for Visual Computing, Gif-sur-Yvette 91190, France
- Jean Feydy
  - Université Paris Cité, HeKA team, Inria Paris, Inserm, Paris 75006, France
- David Biau
  - Université Paris Cité, Faculté de Médecine, Paris 75006, France; Department of Orthopedic Surgery, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, Paris 75014, France
- Frédérique Larousserie
  - Université Paris Cité, Faculté de Médecine, Paris 75006, France; Department of Pathology, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, Paris 75014, France
- Laure Fournier
  - Department of Radiology, Hôpital Européen Georges Pompidou, Assistance Publique-Hôpitaux de Paris, Paris 75015, France; Université Paris Cité, Faculté de Médecine, Paris 75006, France; PARCC UMRS 970, INSERM, Paris 75015, France
- Antoine Feydy
  - Université Paris Cité, Faculté de Médecine, Paris 75006, France; Department of Radiology, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, Paris 75014, France
10
Initial experience of a deep learning application for the differentiation of Kikuchi-Fujimoto’s disease from tuberculous lymphadenitis on neck CECT. Sci Rep 2022; 12:14184. [PMID: 35986073 PMCID: PMC9391448 DOI: 10.1038/s41598-022-18535-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/14/2022] [Accepted: 08/16/2022] [Indexed: 11/14/2022] Open
Abstract
Neck contrast-enhanced CT (CECT) is a routine tool used to evaluate patients with cervical lymphadenopathy. This study aimed to evaluate the ability of convolutional neural networks (CNNs) to classify Kikuchi-Fujimoto’s disease (KD) and cervical tuberculous lymphadenitis (CTL) on neck CECT in patients with benign cervical lymphadenopathy. A retrospective analysis of consecutive patients with biopsy-confirmed KD and CTL at a single center, from January 2012 to June 2020, was performed. This study included 198 patients, of whom 125 (mean age, 25.1 years ± 8.7; 31 men) had KD and 73 (mean age, 41.0 years ± 16.8; 34 men) had CTL. A neuroradiologist manually labelled the enlarged lymph nodes on the CECT images. Using these labels as the reference standard, a CNN was developed to classify the findings as KD or CTL. The CT images were divided into training (70%), validation (10%), and test (20%) subsets. As a supervised augmentation method, the Cut&Remain method was applied to improve performance. The best area under the receiver operating characteristic curve for classifying KD from CTL on the test set was 0.91. This study shows that differentiation of KD from CTL on neck CECT using a CNN is feasible with high diagnostic performance.
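The study's headline figure, an area under the receiver operating characteristic curve (AUC) of 0.91, reduces to a rank statistic: the probability that a randomly chosen positive case (here, KD) receives a higher model score than a randomly chosen negative case (CTL). A minimal sketch on toy scores (the labels and scores below are invented for illustration, not the study's data or code):

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, ties counting half."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]   # e.g., biopsy-confirmed KD
    neg = scores[y_true == 0]   # e.g., biopsy-confirmed CTL
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# toy scores: the model's predicted probability of KD per patient
auc = roc_auc([1, 1, 1, 0, 0], [0.9, 0.8, 0.4, 0.5, 0.1])  # 5/6 ≈ 0.83
```

An AUC of 0.91 thus means that in about 91% of KD/CTL patient pairs, the network scored the KD case higher.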
11
Fabry V, Mamalet F, Laforet A, Capelle M, Acket B, Sengenes C, Cintas P, Faruch-Bilfeld M. A deep learning tool without muscle-by-muscle grading to differentiate myositis from facio-scapulo-humeral dystrophy using MRI. Diagn Interv Imaging 2022; 103:353-359. [DOI: 10.1016/j.diii.2022.01.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/07/2021] [Revised: 01/26/2022] [Accepted: 01/27/2022] [Indexed: 11/03/2022]
12
Group-Based Sparse Representation for Compressed Sensing Image Reconstruction with Joint Regularization. ELECTRONICS 2022. [DOI: 10.3390/electronics11020182] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Indexed: 12/30/2022]
Abstract
Achieving high-quality image reconstruction is the focus of research in image compressed sensing. Group sparse representation improves the quality of reconstructed images by exploiting the non-local similarity of images; however, block matching and dictionary learning in the image group construction process lead to long reconstruction times and artifacts in the reconstructed images. To solve these problems, a joint regularized image reconstruction model based on group sparse representation (GSR-JR) is proposed. A group sparse coefficient regularization term ensures the sparsity of the group coefficients and reduces the complexity of the model. The group sparse residual regularization term introduces prior information about the image to improve the quality of the reconstructed image. The alternating direction method of multipliers and an iterative thresholding algorithm are applied to solve the optimization problem. Simulation experiments confirm that the optimized GSR-JR model is superior to other advanced image reconstruction models in reconstructed image quality and visual effects. When the sensing rate is 0.1, compared to the group sparse residual constraint with a nonlocal prior (GSRC-NLR) model, the gains in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are up to 4.86 dB and 0.1189, respectively.
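The iterative thresholding step at the heart of such sparse reconstruction models is the soft-thresholding (l1 proximal) operator; applied inside a gradient loop it yields ISTA, a generic sparse coding solver. The sketch below recovers a sparse vector from underdetermined linear measurements; it illustrates only this building block, not the paper's GSR-JR model with its group dictionaries and joint regularizers (problem sizes and parameters are arbitrary):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam*||x||_1: shrink toward zero,
    setting small coefficients exactly to zero (the sparsity mechanism)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam=0.05, n_iter=3000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 60))           # 30 measurements of a 60-dim signal
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]  # 3-sparse ground truth
x_hat = ista(A, A @ x_true)             # sparse recovery from y = A @ x_true
```

In group-based models the same shrinkage is applied to group sparse coefficients over learned dictionaries, with ADMM alternating between such proximal updates and data-consistency steps.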
13
Dudoignon D, Delbot T, Cottereau AS, Dechmi A, Bienvenu M, Koumakis E, Cormier C, Gaujoux S, Groussin L, Cochand-Priollet B, Clerc J, Wartski M. 18F-fluorocholine PET/CT and conventional imaging in primary hyperparathyroidism. Diagn Interv Imaging 2022; 103:258-265. [DOI: 10.1016/j.diii.2021.12.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/19/2021] [Revised: 11/16/2021] [Accepted: 12/06/2021] [Indexed: 11/03/2022]
14