1. Ishikawa Y, Sugino T, Okubo K, Nakajima Y. Detecting the location of lung cancer on thoracoscopic images using deep convolutional neural networks. Surg Today 2023; 53:1380-1387. [PMID: 37354240] [DOI: 10.1007/s00595-023-02708-7]
Abstract
OBJECTIVES The prevalence of minimally invasive surgeries has increased the need for tumor detection using thoracoscopic images during lung cancer surgery. We conducted this study to analyze the efficacy of a deep convolutional neural network (DCNN) for tumor detection using recorded thoracoscopic images of pulmonary surfaces. MATERIALS AND METHODS We collected 644 intraoperative thoracoscopic images of changes in pulmonary appearance from 427 patients with lung cancer between 2012 and 2021. The lesion areas on the thoracoscopic images were detected by bounding boxes using an advanced version of YOLO, a well-known DCNN for object detection. The DCNN model was trained and evaluated with a 15-fold cross-validation scheme. Each predicted bounding box was considered a successful detection when it overlapped more than 50% of the lesion area annotated by board-certified surgeons. RESULTS AND CONCLUSIONS Precision, recall, and F1-measure values of 91.9%, 90.5%, and 91.1%, respectively, were obtained. The presence of lymphatic vessel invasion was associated with successful detection (p = 0.045). The presence of pathological pleural invasion also showed a tendency toward successful detection (p = 0.081). The proposed DCNN-based algorithm yielded a tumor detection accuracy of more than 90%. Such algorithms will help surgeons automatically detect lung cancer displayed on a screen.
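The success criterion and metrics in this abstract (a predicted box counts as a successful detection when it covers more than 50% of the annotated lesion area; precision, recall, and F1 are then computed over all predictions) can be sketched as follows. This is an illustrative reconstruction with hypothetical function names, not the authors' code:

```python
def box_area(box):
    """Area of an axis-aligned box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def overlap_fraction(pred, gt):
    """Fraction of the annotated lesion box covered by the predicted box."""
    inter = (max(pred[0], gt[0]), max(pred[1], gt[1]),
             min(pred[2], gt[2]), min(pred[3], gt[3]))
    gt_area = box_area(gt)
    return box_area(inter) / gt_area if gt_area else 0.0

def detection_metrics(overlaps, n_preds, n_gts, thr=0.5):
    """Precision, recall, F1 from overlap fractions of matched pred/gt pairs."""
    tp = sum(1 for o in overlaps if o > thr)       # successful detections
    precision = tp / n_preds if n_preds else 0.0   # TP / (TP + FP)
    recall = tp / n_gts if n_gts else 0.0          # TP / (TP + FN)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Note that this ">50% of the lesion area" rule is overlap-over-ground-truth, which is laxer than the intersection-over-union threshold commonly used in object-detection benchmarks.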
Affiliation(s)
- Yuya Ishikawa
- Department of Thoracic Surgery, Tokyo Medical and Dental University, Tokyo, Japan
- Takaaki Sugino
- Department of Biomedical Information, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, 2-3-10, Surugadai, Chiyoda-ku, Tokyo, 101-0062, Japan
- Kenichi Okubo
- Department of Thoracic Surgery, Tokyo Medical and Dental University, Tokyo, Japan
- Yoshikazu Nakajima
- Department of Biomedical Information, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, 2-3-10, Surugadai, Chiyoda-ku, Tokyo, 101-0062, Japan

2. Liu R, Yuan X, Liu S, Hu B. Endoscopic submucosal dissection for a small high grade intraepithelial neoplasia in the hypopharynx detected incidentally by artificial intelligence. Endoscopy 2023; 55:E1240-E1241. [PMID: 38086415] [PMCID: PMC10715902] [DOI: 10.1055/a-2208-3412]
Affiliation(s)
- Ruide Liu
- Department of Gastroenterology and Hepatology, West China Hospital of Sichuan University, Chengdu, China
- Xianglei Yuan
- Department of Gastroenterology and Hepatology, West China Hospital of Sichuan University, Chengdu, China
- Shuang Liu
- Department of Gastroenterology and Hepatology, West China Hospital of Sichuan University, Chengdu, China
- Bing Hu
- Department of Gastroenterology and Hepatology, West China Hospital of Sichuan University, Chengdu, China

3. Tsilivigkos C, Athanasopoulos M, Micco RD, Giotakis A, Mastronikolis NS, Mulita F, Verras GI, Maroulis I, Giotakis E. Deep Learning Techniques and Imaging in Otorhinolaryngology-A State-of-the-Art Review. J Clin Med 2023; 12:6973. [PMID: 38002588] [PMCID: PMC10672270] [DOI: 10.3390/jcm12226973]
Abstract
Over the last decades, the field of medicine has witnessed significant progress in artificial intelligence (AI), the Internet of Medical Things (IoMT), and deep learning (DL) systems. Otorhinolaryngology, and imaging in its various subspecialties, has not remained untouched by this transformative trend. As the medical landscape evolves, the integration of these technologies becomes imperative in augmenting patient care, fostering innovation, and actively participating in the ever-evolving synergy between computer vision techniques in otorhinolaryngology and AI. To that end, we conducted a thorough search on MEDLINE for papers published until June 2023, utilizing the keywords 'otorhinolaryngology', 'imaging', 'computer vision', 'artificial intelligence', and 'deep learning', and at the same time conducted manual searching in the references section of the articles included in our manuscript. Our search culminated in the retrieval of 121 related articles, which were subsequently subdivided into the following categories: imaging in head and neck, otology, and rhinology. Our objective is to provide a comprehensive introduction to this burgeoning field, tailored for both experienced specialists and aspiring residents in the domain of deep learning algorithms in imaging techniques in otorhinolaryngology.
Affiliation(s)
- Christos Tsilivigkos
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece; (A.G.); (E.G.)
| | - Michail Athanasopoulos
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece; (M.A.); (N.S.M.)
| | - Riccardo di Micco
- Department of Otolaryngology and Head and Neck Surgery, Medical School of Hannover, 30625 Hannover, Germany;
| | - Aris Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece; (A.G.); (E.G.)
| | - Nicholas S. Mastronikolis
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece; (M.A.); (N.S.M.)
| | - Francesk Mulita
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece; (G.-I.V.); (I.M.)
| | - Georgios-Ioannis Verras
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece; (G.-I.V.); (I.M.)
| | - Ioannis Maroulis
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece; (G.-I.V.); (I.M.)
| | - Evangelos Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece; (A.G.); (E.G.)
| |

4. Wu Q, Wang X, Liang G, Luo X, Zhou M, Deng H, Zhang Y, Huang X, Yang Q. Advances in Image-Based Artificial Intelligence in Otorhinolaryngology-Head and Neck Surgery: A Systematic Review. Otolaryngol Head Neck Surg 2023; 169:1132-1142. [PMID: 37288505] [DOI: 10.1002/ohn.391]
Abstract
OBJECTIVE To update the literature and provide a systematic review of image-based artificial intelligence (AI) applications in otolaryngology, highlight its advances, and propose future challenges. DATA SOURCES Web of Science, Embase, PubMed, and Cochrane Library. REVIEW METHODS Studies written in English, published between January 2020 and December 2022. Two independent authors screened the search results, extracted data, and assessed studies. RESULTS Overall, 686 studies were identified. After screening titles and abstracts, 325 full-text studies were assessed for eligibility, and 78 studies were included in this systematic review. The studies originated from 16 countries; the top 3 were China (n = 29), Korea (n = 8), and the United States and Japan (n = 7 each). The most common area was otology (n = 35), followed by rhinology (n = 20), pharyngology (n = 18), and head and neck surgery (n = 5). The most studied applications of AI in otology, rhinology, pharyngology, and head and neck surgery were chronic otitis media (n = 9), nasal polyps (n = 4), laryngeal cancer (n = 12), and head and neck squamous cell carcinoma (n = 3), respectively. The overall performance of AI in accuracy, area under the curve, sensitivity, and specificity was 88.39 ± 9.78%, 91.91 ± 6.70%, 86.93 ± 11.59%, and 88.62 ± 14.03%, respectively. CONCLUSION This state-of-the-art review highlights the increasing applications of image-based AI in otorhinolaryngology-head and neck surgery. The next steps will entail multicenter collaboration to ensure data reliability, ongoing optimization of AI algorithms, and integration into real-world clinical practice. Future studies should consider 3-dimensional (3D)-based AI, such as 3D surgical AI.
Affiliation(s)
- Qingwu Wu
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Department of Allergy, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xinyue Wang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Guixian Liang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xin Luo
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Min Zhou
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Department of Allergy, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Huiyi Deng
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Yana Zhang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xuekun Huang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Qintai Yang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Department of Allergy, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China

5. Sampieri C, Baldini C, Azam MA, Moccia S, Mattos LS, Vilaseca I, Peretti G, Ioppi A. Artificial Intelligence for Upper Aerodigestive Tract Endoscopy and Laryngoscopy: A Guide for Physicians and State-of-the-Art Review. Otolaryngol Head Neck Surg 2023; 169:811-829. [PMID: 37051892] [DOI: 10.1002/ohn.343]
Abstract
OBJECTIVE The endoscopic and laryngoscopic examination is paramount for laryngeal, oropharyngeal, nasopharyngeal, nasal, and oral cavity benign lesions and cancer evaluation. Nevertheless, upper aerodigestive tract (UADT) endoscopy is intrinsically operator-dependent and lacks objective quality standards. At present, there has been an increased interest in artificial intelligence (AI) applications in this area to support physicians during the examination, thus enhancing diagnostic performances. The relative novelty of this research field poses a challenge both for the reviewers and readers as clinicians often lack a specific technical background. DATA SOURCES Four bibliographic databases were searched: PubMed, EMBASE, Cochrane, and Google Scholar. REVIEW METHODS A structured review of the current literature (up to September 2022) was performed. Search terms related to topics of AI, machine learning (ML), and deep learning (DL) in UADT endoscopy and laryngoscopy were identified and queried by 3 independent reviewers. Citations of selected studies were also evaluated to ensure comprehensiveness. CONCLUSIONS Forty-one studies were included in the review. AI and computer vision techniques were used to achieve 3 fundamental tasks in this field: classification, detection, and segmentation. All papers were summarized and reviewed. IMPLICATIONS FOR PRACTICE This article comprehensively reviews the latest developments in the application of ML and DL in UADT endoscopy and laryngoscopy, as well as their future clinical implications. The technical basis of AI is also explained, providing guidance for nonexpert readers to allow critical appraisal of the evaluation metrics and the most relevant quality requirements.
Affiliation(s)
- Claudio Sampieri
- Department of Experimental Medicine (DIMES), University of Genoa, Genoa, Italy
- Functional Unit of Head and Neck Tumors, Hospital Clínic, Barcelona, Spain
- Otorhinolaryngology Department, Hospital Clínic, Barcelona, Spain
- Chiara Baldini
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi (DIBRIS), University of Genoa, Genoa, Italy
- Muhammad Adeel Azam
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi (DIBRIS), University of Genoa, Genoa, Italy
- Sara Moccia
- Department of Excellence in Robotics and AI, The BioRobotics Institute, Pisa, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Isabel Vilaseca
- Functional Unit of Head and Neck Tumors, Hospital Clínic, Barcelona, Spain
- Otorhinolaryngology Department, Hospital Clínic, Barcelona, Spain
- Head Neck Clínic, Agència de Gestió d'Ajuts Universitaris i de Recerca, Barcelona, Catalunya, Spain
- Surgery and Medical-Surgical Specialties Department, Faculty of Medicine and Health Sciences, Universitat de Barcelona, Barcelona, Spain
- Translational Genomics and Target Therapies in Solid Tumors Group, Faculty of Medicine, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- University of Barcelona, Barcelona, Spain
- Giorgio Peretti
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Alessandro Ioppi
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy

6. Zhu JQ, Wang ML, Li Y, Zhang W, Li LJ, Liu L, Zhang Y, Han CJ, Tie CW, Wang SX, Wang GQ, Ni XG. Convolutional neural network based anatomical site identification for laryngoscopy quality control: A multicenter study. Am J Otolaryngol 2023; 44:103695. [PMID: 36473265] [DOI: 10.1016/j.amjoto.2022.103695]
Abstract
OBJECTIVES Video laryngoscopy is an important diagnostic tool for head and neck cancers. Artificial intelligence (AI) systems have been shown to monitor blind spots during esophagogastroduodenoscopy. This study aimed to test the performance of an AI-driven intelligent laryngoscopy monitoring assistant (ILMA), based on a convolutional neural network (CNN), in identifying landmark anatomical sites on laryngoscopic images and videos. MATERIALS AND METHODS Laryngoscopic images taken from January to December 2018 were retrospectively collected, and ILMA was developed using the Inception-ResNet-v2 CNN model combined with Squeeze-and-Excitation Networks (SENet). A total of 16,000 laryngoscopic images were used for training. These were assigned to 20 landmark anatomical sites covering six major head and neck regions. The performance of ILMA in identifying anatomical sites was then validated using 4000 laryngoscopic images and 25 videos provided by five other tertiary hospitals. RESULTS ILMA identified the 20 anatomical sites on the laryngoscopic images with an overall accuracy of 97.60%, and the average sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 100%, 99.87%, 97.65%, and 99.87%, respectively. In addition, multicenter clinical verification showed that the accuracy of ILMA in identifying the 20 targeted anatomical sites in 25 laryngoscopic videos from five hospitals was ≥95%. CONCLUSION The proposed CNN-based ILMA model can rapidly and accurately identify anatomical sites on laryngoscopic images. The model can reflect the coverage of head and neck anatomical regions during laryngoscopy, showing application potential for improving the quality of laryngoscopy.
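The per-site sensitivity, specificity, PPV, and NPV reported above are standard one-vs-rest confusion-matrix quantities: for each anatomical site, every other site is treated as a negative. A minimal sketch (the site labels and function name are illustrative, not the study's data or code):

```python
def one_vs_rest_metrics(y_true, y_pred, site):
    """Sensitivity, specificity, PPV, and NPV for one anatomical site,
    treating all other sites as negatives."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == site and p == site for t, p in pairs)
    fp = sum(t != site and p == site for t, p in pairs)
    fn = sum(t == site and p != site for t, p in pairs)
    tn = sum(t != site and p != site for t, p in pairs)
    safe = lambda num, den: num / den if den else 0.0
    return {
        "sensitivity": safe(tp, tp + fn),  # recall on the site
        "specificity": safe(tn, tn + fp),
        "ppv": safe(tp, tp + fp),          # positive predictive value
        "npv": safe(tn, tn + fn),          # negative predictive value
    }
```

Averaging these per-site dictionaries over all 20 sites would yield macro-averaged figures of the kind quoted in the abstract.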
Affiliation(s)
- Ji-Qing Zhu
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Mei-Ling Wang
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Ying Li
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Wei Zhang
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Li-Juan Li
- Department of Otorhinolaryngology, The People's Hospital of Wenshan Prefecture, Wenshan, Yunnan, China
- Lin Liu
- Department of Otolaryngology-Head and Neck Surgery, Dalian Municipal Friendship Hospital, Dalian, Liaoning, China
- Yan Zhang
- Department of Otorhinolaryngology, Chongqing Traditional Chinese Medicine Hospital, Chongqing, China
- Cai-Juan Han
- Department of Otolaryngology-Head and Neck Surgery, Qilu Hospital (Qingdao), Cheeloo College of Medicine, Shandong University, Qingdao, Shandong, China
- Cheng-Wei Tie
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Shi-Xu Wang
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Gui-Qi Wang
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xiao-Guang Ni
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China

7. Intraoperative Imaging Techniques to Improve Surgical Resection Margins of Oropharyngeal Squamous Cell Cancer: A Comprehensive Review of Current Literature. Cancers (Basel) 2023; 15:896. [PMID: 36765858] [PMCID: PMC9913756] [DOI: 10.3390/cancers15030896]
Abstract
Inadequate resection margins in head and neck squamous cell carcinoma surgery necessitate adjuvant therapies such as re-resection and radiotherapy with or without chemotherapy and imply increasing morbidity and worse prognosis. On the other hand, taking larger margins by extending the resection also leads to avoidable increased morbidity. Oropharyngeal squamous cell carcinomas (OPSCCs) are often difficult to access; resections are limited by anatomy and functionality and thus carry an increased risk for close or positive margins. Therefore, there is a need to improve intraoperative assessment of resection margins. Several intraoperative techniques are available, but these often lead to prolonged operative time and are only suitable for a subgroup of patients. In recent years, new diagnostic tools have been the subject of investigation. This study reviews the available literature on intraoperative techniques to improve resection margins for OPSCCs. A literature search was performed in Embase, PubMed, and Cochrane. Narrow band imaging (NBI), high-resolution microendoscopic imaging, confocal laser endomicroscopy, frozen section analysis (FSA), ultrasound (US), computed tomography scan (CT), (auto) fluorescence imaging (FI), and augmented reality (AR) have all been used for OPSCC. NBI, FSA, and US are most commonly used and increase the rate of negative margins. Other techniques will become available in the future, of which fluorescence imaging has high potential for use with OPSCC.

8. Azam MA, Sampieri C, Ioppi A, Benzi P, Giordano GG, De Vecchi M, Campagnari V, Li S, Guastini L, Paderno A, Moccia S, Piazza C, Mattos LS, Peretti G. Videomics of the Upper Aero-Digestive Tract Cancer: Deep Learning Applied to White Light and Narrow Band Imaging for Automatic Segmentation of Endoscopic Images. Front Oncol 2022; 12:900451. [PMID: 35719939] [PMCID: PMC9198427] [DOI: 10.3389/fonc.2022.900451]
Abstract
Introduction Narrow Band Imaging (NBI) is an endoscopic visualization technique useful for upper aero-digestive tract (UADT) cancer detection and margin evaluation. However, NBI analysis is strongly operator-dependent and requires high expertise, thus limiting its wider implementation. Recently, artificial intelligence (AI) has demonstrated potential for applications in UADT videoendoscopy. Among AI methods, deep learning algorithms, and especially convolutional neural networks (CNNs), are particularly suitable for delineating cancers on videoendoscopy. This study aimed to develop a CNN for automatic semantic segmentation of UADT cancer on endoscopic images. Materials and Methods A dataset of white light and NBI videoframes of laryngeal squamous cell carcinoma (LSCC) was collected and manually annotated. A novel DL segmentation model (SegMENT) was designed. SegMENT relies on the DeepLabV3+ CNN architecture, modified using Xception as a backbone and incorporating ensemble features from other CNNs. The performance of SegMENT was compared to state-of-the-art CNNs (UNet, ResUNet, and DeepLabv3). SegMENT was then validated on two external datasets of NBI images of oropharyngeal (OPSCC) and oral cavity SCC (OCSCC) obtained from a previously published study. The impact of in-domain transfer learning through an ensemble technique was evaluated on the external datasets. Results 219 LSCC patients were retrospectively included in the study. A total of 683 videoframes composed the LSCC dataset, while the external validation cohorts of OPSCC and OCSCC contained 116 and 102 images, respectively. On the LSCC dataset, SegMENT outperformed the other DL models, obtaining the following median values: 0.68 intersection over union (IoU), 0.81 dice similarity coefficient (DSC), 0.95 recall, 0.78 precision, and 0.97 accuracy. For the OCSCC and OPSCC datasets, results were superior compared to previously published data: the median performance metrics improved, respectively, as follows: DSC = 10.3% and 11.9%, recall = 15.0% and 5.1%, precision = 17.0% and 14.7%, accuracy = 4.1% and 10.3%. Conclusion SegMENT achieved promising performance, showing that automatic tumor segmentation in endoscopic images is feasible even within the highly heterogeneous and complex UADT environment. SegMENT outperformed the previously published results on the external validation cohorts. The model demonstrated potential for improved detection of early tumors, more precise biopsies, and better selection of resection margins.
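The IoU and Dice similarity coefficient (DSC) used above to score segmentation quality are closely related set-overlap measures, linked by DSC = 2·IoU / (1 + IoU). A minimal sketch, under the simplifying assumption that a segmentation mask is represented as a set of pixel coordinates rather than an image array:

```python
def iou(mask_a, mask_b):
    """Intersection over union of two pixel-coordinate sets."""
    union = mask_a | mask_b
    return len(mask_a & mask_b) / len(union) if union else 0.0

def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    total = len(mask_a) + len(mask_b)
    return 2 * len(mask_a & mask_b) / total if total else 0.0
```

Because DSC weights the intersection twice, it is always at least as large as IoU for the same prediction, which is consistent with the abstract's medians (DSC 0.81 vs. IoU 0.68).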
Affiliation(s)
- Muhammad Adeel Azam
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Claudio Sampieri
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Alessandro Ioppi
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Pietro Benzi
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Giorgio Gregory Giordano
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Marta De Vecchi
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Valentina Campagnari
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Shunlei Li
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Luca Guastini
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Alberto Paderno
- Unit of Otorhinolaryngology - Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Cesare Piazza
- Unit of Otorhinolaryngology - Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgio Peretti
- Unit of Otorhinolaryngology - Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy

9. Nakajo K, Inaba A, Aoyama N, Takashima K, Kadota T, Yoda Y, Morishita Y, Okano W, Tomioka T, Shinozaki T, Matsuura K, Hayashi R, Akimoto T, Yano T. The characteristics of missed pharyngeal and laryngeal cancers at gastrointestinal endoscopy. Jpn J Clin Oncol 2022; 52:575-582. [DOI: 10.1093/jjco/hyac036]
Abstract
Objectives
Understanding the miss rate and characteristics of missed pharyngeal and laryngeal cancers during upper gastrointestinal endoscopy may aid in reducing the endoscopic miss rate of this cancer type. However, little is known regarding the miss rate and characteristics of such cancers. Therefore, the aim of this study was to investigate the upper gastrointestinal endoscopic miss rate of oro-hypopharyngeal and laryngeal cancers, the characteristics of the missed cancers, and risk factors associated with the missed cancers.
Methods
Patients who underwent upper gastrointestinal endoscopy and were pathologically diagnosed with oro-hypopharyngeal and laryngeal squamous cell carcinoma from January 2019 to November 2020 at our institution were retrospectively evaluated. Missed cancers were defined as those diagnosed within 15 months after a negative upper gastrointestinal endoscopy.
Results
A total of 240 lesions were finally included. Eighty-five lesions were classified as missed cancers, and 155 lesions as non-missed cancers. The upper gastrointestinal endoscopic miss rate for oro-hypopharyngeal and laryngeal cancers was 35.4%. Multivariate analysis revealed that a tumor size of <13 mm (odds ratio: 1.96, P=0.026), tumors located on the anterior surface of the epiglottis/valleculae (odds ratio: 2.98, P=0.045) and inside of the pyriform sinus (odds ratio: 2.28, P=0.046) were associated with missed cancers.
Conclusions
This study revealed a high miss rate of oro-hypopharyngeal and laryngeal cancers during endoscopic observations. High-quality upper gastrointestinal endoscopic observation and awareness of missed cancer may help reduce this rate.
Affiliation(s)
- Keiichiro Nakajo
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Japan
- Cancer Medicine, Cooperative Graduate School, The Jikei University Graduate School of Medicine, Tokyo, Japan
- Atsushi Inaba
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Japan
- Youhei Morishita
- Department of Head and Neck Surgery, National Cancer Center Hospital East, Kashiwa, Japan
- Tetsuo Akimoto
- Cancer Medicine, Cooperative Graduate School, The Jikei University Graduate School of Medicine, Tokyo, Japan
- Department of Radiation Oncology and Particle Therapy, National Cancer Center Hospital East, Kashiwa, Japan
- Tomonori Yano
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Japan

10. Nagao S, Tani Y, Shibata J, Tsuji Y, Tada T, Ishihara R, Fujishiro M. Implementation of artificial intelligence in upper gastrointestinal endoscopy. DEN Open 2022; 2:e72. [PMID: 35873509] [PMCID: PMC9302271] [DOI: 10.1002/deo2.72]
Abstract
The application of artificial intelligence (AI) using deep learning has significantly expanded in the field of esophagogastric endoscopy. Recent studies have shown promising results in detecting and differentiating early gastric cancer using AI tools built using white light, magnified, or image‐enhanced endoscopic images. Some studies have reported the use of AI tools to predict the depth of early gastric cancer based on endoscopic images. Similarly, studies based on using AI for detecting early esophageal cancer have also been reported, with an accuracy comparable to that of endoscopy specialists. Moreover, an AI system, developed to diagnose pharyngeal cancer, has shown promising performance with high sensitivity. These reports suggest that, if introduced for regular use in clinical settings, AI systems can significantly reduce the burden on physicians. This review summarizes the current status of AI applications in the upper gastrointestinal tract and presents directions for clinical practice implementation and future research.
Affiliation(s)
- Sayaka Nagao
- Department of Gastroenterology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
- Department of Endoscopy and Endoscopic Surgery, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
- Yasuhiro Tani
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
- Junichi Shibata
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
- Yosuke Tsuji
- Department of Gastroenterology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
- Tomohiro Tada
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
- AI Medical Service Inc., Tokyo, Japan
- Department of Surgical Oncology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
- Ryu Ishihara
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
- Mitsuhiro Fujishiro
- Department of Gastroenterology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
11
Ikenoyama Y, Yoshio T, Tokura J, Naito S, Namikawa K, Tokai Y, Yoshimizu S, Horiuchi Y, Ishiyama A, Hirasawa T, Tsuchida T, Katayama N, Tada T, Fujisaki J. Artificial intelligence diagnostic system predicts multiple Lugol-voiding lesions in the esophagus and patients at high risk for esophageal squamous cell carcinoma. Endoscopy 2021; 53:1105-1113. [PMID: 33540446 DOI: 10.1055/a-1334-4053]
Abstract
BACKGROUND An esophagus showing multiple Lugol-voiding lesions (LVLs) after iodine staining is known to carry a high risk of esophageal cancer; however, it is preferable to identify high-risk cases without staining, because iodine causes discomfort and prolongs examination times. This study assessed the capability of an artificial intelligence (AI) system to predict multiple LVLs from images that had not been stained with iodine, and thereby to identify patients at high risk for esophageal cancer. METHODS We constructed the AI system by preparing a training set of 6634 images from white-light and narrow-band imaging in 595 patients before they underwent endoscopic examination with iodine staining. Diagnostic performance was evaluated on an independent validation dataset (667 images from 72 patients) and compared with that of 10 experienced endoscopists. RESULTS The sensitivity, specificity, and accuracy of the AI system in predicting multiple LVLs were 84.4%, 70.0%, and 76.4%, respectively, compared with 46.9%, 77.5%, and 63.9%, respectively, for the endoscopists. The AI system had significantly higher sensitivity than 9 of the 10 experienced endoscopists. We also identified six endoscopic findings that were significantly more frequent in patients with multiple LVLs; however, the AI system had greater sensitivity than these findings for the prediction of multiple LVLs. Moreover, patients with AI-predicted multiple LVLs had significantly more cancers in the esophagus and the head and neck than patients without predicted multiple LVLs. CONCLUSION The AI system could predict multiple LVLs with high sensitivity from images without iodine staining, and could enable endoscopists to apply iodine staining more judiciously.
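For readers unfamiliar with how these headline figures relate to one another, the sketch below shows the standard confusion-matrix arithmetic behind sensitivity, specificity, and accuracy. The counts are hypothetical (not taken from the study) and are chosen only so that the outputs reproduce the percentages quoted above for a 72-patient validation set.

```python
# Standard diagnostic metrics from binary confusion-matrix counts.
# The example counts below are illustrative, not the study's data.

def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute sensitivity, specificity, and accuracy from counts."""
    sensitivity = tp / (tp + fn)               # fraction of true positives found
    specificity = tn / (tn + fp)               # fraction of true negatives found
    accuracy = (tp + tn) / (tp + fp + tn + fn) # overall agreement
    return {"sensitivity": sensitivity,
            "specificity": specificity,
            "accuracy": accuracy}

# Hypothetical counts for a 72-case set (32 positives, 40 negatives)
m = binary_metrics(tp=27, fp=12, tn=28, fn=5)
print({k: round(v, 3) for k, v in m.items()})
# → sensitivity 0.844, specificity 0.700, accuracy 0.764
```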
Affiliation(s)
- Yohei Ikenoyama
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan; Department of Hematology and Oncology, Mie University Graduate School of Medicine, Mie, Japan
- Toshiyuki Yoshio
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
- Junki Tokura
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Sakiko Naito
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Ken Namikawa
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Yoshitaka Tokai
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Shoichi Yoshimizu
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Yusuke Horiuchi
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Akiyoshi Ishiyama
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Toshiaki Hirasawa
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
- Tomohiro Tsuchida
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Naoyuki Katayama
- Department of Hematology and Oncology, Mie University Graduate School of Medicine, Mie, Japan
- Tomohiro Tada
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan; AI Medical Service Inc., Tokyo, Japan; Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Junko Fujisaki
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
12
Oka A, Ishimura N, Ishihara S. A New Dawn for the Use of Artificial Intelligence in Gastroenterology, Hepatology and Pancreatology. Diagnostics (Basel) 2021; 11:1719. [PMID: 34574060 PMCID: PMC8468082 DOI: 10.3390/diagnostics11091719]
Abstract
Artificial intelligence (AI) is rapidly becoming an essential tool in the medical field as well as in daily life. Recent developments in deep learning, a subfield of AI, have brought remarkable advances in image recognition, which facilitates improvement in the early detection of cancer by endoscopy, ultrasonography, and computed tomography. In addition, AI-assisted big data analysis represents a great step forward for precision medicine. This review provides an overview of AI technology, particularly for gastroenterology, hepatology, and pancreatology, to help clinicians utilize AI in the near future.
Affiliation(s)
- Akihiko Oka
- Department of Internal Medicine II, Faculty of Medicine, Shimane University, Izumo 693-8501, Shimane, Japan
13
Role of Artificial Intelligence in the Early Diagnosis of Oral Cancer. A Scoping Review. Cancers (Basel) 2021; 13:4600. [PMID: 34572831 PMCID: PMC8467703 DOI: 10.3390/cancers13184600]
Abstract
The early diagnosis of cancer can facilitate subsequent clinical patient management. Artificial intelligence (AI) has been found promising for improving the diagnostic process. The aim of the present study was to increase the evidence on the application of AI to the early diagnosis of oral cancer through a scoping review. A search was performed in the PubMed, Web of Science, Embase, and Google Scholar databases for the period from January 2000 to December 2020, covering the early non-invasive diagnosis of oral cancer based on AI applied to screening. Only accessible full-text articles were considered. Thirty-six studies were included on the early detection of oral cancer from images (photographs obtained with optical imaging and enhancement technology, and cytology) analyzed with AI models. These studies were markedly heterogeneous: each publication involved a different algorithm, with potential training-data bias and few comparative data for AI interpretation. Artificial intelligence may play an important role in precisely predicting the development of oral cancer, though several methodological issues need to be addressed in parallel with advances in AI techniques in order to allow large-scale transfer of the latter to population-based detection protocols.
14
van Schaik JE, Halmos GB, Witjes MJH, Plaat BEC. An overview of the current clinical status of optical imaging in head and neck cancer with a focus on Narrow Band imaging and fluorescence optical imaging. Oral Oncol 2021; 121:105504. [PMID: 34454339 DOI: 10.1016/j.oraloncology.2021.105504]
Abstract
Early and accurate identification of head and neck squamous cell carcinoma (HNSCC) is important to improve treatment outcomes and prognosis. New optical imaging techniques may assist both in the diagnostic process and in the operative setting through real-time visualization and delineation of tumors. Narrow Band Imaging (NBI) is an endoscopic technique that uses blue and green light to enhance mucosal and submucosal blood vessels, leading to better detection of (pre)malignant lesions showing aberrant blood vessel patterns. Fluorescence optical imaging makes use of near-infrared fluorescent agents to visualize and delineate HNSCC, resulting in fewer positive surgical margins. Targeted fluorescent agents, such as fluorophores conjugated to antibodies, show the most promising results. The aims of this review are: (1) to provide the clinical head and neck surgeon an overview of the current clinical status of various optical imaging techniques in head and neck cancer; (2) to provide an in-depth review of NBI and fluorescence optical imaging, as these techniques have the highest potential for clinical implementation; and (3) to describe future improvements and developments within the field of these two techniques.
Affiliation(s)
- Jeroen E van Schaik
- Department of Otorhinolaryngology, Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Gyorgy B Halmos
- Department of Otorhinolaryngology, Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Max J H Witjes
- Department of Oral and Maxillofacial Surgery, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Boudewijn E C Plaat
- Department of Otorhinolaryngology, Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
15
Abe S, Oda I. Real-time pharyngeal cancer detection utilizing artificial intelligence: Journey from the proof of concept to the clinical use. Dig Endosc 2021; 33:552-553. [PMID: 33029824 DOI: 10.1111/den.13833]
Affiliation(s)
- Seiichiro Abe
- Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
- Ichiro Oda
- Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
16
Musulin J, Štifanić D, Zulijani A, Ćabov T, Dekanić A, Car Z. An Enhanced Histopathology Analysis: An AI-Based System for Multiclass Grading of Oral Squamous Cell Carcinoma and Segmenting of Epithelial and Stromal Tissue. Cancers (Basel) 2021; 13:1784. [PMID: 33917952 PMCID: PMC8068326 DOI: 10.3390/cancers13081784]
Abstract
Oral squamous cell carcinoma is the most frequent histological neoplasm among head and neck cancers; although it arises in a region that is readily accessible to inspection and could therefore be detected very early, this usually does not occur. The standard procedure for the diagnosis of oral cancer is histopathological examination; however, tumor heterogeneity introduces a subjective component into the examination that can directly affect patient-specific treatment decisions. For this reason, artificial intelligence (AI) algorithms are widely used as a computational aid to diagnosis, for the classification and segmentation of tumors, in order to reduce inter- and intra-observer variability. In this research, a two-stage AI-based system for automatic multiclass grading (the first stage) and segmentation of epithelial and stromal tissue (the second stage) from oral histopathological images is proposed to assist the clinician in oral squamous cell carcinoma diagnosis. The integration of Xception and SWT yielded the highest classification values of 0.963 (σ = 0.042) AUCmacro and 0.966 (σ = 0.027) AUCmicro, while DeepLabv3+ with an Xception_65 backbone and data preprocessing achieved a semantic segmentation performance of 0.878 (σ = 0.027) mIoU and an F1 score of 0.955 (σ = 0.014). The obtained results reveal that the proposed AI-based system has great potential in the diagnosis of OSCC.
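The abstract reports both macro- and micro-averaged AUC for multiclass grading. As an illustration only (hypothetical labels and scores, not the study's data), the sketch below computes AUC by its rank-based pairwise definition and contrasts the two averaging schemes: macro averages the per-class one-vs-rest AUCs, while micro pools all one-vs-rest decisions before computing a single AUC.

```python
# Hypothetical illustration of macro- vs micro-averaged AUC.
# AUC here is the rank-based (pairwise) definition: the probability
# that a random positive outscores a random negative (ties count 0.5).

def auc(labels, scores):
    """AUC = P(score of random positive > score of random negative)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 * (p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Three hypothetical tumor grades, one-vs-rest binarized; the "scorer"
# here is deliberately perfect (one-hot), so both averages come out 1.0.
y = [0, 1, 2, 2, 1, 0]
onehot = [[1 if c == yi else 0 for c in range(3)] for yi in y]
scores = [[1.0 if c == yi else 0.0 for c in range(3)] for yi in y]

# Macro: mean of per-class AUCs. Micro: pool all decisions, then one AUC.
per_class = [auc([r[c] for r in onehot], [r[c] for r in scores]) for c in range(3)]
auc_macro = sum(per_class) / len(per_class)
auc_micro = auc([v for r in onehot for v in r], [v for r in scores for v in r])
print(auc_macro, auc_micro)  # a perfect scorer gives 1.0 for both
```

With an imperfect scorer the two averages diverge when per-class performance is unbalanced, which is why papers often report both.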
Affiliation(s)
- Jelena Musulin
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Daniel Štifanić
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Ana Zulijani
- Department of Oral Surgery, Clinical Hospital Center Rijeka, Krešimirova Ul. 40, 51000 Rijeka, Croatia
- Tomislav Ćabov
- Faculty of Dental Medicine, University of Rijeka, Krešimirova Ul. 40, 51000 Rijeka, Croatia
- Andrea Dekanić
- Department of Pathology and Cytology, Clinical Hospital Center Rijeka, Krešimirova Ul. 42, 51000 Rijeka, Croatia
- Faculty of Medicine, University of Rijeka, Ul. Braće Branchetta 20/1, 51000 Rijeka, Croatia
- Zlatan Car
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
17
Ability of artificial intelligence to detect T1 esophageal squamous cell carcinoma from endoscopic videos and the effects of real-time assistance. Sci Rep 2021; 11:7759. [PMID: 33833355 PMCID: PMC8032773 DOI: 10.1038/s41598-021-87405-6]
Abstract
Diagnosis using artificial intelligence (AI) with deep learning could be useful in endoscopic examinations. We investigated the ability of AI to detect superficial esophageal squamous cell carcinoma (ESCC) in esophagogastroduodenoscopy (EGD) videos. We retrospectively collected 8428 EGD images of esophageal cancer to develop a convolutional neural network through deep learning, and evaluated the detection accuracy of the AI diagnosing system against that of 18 endoscopists. We used 144 EGD videos in two validation sets. First, we used 64 EGD observation videos of ESCCs recorded with both white-light imaging (WLI) and narrow-band imaging (NBI). We then evaluated the system using 80 EGD videos from 40 patients (20 with superficial ESCC and 20 with non-ESCC). In the first set, the AI system correctly diagnosed 100% of the ESCCs. In the second set, it correctly detected 85% (17/20) of the ESCCs; of these, 75% (15/20) were detected by WLI and 55% (11/20) by NBI, and the positive predictive value was 36.7%. The endoscopists correctly detected 45% (range 25-70%) of the ESCCs. With real-time AI assistance, the sensitivities of the endoscopists improved significantly compared with their performance without AI assistance (p < 0.05). AI can detect superficial ESCCs in EGD videos with high sensitivity, and endoscopist sensitivity improved with real-time AI support.
18
Paderno A, Piazza C, Del Bon F, Lancini D, Tanagli S, Deganello A, Peretti G, De Momi E, Patrini I, Ruperti M, Mattos LS, Moccia S. Deep Learning for Automatic Segmentation of Oral and Oropharyngeal Cancer Using Narrow Band Imaging: Preliminary Experience in a Clinical Perspective. Front Oncol 2021; 11:626602. [PMID: 33842330 PMCID: PMC8024583 DOI: 10.3389/fonc.2021.626602]
Abstract
Introduction Fully convolutional neural networks (FCNNs) applied to video analysis are of particular interest in the field of head and neck oncology, given that endoscopic examination is a crucial step in the diagnosis, staging, and follow-up of patients affected by upper aerodigestive tract cancers. The aim of this study was to test FCNN-based methods for semantic segmentation of squamous cell carcinoma (SCC) of the oral cavity (OC) and oropharynx (OP). Materials and Methods Two datasets were retrieved from the institutional registry of a tertiary academic hospital, analyzing 34 and 45 NBI endoscopic videos of OC and OP lesions, respectively. The OC dataset comprised 110 frames, while 116 frames composed the OP dataset. Three FCNNs (U-Net, U-Net 3, and ResNet) were investigated for segmenting the neoplastic images. FCNN performance was evaluated for each tested network and compared with the gold standard, represented by manual annotation performed by expert clinicians. Results For FCNN-based segmentation of the OC dataset, the best results in terms of Dice Similarity Coefficient (Dsc) were achieved by ResNet with 5(×2) blocks and 16 filters, with a median value of 0.6559. In FCNN-based segmentation of the OP dataset, the best results in terms of Dsc were achieved by ResNet with 4(×2) blocks and 16 filters, with a median value of 0.7603. All tested FCNNs presented very high variance, leading to very low minima for all evaluated metrics. Conclusions FCNNs have promising potential in the analysis and segmentation of OC and OP video-endoscopic images, and all tested architectures demonstrated satisfactory outcomes in terms of diagnostic accuracy. The inference times of the processing networks were particularly short, ranging between 14 and 115 ms, showing the possibility of real-time application.
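The Dice similarity coefficient (Dsc) reported above, and the related intersection-over-union (IoU) used elsewhere in this list, are overlap metrics computed between a predicted segmentation mask and a manually annotated one. A minimal NumPy sketch (toy masks, not frames from the study's datasets):

```python
import numpy as np

# Overlap metrics on binary segmentation masks; the tiny masks below
# are purely illustrative.

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dsc = 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU = |A ∩ B| / |A ∪ B|; monotonically related to Dsc."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)  # toy predicted mask
gt   = np.array([[1, 1, 0], [0, 0, 1]], dtype=bool)  # toy ground truth
print(dice(pred, gt), iou(pred, gt))  # → 0.666..., 0.5
```

Since Dsc = 2·IoU/(1 + IoU), the two metrics rank methods identically; they differ only in scale.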
Affiliation(s)
- Alberto Paderno
- Department of Otorhinolaryngology-Head and Neck Surgery, ASST-Spedali Civili of Brescia, University of Brescia, Brescia, Italy
- Cesare Piazza
- Department of Otorhinolaryngology-Head and Neck Surgery, ASST-Spedali Civili of Brescia, University of Brescia, Brescia, Italy
- Francesca Del Bon
- Department of Otorhinolaryngology-Head and Neck Surgery, ASST-Spedali Civili of Brescia, University of Brescia, Brescia, Italy
- Davide Lancini
- Department of Otorhinolaryngology-Head and Neck Surgery, ASST-Spedali Civili of Brescia, University of Brescia, Brescia, Italy
- Stefano Tanagli
- Department of Otorhinolaryngology-Head and Neck Surgery, ASST-Spedali Civili of Brescia, University of Brescia, Brescia, Italy
- Alberto Deganello
- Department of Otorhinolaryngology-Head and Neck Surgery, ASST-Spedali Civili of Brescia, University of Brescia, Brescia, Italy
- Giorgio Peretti
- Department of Otorhinolaryngology-Head and Neck Surgery, IRCCS San Martino Hospital, University of Genoa, Genoa, Italy
- Elena De Momi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Ilaria Patrini
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Michela Ruperti
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Sara Moccia
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy; The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy; Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
19
Lehmann J, Cofala T, Tschuggnall M, Giesinger JM, Rumpold G, Holzner B. Machine learning in oncology—Perspectives in patient-reported outcome research. Der Onkologe 2021. [DOI: 10.1007/s00761-021-00916-9]
Abstract
BACKGROUND Increasing data volumes in oncology pose new challenges for data analysis. Machine learning, a branch of artificial intelligence, can identify patterns even in very large and less structured datasets. OBJECTIVE This article provides an overview of the possible applications of machine learning in oncology and discusses its potential in patient-reported outcome (PRO) research. MATERIALS AND METHODS We conducted a selective literature search (PubMed, MEDLINE, IEEE Xplore) and discuss current research. RESULTS There are three primary applications of machine learning in oncology: (1) cancer detection or classification; (2) overall survival prediction or risk assessment; and (3) supporting therapy decision-making and predicting treatment response. Machine learning approaches in oncology PRO research are generally scarce, and few studies integrate PRO data into machine learning models. DISCUSSION Machine learning is a promising area of oncology, but few models have been transferred into clinical practice, and the promise of personalized cancer therapy and shared decision-making through machine learning has yet to be realized. As an equally important emerging research area in oncology, PROs should also be incorporated into machine learning approaches. To gather the data necessary for this, broad implementation of PRO assessments in clinical practice, as well as harmonization of existing datasets, is suggested.
20
Abstract
PURPOSE OF REVIEW Machine learning (ML) algorithms have augmented human judgment in various fields of clinical medicine. However, little progress has been made in applying these tools to video-endoscopy. We reviewed the field of video-analysis (herein termed 'Videomics' for the first time) as applied to diagnostic endoscopy, assessing its preliminary findings, potential, as well as limitations, and consider future developments. RECENT FINDINGS ML has been applied to diagnostic endoscopy with different aims: blind-spot detection, automatic quality control, lesion detection, classification, and characterization. The early experience in gastrointestinal endoscopy has recently been expanded to the upper aerodigestive tract, demonstrating promising results in both clinical fields. From top to bottom, multispectral imaging (such as Narrow Band Imaging) appeared to provide significant information drawn from endoscopic images. SUMMARY Videomics is an emerging discipline that has the potential to significantly improve human detection and characterization of clinically significant lesions during endoscopy across medical and surgical disciplines. Research teams should focus on the standardization of data collection, identification of common targets, and optimal reporting. With such a collaborative stepwise approach, Videomics is likely to soon augment clinical endoscopy, significantly impacting cancer patient outcomes.
21
Sinonquel P, Eelbode T, Bossuyt P, Maes F, Bisschops R. Artificial intelligence and its impact on quality improvement in upper and lower gastrointestinal endoscopy. Dig Endosc 2021; 33:242-253. [PMID: 33145847 DOI: 10.1111/den.13888]
Abstract
Artificial intelligence (AI) and its application in medicine have attracted considerable interest. Within gastrointestinal (GI) endoscopy, colonoscopy and polyp detection is the most investigated field; however, upper GI endoscopy follows close behind. Since endoscopy is performed by humans, it is inherently an imperfect procedure. Computer-aided diagnosis may improve its quality by helping to prevent missed lesions and by supporting optical diagnosis of those detected. AI systems have evolved substantially over recent decades, optimizing diagnostic performance with lower variability and matching or even outperforming expert endoscopists. This shows great potential for future quality improvement of endoscopy, given the outstanding diagnostic capabilities of AI. In this narrative review, we highlight the potential benefit of AI for improving overall quality in daily endoscopy, and describe the most recent developments in characterization and diagnosis as well as the current conditions for regulatory approval.
Affiliation(s)
- Pieter Sinonquel
- Department of Gastroenterology and Hepatology, University Hospitals Leuven, Leuven, Belgium; Department of Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven, Leuven, Belgium
- Tom Eelbode
- Medical Imaging Research Center (MIRC), University Hospitals Leuven, Leuven, Belgium; Department of Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
- Peter Bossuyt
- Department of Gastroenterology and Hepatology, University Hospitals Leuven, Leuven, Belgium; Department of Gastroenterology and Hepatology, Imelda Hospital, Bonheiden, Belgium
- Frederik Maes
- Medical Imaging Research Center (MIRC), University Hospitals Leuven, Leuven, Belgium; Department of Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
- Raf Bisschops
- Department of Gastroenterology and Hepatology, University Hospitals Leuven, Leuven, Belgium; Department of Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven, Leuven, Belgium
22
Suzuki H, Yoshitaka T, Yoshio T, Tada T. Artificial intelligence for cancer detection of the upper gastrointestinal tract. Dig Endosc 2021; 33:254-262. [PMID: 33222330 DOI: 10.1111/den.13897]
Abstract
In recent years, artificial intelligence (AI) has proved useful to physicians in the field of image recognition, owing to three elements: deep learning (that is, convolutional neural networks, CNNs), high-performance computing, and large amounts of digitized data. In the field of gastrointestinal endoscopy, Japanese endoscopists developed the world's first CNN-based AI systems for detecting gastric and esophageal cancers. This article reviews papers on CNN-based AI for gastrointestinal cancers and discusses the future of this technology in clinical practice. Employing AI-based endoscopes would enable early cancer detection, and the superior diagnostic ability of AI may be particularly beneficial for early gastrointestinal cancers, where the diagnostic abilities and accuracy of endoscopists vary. AI coupled with the expertise of endoscopists would increase the accuracy of endoscopic diagnosis.
Affiliation(s)
- Hideo Suzuki
- Department of Gastroenterology, Graduate School of Institute Clinical Medicine, University of Tsukuba, Ibaraki, Japan
- Tokai Yoshitaka
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Toshiyuki Yoshio
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Tomohiro Tada
- Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; AI Medical Service Inc., Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
23
Boscolo Nata F, Tirelli G, Capriotti V, Marcuzzo AV, Sacchet E, Šuran-Brunelli AN, de Manzini N. NBI utility in oncologic surgery: An organ by organ review. Surg Oncol 2020; 36:65-75. [PMID: 33316681 DOI: 10.1016/j.suronc.2020.11.017]
Abstract
The main aims of the oncologic surgeon should be early tumor diagnosis, complete surgical resection, and careful post-treatment follow-up to ensure prompt diagnosis of recurrence. Radiologic and endoscopic methods have traditionally been used for these purposes, but their accuracy can be suboptimal. Technological improvements could help the clinician during the diagnostic and therapeutic management of tumors. Narrow band imaging (NBI) belongs to the optical imaging techniques and uses the characteristics of light to enhance the visualization of tissue vascularization. Because neoangiogenesis is a fundamental step in carcinogenesis, NBI can be useful in the diagnostic and therapeutic workup of tumors. Since its introduction in 2001, NBI use has spread rapidly across oncologic specialties, with clear advantages, and there is active interest in this topic, as demonstrated by the thriving literature. Clinicians inevitably acquire in-depth knowledge of NBI as applied to their own specific field, losing the overall view of the topic; by looking at other fields of application, however, they may find ideas to improve NBI use in their own specialty. The aim of this review is to summarize the existing literature on NBI use in oncology and provide the state of the art: we present an overview of NBI fields of application, results, and possible future improvements in the different specialties.
Affiliation(s)
- Francesca Boscolo Nata
- ENT Clinic, Head and Neck Department, Azienda Sanitaria Universitaria Giuliano Isontina, Strada di Fiume 447, 34149, Trieste, Italy; Otorhinolaryngology Unit, Ospedali Riuniti Padova Sud "Madre Teresa di Calcutta", ULSS 6 Euganea, Via Albere 30, 35043, Monselice, PD, Italy.
- Giancarlo Tirelli
- ENT Clinic, Head and Neck Department, Azienda Sanitaria Universitaria Giuliano Isontina, Strada di Fiume 447, 34149, Trieste, Italy.
- Vincenzo Capriotti
- ENT Clinic, Head and Neck Department, Azienda Sanitaria Universitaria Giuliano Isontina, Strada di Fiume 447, 34149, Trieste, Italy.
- Alberto Vito Marcuzzo
- ENT Clinic, Head and Neck Department, Azienda Sanitaria Universitaria Giuliano Isontina, Strada di Fiume 447, 34149, Trieste, Italy.
- Erica Sacchet
- ENT Clinic, Head and Neck Department, Azienda Sanitaria Universitaria Giuliano Isontina, Strada di Fiume 447, 34149, Trieste, Italy.
- Azzurra Nicole Šuran-Brunelli
- ENT Clinic, Head and Neck Department, Azienda Sanitaria Universitaria Giuliano Isontina, Strada di Fiume 447, 34149, Trieste, Italy.
- Nicolò de Manzini
- General Surgery Unit, Department of Medical, Surgical and Health Sciences, Azienda Sanitaria Universitaria Giuliano Isontina, Strada di Fiume 447, 34149, Trieste, Italy.
24
Augmented Realities, Artificial Intelligence, and Machine Learning: Clinical Implications and How Technology Is Shaping the Future of Medicine. J Clin Med 2020; 9:jcm9123811. [PMID: 33255705 PMCID: PMC7761251 DOI: 10.3390/jcm9123811] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2020] [Revised: 11/18/2020] [Accepted: 11/20/2020] [Indexed: 01/23/2023] Open
Abstract
Technology has been integrated into every facet of human life. Whether this is entirely advantageous remains unknown, but one thing is certain: we are dependent on technology. Medical advances arising from the integration of artificial intelligence, machine learning, and augmented reality are widespread and have helped countless patients. Much of the advanced technology used by medical providers today has been borrowed and extrapolated from other industries. There remains little collaboration between providers and engineers, which may be why medicine is still in its infancy of innovation with regard to advanced technologic integration. The purpose of this narrative review is to highlight the technologies currently being used across a variety of medical specialties. Furthermore, by bringing attention to this shortcoming of the medical community, we hope to inspire future innovators to seek collaboration beyond the purely medical community for the betterment of all patients seeking care.
25
Goshtasbi K, Yasaka TM, Zandi-Toghani M, Djalilian HR, Armstrong WB, Tjoa T, Haidar YM, Abouzari M. Machine learning models to predict length of stay and discharge destination in complex head and neck surgery. Head Neck 2020; 43:788-797. [PMID: 33142001 DOI: 10.1002/hed.26528] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2020] [Revised: 10/13/2020] [Accepted: 10/23/2020] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND This study develops machine learning (ML) algorithms that use preoperative-only features to predict discharge to a nonhome facility (DNHF) and length of stay (LOS) following complex head and neck surgery. METHODS Patients undergoing laryngectomy or composite tissue excision followed by free tissue transfer were extracted from the 2005-2017 NSQIP database. RESULTS Among the 2786 included patients, DNHF occurred in 421 (15.1%), and mean LOS was 11.7 ± 8.8 days. Four classification models for predicting DNHF with high specificity (range, 0.80-0.84) were developed. The generalized linear and gradient boosting machine models performed best, with area under the receiver operating characteristic curve, accuracy, and negative predictive value (NPV) of 0.72-0.73, 0.75-0.76, and 0.88-0.89, respectively. Four regression models for predicting LOS in days were developed; all performed similarly, with mean absolute errors of 3.95-3.98 and root mean-squared errors of 5.14-5.16. Both sets of models were deployed as an encrypted web-based interface: https://uci-ent.shinyapps.io/head-neck/. CONCLUSION Novel, proof-of-concept ML models to predict DNHF and LOS were developed and published as web-based interfaces.
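As an illustration only (not the authors' code, features, or data): the abstract's approach of training a gradient boosting classifier on preoperative-only features to predict a binary DNHF outcome, evaluated by ROC AUC, could be sketched as below. All feature names and data here are synthetic placeholders.

```python
# Hedged sketch of a DNHF-style binary classifier on preoperative features.
# Synthetic data stands in for the NSQIP cohort described in the abstract.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Four hypothetical preoperative features (e.g., age, albumin, ASA class, BMI)
X = rng.normal(size=(n, 4))
# Synthetic outcome loosely tied to two features, ~15% positive as in the cohort
p = 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 2.0)))
y = (rng.random(n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"test ROC AUC: {auc:.2f}")
```

In practice the study also tuned for high specificity (0.80-0.84), which would correspond to choosing a decision threshold on `predict_proba` rather than using the default 0.5 cutoff.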
Affiliation(s)
- Khodayar Goshtasbi
- Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, California, USA
- Tyler M Yasaka
- Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, California, USA
- Mehdi Zandi-Toghani
- Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, California, USA
- Hamid R Djalilian
- Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, California, USA; Department of Biomedical Engineering, University of California, Irvine, California, USA
- William B Armstrong
- Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, California, USA
- Tjoson Tjoa
- Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, California, USA
- Yarah M Haidar
- Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, California, USA
- Mehdi Abouzari
- Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, California, USA
26
Namikawa K, Hirasawa T, Yoshio T, Fujisaki J, Ozawa T, Ishihara S, Aoki T, Yamada A, Koike K, Suzuki H, Tada T. Utilizing artificial intelligence in endoscopy: a clinician's guide. Expert Rev Gastroenterol Hepatol 2020; 14:689-706. [PMID: 32500760 DOI: 10.1080/17474124.2020.1779058] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
INTRODUCTION Artificial intelligence (AI) that surpasses human ability in image recognition is expected to be applied in the field of gastrointestinal endoscopy, and its research and development (R&D) is being actively conducted. As endoscopic diagnosis advances, there is a shortage of specialists who can perform high-precision endoscopy; we examine whether AI, with its excellent image recognition ability, can overcome this problem. AREAS COVERED Since 2016, papers on AI using convolutional neural networks (CNNs, a form of deep learning) have been published. CNNs are generally capable of more accurate detection and classification than conventional machine learning. This is a review of papers using CNNs in gastrointestinal endoscopy, along with the reasons why AI is required in clinical practice. We divide the review into four parts: stomach, esophagus, large intestine, and capsule endoscopy (small intestine). EXPERT OPINION Potential applications of AI include colorectal polyp detection and differentiation, gastric and esophageal cancer detection, and lesion detection in capsule endoscopy. The accuracy of endoscopic diagnosis will increase if the AI and the endoscopist perform the endoscopy together.
Collapse
Affiliation(s)
- Ken Namikawa
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Toshiaki Hirasawa
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Toshiyuki Yoshio
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Junko Fujisaki
- Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Tsuyoshi Ozawa
- Department of Surgery, Teikyo University School of Medicine, Tokyo, Japan
- Soichiro Ishihara
- Department of Surgical Oncology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
- Tomonori Aoki
- Department of Gastroenterology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
- Atsuo Yamada
- Department of Gastroenterology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
- Kazuhiko Koike
- Department of Gastroenterology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
- Hideo Suzuki
- Department of Gastroenterology, Institute of Clinical Medicine, University of Tsukuba, Ibaraki, Japan
- Tomohiro Tada
- Department of Surgical Oncology, Graduate School of Medicine, the University of Tokyo, Tokyo, Japan; AI Medical Service Inc., Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan