1. Cui R, Wang L, Lin L, Li J, Lu R, Liu S, Liu B, Gu Y, Zhang H, Shang Q, Chen L, Tian D. Deep Learning in Barrett's Esophagus Diagnosis: Current Status and Future Directions. Bioengineering (Basel) 2023; 10:1239. PMID: 38002363; PMCID: PMC10669008; DOI: 10.3390/bioengineering10111239.
Abstract
Barrett's esophagus (BE) represents a pre-malignant condition characterized by abnormal cellular proliferation in the distal esophagus. A timely and accurate diagnosis of BE is imperative to prevent its progression to esophageal adenocarcinoma, a malignancy associated with a significantly reduced survival rate. In this digital age, deep learning (DL) has emerged as a powerful tool for medical image analysis and diagnostic applications, showcasing vast potential across various medical disciplines. In this comprehensive review, we meticulously assess 33 primary studies employing varied DL techniques, predominantly featuring convolutional neural networks (CNNs), for the diagnosis and understanding of BE. Our primary focus revolves around evaluating the current applications of DL in BE diagnosis, encompassing tasks such as image segmentation and classification, as well as their potential impact and implications in real-world clinical settings. While the applications of DL in BE diagnosis exhibit promising results, they are not without challenges, such as dataset issues and the "black box" nature of models. We discuss these challenges in the concluding section. Essentially, while DL holds tremendous potential to revolutionize BE diagnosis, addressing these challenges is paramount to harnessing its full capacity and ensuring its widespread application in clinical practice.
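To make the classification task concrete, here is a minimal, hypothetical sketch of the transfer-learning recipe that many of the reviewed CNN studies broadly follow: fine-tuning an ImageNet-pretrained network on labeled endoscopy frames. The dataset path, class layout, and hyperparameters below are placeholders, not details taken from any reviewed study.

```python
# Hedged sketch: fine-tune a pretrained CNN to label endoscopy frames as
# Barrett's esophagus vs. normal. Paths and classes are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Expects endoscopy/train/<class>/<frame>.png with classes such as
# "barretts" and "normal" (hypothetical layout).
train_ds = ImageFolder("endoscopy/train", transform=transform)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace head: 2 classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:                # one epoch, for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

A segmentation study would instead pair each frame with a pixel-level mask and swap the classification head for an encoder-decoder such as a U-Net, but the training loop is analogous.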
Affiliation(s)
- Ruichen Cui, Runda Lu, Shixiang Liu, Bowei Liu, Yimin Gu, Hanlu Zhang, Qixin Shang, Longqi Chen, Dong Tian
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Lei Wang, Lin Lin, Jie Li
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
2. Giavina-Bianchi M, Vitor WG, Fornasiero de Paiva V, Okita AL, Sousa RM, Machado B. Explainability agreement between dermatologists and five visual explanations techniques in deep neural networks for melanoma AI classification. Front Med (Lausanne) 2023; 10:1241484. PMID: 37746081; PMCID: PMC10513767; DOI: 10.3389/fmed.2023.1241484.
Abstract
Introduction The use of deep convolutional neural networks for analyzing skin lesion images has shown promising results. Identifying skin cancer by faster and less expensive means can lead to an early diagnosis, saving lives and avoiding treatment costs. However, to implement this technology in a clinical context, specialists must understand why a model makes a given prediction; it must be explainable. Explainability techniques can be used to highlight the patterns of interest behind a prediction. Methods Our goal was to test five techniques, Grad-CAM, Grad-CAM++, Score-CAM, Eigen-CAM, and LIME, by analyzing the agreement rate between the features highlighted by their visual explanation maps and three clinical criteria important for melanoma classification, namely asymmetry, border irregularity, and color heterogeneity (the ABC rule), in 100 melanoma images. Two dermatologists scored the visual maps and the clinical images on a semi-quantitative scale, and the results were compared; they also ranked their preferred techniques. Results The techniques differed in agreement rate and acceptance. In the overall analysis, Grad-CAM showed the best total + partial agreement rate (93.6%), followed by LIME (89.8%), Grad-CAM++ (88.0%), Eigen-CAM (86.4%), and Score-CAM (84.6%). The dermatologists ranked Grad-CAM and Grad-CAM++ as their favorite options, followed by Score-CAM, LIME, and Eigen-CAM. Discussion Saliency maps are among the few methods available for visual explanations. Evaluating explainability with humans is the ideal way to assess the understanding and applicability of these methods. Our results demonstrate significant agreement between the clinical features dermatologists use to diagnose melanomas and the visual explanation techniques, especially Grad-CAM.
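For readers unfamiliar with these techniques, the sketch below reproduces the core Grad-CAM computation (class-score gradients pooled into channel weights over the last convolutional block) directly with PyTorch hooks. The study evaluated existing implementations of the five techniques; this code, its model, and its random input are only an illustrative stand-in.

```python
# Hedged Grad-CAM sketch using forward/backward hooks; model and input are
# placeholders, not the study's melanoma classifier.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V2").eval()
store = {}

def save_activations(module, args, output):
    store["acts"] = output                     # feature maps of the block

def save_gradients(module, grad_input, grad_output):
    store["grads"] = grad_output[0]            # gradients w.r.t. those maps

model.layer4.register_forward_hook(save_activations)
model.layer4.register_full_backward_hook(save_gradients)

x = torch.randn(1, 3, 224, 224)                # stand-in for a lesion image
logits = model(x)
logits[0, logits.argmax()].backward()          # backprop the top-class score

acts, grads = store["acts"], store["grads"]
weights = grads.mean(dim=(2, 3), keepdim=True)           # pooled gradient weights
cam = F.relu((weights * acts).sum(dim=1, keepdim=True))  # weighted sum + ReLU
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```

Grad-CAM++ and Score-CAM change only how the channel weights are computed, which is why the dermatologists' rankings of the resulting maps can differ while the rest of the pipeline stays the same.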
3. McDonnell KJ. Leveraging the Academic Artificial Intelligence Silecosystem to Advance the Community Oncology Enterprise. J Clin Med 2023; 12:4830. PMID: 37510945; PMCID: PMC10381436; DOI: 10.3390/jcm12144830.
Abstract
Over the last 75 years, artificial intelligence has evolved from a theoretical concept and novel paradigm describing the role that computers might play in our society to a tool with which we engage daily. In this review, we describe AI in terms of its constituent elements, the synthesis of which we refer to as the AI Silecosystem. We provide a historical perspective on the evolution of the AI Silecosystem, conceptualized and summarized as a Kuhnian paradigm. This manuscript focuses on the role the AI Silecosystem plays in oncology and its emerging importance in the care of the community oncology patient. We observe that this important role arises from a unique alliance between the academic oncology enterprise and community oncology practices. We provide evidence of this alliance by illustrating the practical establishment of the AI Silecosystem at the City of Hope Comprehensive Cancer Center and its utilization by community oncology provider teams.
Affiliation(s)
- Kevin J McDonnell
- Center for Precision Medicine, Department of Medical Oncology & Therapeutics Research, City of Hope Comprehensive Cancer Center, Duarte, CA 91010, USA
4. Borys K, Schmitt YA, Nauta M, Seifert C, Krämer N, Friedrich CM, Nensa F. Explainable AI in medical imaging: An overview for clinical practitioners – Saliency-based XAI approaches. Eur J Radiol 2023; 162:110787. PMID: 37001254; DOI: 10.1016/j.ejrad.2023.110787.
Abstract
Artificial intelligence (AI) has achieved significant success and promising results across many fields of application during the last decade, and it has become an essential part of medical research. Improving data availability, coupled with advances in high-performance computing and innovative algorithms, has increased AI's potential in various respects. Because AI is rapidly reshaping research and promoting the development of personalized clinical care, its implementation creates an urgent need for a deep understanding of its inner workings, especially in high-stakes domains. Such systems can be highly complex and opaque, however, limiting the possibility of an immediate understanding of their decisions. In the medical field, these decisions carry high impact: physicians and patients can only fully trust AI systems that reasonably communicate the origin of their results, which also enables the identification of errors and biases. Explainable AI (XAI), an increasingly important field of research in recent years, promotes the formulation of explainability methods and provides a rationale that allows users to comprehend the results generated by AI systems. In this paper, we investigate the application of XAI in medical imaging, addressing a broad audience, especially healthcare professionals. The content focuses on definitions and taxonomies, standard methods and approaches, advantages, limitations, and examples representing the current state of research on XAI in medical imaging. This paper focuses on saliency-based XAI methods, where the explanation can be provided directly on the input data (image) and which are naturally of special importance in medical imaging.
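As a concrete example of the saliency family this overview covers, the following is a minimal sketch of the simplest member, a vanilla gradient map: the derivative of the top class score with respect to the input pixels. The model and random input are placeholders; the paper surveys many more refined variants.

```python
# Hedged sketch: vanilla gradient saliency with a placeholder model/input.
import torch
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in medical image

logits = model(x)
logits[0, logits.argmax()].backward()   # backprop the predicted-class score

# Per-pixel importance: largest absolute gradient across color channels.
saliency = x.grad.abs().amax(dim=1).squeeze(0)       # shape (224, 224)
```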
5. Jin W, Li X, Fatehi M, Hamarneh G. Guidelines and evaluation of clinical explainable AI in medical image analysis. Med Image Anal 2023; 84:102684. PMID: 36516555; DOI: 10.1016/j.media.2022.102684.
Abstract
Explainable artificial intelligence (XAI) is essential for enabling clinical users to get informed decision support from AI and comply with evidence-based medical practice. Applying XAI in clinical settings requires proper evaluation criteria to ensure the explanation technique is both technically sound and clinically useful, but specific support for achieving this goal is lacking. To bridge the research gap, we propose the Clinical XAI Guidelines, which consist of five criteria for which a clinical XAI system should be optimized. The guidelines recommend choosing an explanation form based on Guideline 1 (G1), Understandability, and G2, Clinical relevance. For the chosen explanation form, the specific XAI technique should be optimized for G3, Truthfulness, G4, Informative plausibility, and G5, Computational efficiency. Following the guidelines, we conducted a systematic evaluation on a novel problem of multi-modal medical image explanation with two clinical tasks, and proposed new evaluation metrics accordingly. Sixteen commonly used heatmap XAI techniques were evaluated and found to be insufficient for clinical use due to their failure on G3 and G4. Our evaluation demonstrates how the Clinical XAI Guidelines can support the design and evaluation of clinically viable XAI.
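The paper proposes its own metrics, but a generic perturbation check conveys the spirit of G3, Truthfulness: occlude the pixels a heatmap marks as most salient and measure how much the model's confidence in its original prediction drops. The sketch below is only that generic idea under stated assumptions, not the paper's evaluation protocol.

```python
# Hedged sketch of a deletion-style truthfulness probe; not the paper's metric.
import torch

@torch.no_grad()
def deletion_drop(model, x, saliency, frac=0.1):
    """Confidence drop after zeroing the top `frac` most salient pixels.

    x: (1, C, H, W) input; saliency: (H, W) heatmap computed for x.
    """
    probs = model(x).softmax(dim=-1)
    cls = probs.argmax()                          # class being explained
    base = probs[0, cls].item()
    k = max(1, int(frac * saliency.numel()))
    top = saliency.flatten().topk(k).indices      # most salient positions
    mask = torch.ones(saliency.numel())
    mask[top] = 0.0
    x_occluded = x * mask.view(1, 1, *saliency.shape)  # broadcast over channels
    return base - model(x_occluded).softmax(dim=-1)[0, cls].item()
```

Under this reading, a heatmap that is truthful to the model should produce a large confidence drop even at small `frac`.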
Affiliation(s)
- Weina Jin
- School of Computing Science, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada
- Xiaoxiao Li
- Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, V6T 1Z4, Canada
- Mostafa Fatehi
- Division of Neurosurgery, The University of British Columbia, Vancouver, BC, V5Z 1M9, Canada
- Ghassan Hamarneh
- School of Computing Science, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada
6. Loh HW, Ooi CP, Seoni S, Barua PD, Molinari F, Acharya UR. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022). Comput Methods Programs Biomed 2022; 226:107161. PMID: 36228495; DOI: 10.1016/j.cmpb.2022.107161.
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) has branched out to various applications in healthcare, such as health services management, predictive medicine, clinical decision-making, and patient data and diagnostics. Although AI models have achieved human-like performance, their use is still limited because they are seen as black boxes. This lack of trust remains the main reason for their low use in practice, especially in healthcare. Hence, explainable artificial intelligence (XAI) has been introduced as a technique that can provide confidence in a model's prediction by explaining how the prediction is derived, thereby encouraging the use of AI systems in healthcare. The primary goal of this review is to identify areas of healthcare that require more attention from the XAI research community. METHODS Multiple journal databases were thoroughly searched following the PRISMA 2020 guidelines. Studies that did not appear in highly credible Q1 journals were excluded. RESULTS In this review, we surveyed 99 Q1 articles covering the following XAI techniques: SHAP, LIME, GradCAM, LRP, fuzzy classifiers, EBM, CBR, rule-based systems, and others. CONCLUSION We found that detecting abnormalities in 1D biosignals and identifying key text in clinical notes are areas that require more attention from the XAI research community. We hope this review will encourage the development of a holistic cloud system for a smart city.
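To illustrate the style of XAI usage the surveyed articles report, the sketch below applies SHAP to a tree-based model on synthetic stand-in data for tabular clinical features; it is not drawn from any particular reviewed study.

```python
# Hedged sketch: Shapley-value explanations for a tabular model on fake data.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                       # stand-in clinical features
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)  # synthetic risk

model = RandomForestRegressor(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)    # efficient Shapley values for trees
shap_values = explainer.shap_values(X)   # (n_samples, n_features) contributions
shap.summary_plot(shap_values, X, show=False)  # global feature-importance view
```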
Affiliation(s)
- Hui Wen Loh
- School of Science and Technology, Singapore University of Social Sciences, Singapore
- Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore
- Silvia Seoni
- Department of Electronics and Telecommunications, Biolab, Politecnico di Torino, Torino 10129, Italy
- Prabal Datta Barua
- Faculty of Engineering and Information Technology, University of Technology Sydney, Australia; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia
- Filippo Molinari
- Department of Electronics and Telecommunications, Biolab, Politecnico di Torino, Torino 10129, Italy
- U Rajendra Acharya
- School of Science and Technology, Singapore University of Social Sciences, Singapore; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia; School of Engineering, Ngee Ann Polytechnic, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taiwan; Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto, Japan
7. A Pipeline for the Implementation and Visualization of Explainable Machine Learning for Medical Imaging Using Radiomics Features. Sensors (Basel) 2022; 22:5205. PMID: 35890885; PMCID: PMC9318445; DOI: 10.3390/s22145205.
Abstract
Machine learning (ML) models have been shown to predict the presence of clinical factors from medical imaging with remarkable accuracy. However, these complex models can be difficult to interpret and are often criticized as “black boxes”. Prediction models that provide no insight into how their predictions are obtained are difficult to trust for making important clinical decisions, such as medical diagnoses or treatment choices. Explainable machine learning (XML) methods, such as Shapley values, have made it possible to explain the behavior of ML algorithms and to identify which predictors contribute most to a prediction. Incorporating XML methods into medical software tools has the potential to increase trust in ML-powered predictions and to aid physicians in making medical decisions. In medical imaging analysis specifically, the most widely used methods for explaining deep learning-based model predictions are saliency maps, which highlight important areas of an image but do not provide a straightforward interpretation of which qualities of an image area are important. Here, we describe a novel pipeline for XML imaging that uses radiomics data and Shapley values to explain outcome predictions from complex prediction models built on medical imaging and well-defined predictors. We present a visualization of XML imaging results in a clinician-focused dashboard that can be generalized to various settings. We demonstrate the use of this workflow for developing and explaining a prediction model that uses MRI data from glioma patients to predict a genetic mutation.
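A minimal sketch of the pipeline's two key steps might look as follows, assuming the pyradiomics and shap packages; the file paths, cohort table, and labels are hypothetical placeholders, and the clinician-facing dashboard layer is omitted.

```python
# Hedged sketch: radiomics feature extraction plus Shapley-value explanation.
import pandas as pd
import shap
from radiomics import featureextractor
from sklearn.ensemble import GradientBoostingClassifier

def extract_features(image_path: str, mask_path: str) -> dict:
    """Compute standard radiomics features for one image/mask pair."""
    extractor = featureextractor.RadiomicsFeatureExtractor()
    result = extractor.execute(image_path, mask_path)
    # Keep numeric feature values; drop pyradiomics' diagnostic metadata.
    return {k: float(v) for k, v in result.items() if k.startswith("original_")}

# Hypothetical cohort: one feature row per patient, e.g.
# cohort = pd.DataFrame([extract_features(img, msk) for img, msk in pairs])
cohort = pd.DataFrame({"original_firstorder_Mean":  [1.2, 0.7, 1.9, 0.4],
                       "original_shape_Sphericity": [0.8, 0.6, 0.9, 0.5]})
labels = [1, 0, 1, 0]  # placeholder mutation present/absent labels

model = GradientBoostingClassifier().fit(cohort, labels)
shap_values = shap.TreeExplainer(model).shap_values(cohort)  # named features
```

Because each predictor is a named radiomics quantity rather than a pixel region, the resulting Shapley values can be read directly (for instance, low sphericity pushing a prediction toward mutation), which is the interpretability gain over saliency maps that the authors emphasize.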
8. Unboxing Deep Learning Model of Food Delivery Service Reviews Using Explainable Artificial Intelligence (XAI) Technique. Foods 2022; 11:2019. PMID: 35885262; PMCID: PMC9320924; DOI: 10.3390/foods11142019.
Abstract
The demand for food delivery services (FDSs) during the COVID-19 crisis has been fuelled by consumers who prefer to order meals online and have them delivered to their door rather than wait at a restaurant. Since many restaurants moved online and joined FDSs such as Uber Eats, Menulog, and Deliveroo, customer reviews on internet platforms have become a valuable source of information about a company’s performance. FDS organisations strive to collect customer complaints and effectively utilise the information to identify the improvements needed to enhance customer satisfaction. However, only a few customer opinions are addressed because of the large amount of customer feedback data and the lack of customer service consultants. Instead of relying on customer service experts to read each review, organisations can use artificial intelligence (AI) to find solutions on their own and save money. Based on the literature, deep learning (DL) methods have shown remarkable results in obtaining better accuracy when working with large datasets in other domains, but they lack explainability. Rapid research on explainable AI (XAI) to explain predictions made by opaque models looks promising but remains to be explored in the FDS domain. This study conducted a sentiment analysis comparing simple and hybrid DL techniques (LSTM, Bi-LSTM, and Bi-GRU-LSTM-CNN) in the FDS domain and explained the predictions using SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). The DL models were trained and tested on a customer review dataset extracted from the ProductReview website. Results showed that the LSTM, Bi-LSTM, and Bi-GRU-LSTM-CNN models achieved accuracies of 96.07%, 95.85%, and 96.33%, respectively. Because FDS organisations aim to identify and address every customer complaint, the model should exhibit as few false negatives as possible; the LSTM model was therefore chosen over Bi-LSTM and Bi-GRU-LSTM-CNN due to its lower false-negative rate. XAI techniques such as SHAP and LIME revealed the contribution of individual words to positive and negative sentiment, which was used to validate the model.
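A minimal sketch of the kind of LSTM sentiment classifier the study selected is shown below in Keras; the vocabulary size, architecture details, and random stand-in data are placeholders rather than the study's configuration, and the SHAP/LIME step is omitted.

```python
# Hedged sketch: a small LSTM review-sentiment model on stand-in data.
import numpy as np
from tensorflow.keras import layers, metrics, models

VOCAB, MAXLEN = 20_000, 100

model = models.Sequential([
    layers.Embedding(VOCAB, 128),           # token embeddings
    layers.LSTM(64),                        # Bidirectional(layers.LSTM(64)) for Bi-LSTM
    layers.Dense(1, activation="sigmoid"),  # positive vs. negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", metrics.Recall()])  # recall exposes false negatives

# Stand-in data: integer-encoded, padded review token sequences.
X = np.random.randint(0, VOCAB, size=(256, MAXLEN))
y = np.random.randint(0, 2, size=(256,))
model.fit(X, y, epochs=1, batch_size=32)
```

Monitoring recall mirrors the study's selection criterion, since higher recall means fewer false negatives, that is, fewer missed customer complaints.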
9. Wang Y, Zhu C, Wang Y, Sun J, Ling D, Wang L. Survival risk prediction model for ESCC based on relief feature selection and CNN. Comput Biol Med 2022; 145:105460. PMID: 35364307; DOI: 10.1016/j.compbiomed.2022.105460.
Abstract
Esophageal squamous cell carcinoma (ESCC) is a common malignant tumor of the digestive system with poor prognosis and high mortality. Predicting the prognostic risk of patients with cancer from medical pathology information is of great significance. To take full advantage of the clinicopathological information of ESCC patients and improve the accuracy of postoperative survival risk prediction, this paper proposes an ESCC survival risk prediction model based on Relief feature selection and a convolutional neural network (CNN). First, statistical analysis and the Relief feature selection algorithm are used to extract the important risk factors related to patient survival. Then, a one-dimensional convolutional neural network (1D-CNN) is used to build the survival risk prediction model for patients with esophageal cancer. Finally, data on patients with esophageal cancer provided by the First Affiliated Hospital of Zhengzhou University are used to assess the performance of the model. The results show that the proposed model achieves high accuracy and can effectively predict a patient's postoperative survival risk from clinical phenotypic indices.
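A sketch of the two-stage design under stated assumptions follows: ReliefF-style feature selection (here via the third-party skrebate package) feeding a small 1D-CNN risk classifier, trained on synthetic stand-in data rather than the hospital cohort.

```python
# Hedged sketch: Relief-based feature selection followed by a 1D-CNN.
import numpy as np
from skrebate import ReliefF
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40)).astype(np.float32)  # 40 stand-in clinical features
y = rng.integers(0, 2, size=300)                   # high vs. low survival risk

selector = ReliefF(n_features_to_select=16, n_neighbors=20)
X_sel = selector.fit_transform(X, y)               # keep the 16 most relevant features

cnn = models.Sequential([
    layers.Reshape((16, 1), input_shape=(16,)),    # treat features as a 1D signal
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),         # survival-risk probability
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(X_sel, y, epochs=5, batch_size=32, verbose=0)
```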
Affiliation(s)
- Yanfeng Wang
- School of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou, 45002, China
- Chuanqian Zhu
- School of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou, 45002, China
- Yan Wang
- School of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou, 45002, China
- Junwei Sun
- School of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou, 45002, China
- Dan Ling
- School of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou, 45002, China
- Lidong Wang
- State Key Laboratory of Esophageal Cancer Prevention and Treatment and Henan Key Laboratory for Esophageal Cancer Research, The First Affiliated Hospital, Zhengzhou University, Zhengzhou, 450052, China