1. Qadir MI, Baril JA, Yip-Schneider MT, Schonlau D, Tran TTT, Schmidt CM, Kolbinger FR. Artificial Intelligence in Pancreatic Intraductal Papillary Mucinous Neoplasm Imaging: A Systematic Review. medRxiv 2025:2025.01.08.25320130. PMID: 39830259; PMCID: PMC11741484; DOI: 10.1101/2025.01.08.25320130
Abstract
Background: Based on the Fukuoka and Kyoto international consensus guidelines, the current clinical management of intraductal papillary mucinous neoplasm (IPMN) largely depends on imaging features. While these criteria are highly sensitive in detecting high-risk IPMN, they lack specificity, resulting in surgical overtreatment. Artificial intelligence (AI)-based medical image analysis has the potential to augment the clinical management of IPMNs by improving diagnostic accuracy.
Methods: In a systematic review of the academic literature on AI in IPMN imaging, 1041 publications were identified, of which 25 published studies were included in the analysis. The studies were stratified by prediction target, underlying data type and imaging modality, patient cohort size, and stage of clinical translation, and were subsequently analyzed to identify trends and gaps in the field.
Results: Research on AI in IPMN imaging has increased in recent years. The majority of studies utilized CT imaging to train computational models. Most studies presented computational models developed on single-center datasets (n=11, 44%) and included fewer than 250 patients (n=18, 72%). Methodologically, convolutional neural network (CNN)-based algorithms were most commonly used. Thematically, most studies reported models augmenting differential diagnosis (n=9, 36%) or risk stratification (n=10, 40%) rather than IPMN detection (n=5, 20%) or IPMN segmentation (n=2, 8%).
Conclusion: This systematic review provides a comprehensive overview of the research landscape of AI in IPMN imaging. Computational models have the potential to enhance the accurate and precise stratification of patients with IPMN. Multicenter collaboration and datasets comprising various modalities are necessary to fully utilize this potential, alongside concerted efforts toward clinical translation.
Affiliation(s)
- Jackson A. Baril
  - Division of Surgical Oncology, Department of Surgery, Indiana University School of Medicine, Indianapolis, IN, USA
- Michele T. Yip-Schneider
  - Division of Surgical Oncology, Department of Surgery, Indiana University School of Medicine, Indianapolis, IN, USA
- Duane Schonlau
  - Department of Radiology, Indiana University School of Medicine, Indianapolis, IN, USA
- Thi Thanh Thoa Tran
  - Division of Surgical Oncology, Department of Surgery, Indiana University School of Medicine, Indianapolis, IN, USA
- C. Max Schmidt
  - Division of Surgical Oncology, Department of Surgery, Indiana University School of Medicine, Indianapolis, IN, USA
  - Department of Biochemistry and Molecular Biology, Indiana University School of Medicine, Indianapolis, IN, USA
- Fiona R. Kolbinger
  - Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
  - Regenstrief Center for Healthcare Engineering (RCHE), Purdue University, West Lafayette, IN, USA
  - Department of Biostatistics and Health Data Science, Richard M. Fairbanks School of Public Health, Indiana University, Indianapolis, IN, USA
2. Wang J, Zhou Y, Zhou J, Liu H, Li X. Preliminary study on the ability of the machine learning models based on 18F-FDG PET/CT to differentiate between mass-forming pancreatic lymphoma and pancreatic carcinoma. Eur J Radiol 2024; 176:111531. PMID: 38820949; DOI: 10.1016/j.ejrad.2024.111531
Abstract
PURPOSE The objective of this study was to preliminarily assess the ability of metabolic parameters and radiomics derived from 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) to distinguish mass-forming pancreatic lymphoma from pancreatic carcinoma using machine learning.
METHODS A total of 88 lesions from 86 patients diagnosed with mass-forming pancreatic lymphoma or pancreatic carcinoma were included and randomly divided into a training set and a validation set at a 4:1 ratio. Regions of interest were segmented using ITK-SNAP software, and PET metabolic parameters and radiomics features were extracted using 3D Slicer and Python. Following the selection of optimal metabolic parameters and radiomics features, logistic regression (LR), support vector machine (SVM), and random forest (RF) models were constructed for PET metabolic parameters, CT radiomics, PET radiomics, and combined PET/CT radiomics. Model performance was assessed in terms of area under the curve (AUC), accuracy, sensitivity, and specificity in both the training and validation sets.
RESULTS Strong discriminative ability was observed in all models, with AUC values ranging from 0.727 to 0.978. The highest performance was exhibited by the combined PET and CT radiomics features: AUC values for the PET/CT radiomics models in the training set were 0.994 (LR), 0.994 (SVM), and 0.989 (RF); in the validation set, they were 0.909 (LR), 0.883 (SVM), and 0.844 (RF).
CONCLUSION Machine learning models utilizing the metabolic parameters and radiomics of 18F-FDG PET/CT show promise in distinguishing between pancreatic carcinoma and mass-forming pancreatic lymphoma. Further validation on a larger cohort is necessary before practical implementation in clinical settings.
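The AUC values reported above are the study's central performance metric. As a generic illustration of how such a value is computed from classifier scores (this is not the authors' code, and the example labels and scores are invented), the Mann-Whitney formulation gives the AUC directly as the probability that a random positive case outscores a random negative one:

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative score pairs in which the positive case scores
    higher (ties counted as 0.5)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Compare every positive score against every negative score
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical scores for 4 lymphoma (label 1) and 4 carcinoma (label 0) lesions
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.75, 0.4, 0.6, 0.3, 0.2, 0.1]
print(auc_mann_whitney(labels, scores))  # → 0.9375
```

One misranked lesion (the positive scored 0.4, below a negative's 0.6) pulls the AUC below the perfect-separation value of 1.0.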
Affiliation(s)
- Jian Wang
  - Department of Nuclear Medicine, Qilu Hospital of Shandong University, Jinan, China
  - Department of Nuclear Medicine, Dezhou People's Hospital, Dezhou, China
- Yujing Zhou
  - Department of Nuclear Medicine, Qilu Hospital of Shandong University, Jinan, China
- Jianli Zhou
  - Department of Nuclear Medicine, Dezhou People's Hospital, Dezhou, China
- Hongwei Liu
  - Department of Nuclear Medicine, Dezhou People's Hospital, Dezhou, China
- Xin Li
  - Department of Nuclear Medicine, Qilu Hospital of Shandong University, Jinan, China
3. Ibragimov B, Mello-Thoms C. The Use of Machine Learning in Eye Tracking Studies in Medical Imaging: A Review. IEEE J Biomed Health Inform 2024; 28:3597-3612. PMID: 38421842; PMCID: PMC11262011; DOI: 10.1109/jbhi.2024.3371893
Abstract
Machine learning (ML) has revolutionized medical image-based diagnostics. In this review, we cover a rapidly emerging field that could be significantly impacted by ML: eye tracking in medical imaging. The review investigates the clinical, algorithmic, and hardware properties of the existing studies. In particular, it evaluates (1) the type of eye-tracking equipment used and how the equipment aligns with study aims; (2) the software required to record and process eye-tracking data, which often requires user-interface development, controller command, and voice recording; (3) the ML methodology utilized, depending on the anatomy of interest, gaze data representation, and target clinical application. The review concludes with a summary of recommendations for future studies and confirms that the inclusion of gaze data broadens the applicability of ML in radiology from computer-aided diagnosis (CAD) to gaze-based image annotation, physicians' error detection, fatigue recognition, and other areas of potentially high research and clinical impact.
4. Neves J, Hsieh C, Nobre IB, Sousa SC, Ouyang C, Maciel A, Duchowski A, Jorge J, Moreira C. Shedding light on AI in radiology: A systematic review and taxonomy of eye gaze-driven interpretability in deep learning. Eur J Radiol 2024; 172:111341. PMID: 38340426; DOI: 10.1016/j.ejrad.2024.111341
Abstract
X-ray imaging plays a crucial role in diagnostic medicine. Yet, a significant portion of the global population lacks access to this essential technology due to a shortage of trained radiologists. Eye-tracking data and deep learning models can enhance X-ray analysis by mapping expert focus areas, guiding automated anomaly detection, optimizing workflow efficiency, and bolstering training methods for novice radiologists. However, the literature shows contradictory results regarding the usefulness of eye-tracking data in deep-learning architectures for abnormality detection. We argue that these discrepancies between studies are due to (a) the way eye-tracking data is (or is not) processed, (b) the types of deep learning architectures chosen, and (c) the type of application these architectures will have. We conducted a systematic literature review using PRISMA to address these contradictory results. We analyzed 60 studies that incorporated eye-tracking data in a deep-learning approach for different application goals in radiology. We performed a comparative analysis to understand whether eye gaze data contains feature maps that can be useful under a deep learning approach and whether they can promote more interpretable predictions. To the best of our knowledge, this is the first survey in the area that performs a thorough investigation of eye gaze data processing techniques and their impacts on different deep learning architectures for applications such as error detection, classification, object detection, expertise-level analysis, fatigue estimation, and human attention prediction in medical imaging data.
Our analysis resulted in two main contributions: (1) a taxonomy that first divides the literature by task, enabling us to analyze the value eye movement can bring to each case and to build guidelines regarding architectures and gaze processing techniques adequate for each application, and (2) an overall analysis of how eye gaze data can promote explainability in radiology.
Affiliation(s)
- José Neves
  - Instituto Superior Técnico / INESC-ID, University of Lisbon, Portugal
- Chihcheng Hsieh
  - School of Information Systems, Queensland University of Technology, Australia
- Chun Ouyang
  - School of Information Systems, Queensland University of Technology, Australia
- Anderson Maciel
  - Instituto Superior Técnico / INESC-ID, University of Lisbon, Portugal
- Joaquim Jorge
  - Instituto Superior Técnico / INESC-ID, University of Lisbon, Portugal
- Catarina Moreira
  - Human Technology Institute, University of Technology Sydney, Australia
5. Jiang S, Wang T, Zhang KH. Data-driven decision-making for precision diagnosis of digestive diseases. Biomed Eng Online 2023; 22:87. PMID: 37658345; PMCID: PMC10472739; DOI: 10.1186/s12938-023-01148-1
Abstract
Modern omics technologies can generate massive amounts of biomedical data, providing unprecedented opportunities for individualized precision medicine. However, traditional statistical methods cannot effectively process and utilize such big data. To meet this challenge, machine learning algorithms have been developed and applied rapidly in recent years; they are capable of reducing dimensionality, extracting features, organizing data, and forming automatable data-driven clinical decision systems. Data-driven clinical decision-making has promising applications in precision medicine and has been studied in digestive diseases, including early diagnosis and screening, molecular typing, staging and stratification of digestive malignancies, precise diagnosis of Crohn's disease, auxiliary diagnosis in imaging and endoscopy, differential diagnosis of cystic lesions, etiology discrimination of acute abdominal pain, stratification of upper gastrointestinal bleeding (UGIB), and real-time diagnosis of esophageal motility function, showing good application prospects. Herein, we review the recent progress of data-driven clinical decision-making in the precision diagnosis of digestive diseases and discuss its limitations, after a brief introduction of methods for data-driven decision-making.
Affiliation(s)
- Song Jiang
  - Department of Gastroenterology, The First Affiliated Hospital of Nanchang University, No. 17, Yongwai Zheng Street, Nanchang, 330006, China
  - Jiangxi Institute of Gastroenterology and Hepatology, Nanchang, 330006, China
- Ting Wang
  - Department of Gastroenterology, The First Affiliated Hospital of Nanchang University, No. 17, Yongwai Zheng Street, Nanchang, 330006, China
  - Jiangxi Institute of Gastroenterology and Hepatology, Nanchang, 330006, China
- Kun-He Zhang
  - Department of Gastroenterology, The First Affiliated Hospital of Nanchang University, No. 17, Yongwai Zheng Street, Nanchang, 330006, China
  - Jiangxi Institute of Gastroenterology and Hepatology, Nanchang, 330006, China
6. Fu J, He B, Yang J, Liu J, Ouyang A, Wang Y. CDRNet: Cascaded dense residual network for grayscale and pseudocolor medical image fusion. Comput Methods Programs Biomed 2023; 234:107506. PMID: 37003041; DOI: 10.1016/j.cmpb.2023.107506
Abstract
OBJECTIVE Multimodal medical fusion images are widely used in clinical medicine, computer-aided diagnosis, and other fields. However, existing multimodal medical image fusion algorithms generally have shortcomings such as complex calculations, blurred details, and poor adaptability. To address these problems, we propose a cascaded dense residual network and use it for grayscale and pseudocolor medical image fusion.
METHODS The cascaded dense residual network uses a multiscale dense network and a residual network as the basic architecture, and a multilevel converged network is obtained through cascading. The network contains three subnetworks: the first-level network takes two images of different modalities as input and produces fused Image 1, the second-level network takes fused Image 1 as input and produces fused Image 2, and the third-level network takes fused Image 2 as input and produces fused Image 3. The multimodal medical images are trained through each level of the network, and the output fusion image is enhanced step by step.
RESULTS As the number of cascaded networks increases, the fusion image becomes increasingly clear. In numerous fusion experiments, the fused images produced by the proposed algorithm showed higher edge strength, richer details, and better performance on the objective indicators than the reference algorithms.
CONCLUSION Compared with the reference algorithms, the proposed algorithm preserves more of the original information and achieves higher edge strength, richer details, and improvements in the four objective indicator metrics SF, AG, MZ, and EN.
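The cascade described above feeds each level's fused output into the next level. The following numpy sketch illustrates only that data flow with an invented toy rule; `refine` is a hypothetical placeholder for one learned dense residual subnetwork, not the authors' architecture:

```python
import numpy as np

def refine(fused, img_a, img_b, weight=0.5):
    """Hypothetical stand-in for one cascade level: blend the current
    fused estimate with a per-pixel maximum of the two modalities.
    In CDRNet this step is a trained dense residual subnetwork."""
    detail = np.maximum(img_a, img_b)  # crude detail proxy
    return weight * fused + (1.0 - weight) * detail

def cascaded_fusion(img_a, img_b, levels=3):
    """Three-level cascade: level 1 fuses the two input modalities,
    and each later level refines the previous level's fused image."""
    fused = 0.5 * (img_a + img_b)  # naive initial fusion
    for _ in range(levels):
        fused = refine(fused, img_a, img_b)
    return fused

# Two toy 4x4 "modalities" with intensities in [0, 1]
rng = np.random.default_rng(0)
a, b = rng.random((4, 4)), rng.random((4, 4))
out = cascaded_fusion(a, b)
```

Because each level is a convex combination, the toy output stays in [0, 1] and moves monotonically from the naive average toward the per-pixel maximum, mirroring the paper's claim that the fused image sharpens step by step.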
Affiliation(s)
- Jun Fu
  - School of Information Engineering, Zunyi Normal University, Zunyi, Guizhou, 563006, China
- Baiqing He
  - Nanchang Institute of Technology, Nanchang, Jiangxi, 330044, China
- Jie Yang
  - School of Information Engineering, Zunyi Normal University, Zunyi, Guizhou, 563006, China
- Jianpeng Liu
  - School of Science, East China Jiaotong University, Nanchang, Jiangxi, 330013, China
- Aijia Ouyang
  - School of Information Engineering, Zunyi Normal University, Zunyi, Guizhou, 563006, China
- Ya Wang
  - School of Information Engineering, Zunyi Normal University, Zunyi, Guizhou, 563006, China
7. Afzal S, Ghani S, Hittawe MM, Rashid SF, Knio OM, Hadwiger M, Hoteit I. Visualization and Visual Analytics Approaches for Image and Video Datasets: A Survey. ACM Trans Interact Intell Syst 2023. DOI: 10.1145/3576935
Abstract
Image and video data analysis has become an increasingly important research area with applications in different domains such as security surveillance, healthcare, augmented and virtual reality, video and image editing, activity analysis and recognition, synthetic content generation, distance education, telepresence, remote sensing, sports analytics, art, non-photorealistic rendering, search engines, and social media. Recent advances in Artificial Intelligence (AI) and particularly deep learning have sparked new research challenges and led to significant advancements, especially in image and video analysis. These advancements have also resulted in significant research and development in other areas such as visualization and visual analytics, and have created new opportunities for future lines of research. In this survey paper, we present the current state of the art at the intersection of visualization and visual analytics, and image and video data analysis. We categorize the visualization papers included in our survey based on different taxonomies used in visualization and visual analytics research. We review these papers in terms of task requirements, tools, datasets, and application areas. We also discuss insights based on our survey results, trends and patterns, the current focus of visualization research, and opportunities for future research.
Affiliation(s)
- Shehzad Afzal
  - King Abdullah University of Science & Technology, Saudi Arabia
- Sohaib Ghani
  - King Abdullah University of Science & Technology, Saudi Arabia
- Omar M. Knio
  - King Abdullah University of Science & Technology, Saudi Arabia
- Markus Hadwiger
  - King Abdullah University of Science & Technology, Saudi Arabia
- Ibrahim Hoteit
  - King Abdullah University of Science & Technology, Saudi Arabia
8. Anta JA, Martínez-Ballestero I, Eiroa D, García J, Rodríguez-Comas J. Artificial intelligence for the detection of pancreatic lesions. Int J Comput Assist Radiol Surg 2022; 17:1855-1865. PMID: 35951286; DOI: 10.1007/s11548-022-02706-z
Abstract
PURPOSE Pancreatic cancer is one of the most lethal neoplasms among common cancers worldwide, and pancreatic cystic lesions (PCLs) are well-known precursors of this type of cancer. Artificial intelligence (AI) could help to improve and speed up the detection and classification of pancreatic lesions. The aim of this review is to summarize the articles addressing the diagnostic yield of AI applied to medical imaging (computed tomography [CT] and/or magnetic resonance [MR]) for the detection of pancreatic cancer and PCLs.
METHODS We performed a comprehensive literature search using PubMed, EMBASE, and Scopus (from January 2010 to April 2021) to identify full articles evaluating the diagnostic accuracy of AI-based methods processing CT or MR images to detect pancreatic ductal adenocarcinoma (PDAC) or PCLs.
RESULTS We found 20 studies meeting our inclusion criteria. Most of the AI-based systems used were convolutional neural networks. Ten studies addressed the use of AI to detect PDAC, eight studies aimed to detect and classify PCLs, and four aimed to predict the presence of high-grade dysplasia or cancer.
CONCLUSION AI techniques have been shown to be a promising tool that is expected to be helpful for most radiologists' tasks. However, methodologic concerns must be addressed, and prospective clinical studies should be carried out before implementation in clinical practice.
Affiliation(s)
- Julia Arribas Anta
  - Scientific and Technical Department, Sycai Technologies S.L., Carrer Roc Boronat 117, MediaTIC Building, 08018, Barcelona, Spain
  - Department of Gastroenterology, University Hospital 12 Octubre, Av. de Córdoba, s/n, 28041, Madrid, Spain
- Iván Martínez-Ballestero
  - Scientific and Technical Department, Sycai Technologies S.L., Carrer Roc Boronat 117, MediaTIC Building, 08018, Barcelona, Spain
- Daniel Eiroa
  - Scientific and Technical Department, Sycai Technologies S.L., Carrer Roc Boronat 117, MediaTIC Building, 08018, Barcelona, Spain
  - Department of Radiology, Institut de Diagnòstic per la Imatge (IDI), Hospital Universitari Vall d'Hebrón, Passeig de la Vall d'Hebron, 119-129, 08035, Barcelona, Spain
- Javier García
  - Scientific and Technical Department, Sycai Technologies S.L., Carrer Roc Boronat 117, MediaTIC Building, 08018, Barcelona, Spain
- Júlia Rodríguez-Comas
  - Scientific and Technical Department, Sycai Technologies S.L., Carrer Roc Boronat 117, MediaTIC Building, 08018, Barcelona, Spain
9. Jadhav S, Deng G, Zawin M, Kaufman AE. COVID-view: Diagnosis of COVID-19 using Chest CT. IEEE Trans Vis Comput Graph 2022; 28:227-237. PMID: 34587075; PMCID: PMC8981756; DOI: 10.1109/tvcg.2021.3114851
Abstract
Significant work has been done toward deep learning (DL) models for automatic lung and lesion segmentation and classification of COVID-19 on chest CT data. However, comprehensive visualization systems focused on supporting the dual visual-plus-DL diagnosis of COVID-19 are non-existent. We present COVID-view, a visualization application specially tailored for radiologists to diagnose COVID-19 from chest CT data. The system incorporates a complete pipeline of automatic lung segmentation and localization/isolation of lung abnormalities, followed by visualization, visual and DL analysis, and measurement/quantification tools. Our system combines the traditional 2D workflow of radiologists with newer 2D and 3D visualization techniques, with DL support for a more comprehensive diagnosis. COVID-view incorporates a novel DL model for classifying patients into positive/negative COVID-19 cases; the model acts as a reading aid for the radiologist and provides an attention heatmap as an explainable-DL view of the model output. We designed and evaluated COVID-view through suggestions, close feedback, and case studies of real-world patient data conducted with expert radiologists who have substantial experience diagnosing chest CT scans for COVID-19, pulmonary embolism, and other forms of lung infection. We present requirements and task analysis for the diagnosis of COVID-19 that motivate our design choices and result in a practical system capable of handling real-world patient cases.
Affiliation(s)
- Gaofeng Deng
  - Department of Computer Science, Stony Brook University, USA
- Marlene Zawin
  - Department of Radiology, Stony Brook University Hospital, USA
10. Chen X, Fu R, Shao Q, Chen Y, Ye Q, Li S, He X, Zhu J. Application of artificial intelligence to pancreatic adenocarcinoma. Front Oncol 2022; 12:960056. PMID: 35936738; PMCID: PMC9353734; DOI: 10.3389/fonc.2022.960056
Abstract
BACKGROUND AND OBJECTIVES Pancreatic cancer (PC) is one of the deadliest cancers worldwide, although substantial advancement has been made in its comprehensive treatment. The development of artificial intelligence (AI) technology has allowed its clinical applications to expand remarkably in recent years. Diverse methods and algorithms are employed by AI to extrapolate new data from clinical records to aid in the treatment of PC. In this review, we summarize AI's use in several aspects of PC diagnosis and therapy, as well as its limitations and potential future research avenues.
METHODS We examined the most recent research on the use of AI in PC. The articles were categorized and examined according to the medical task of their algorithm. Two search engines, PubMed and Google Scholar, were used to screen the articles.
RESULTS Overall, 66 papers published in or after 2001 were selected. Of the four medical tasks (risk assessment, diagnosis, treatment, and prognosis prediction), diagnosis was the most frequently researched, and retrospective single-center studies were the most prevalent. We found that the different medical tasks and algorithms included in the reviewed studies caused the performance of their models to vary greatly. Deep learning algorithms, however, produced excellent results in all of the subdivisions studied.
CONCLUSIONS AI is a promising tool for helping PC patients and may contribute to improved patient outcomes. The integration of humans and AI in clinical medicine is still in its infancy and requires the in-depth cooperation of multidisciplinary personnel.
Affiliation(s)
- Xi Chen
  - Department of General Surgery, Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Ruibiao Fu
  - Department of General Surgery, Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Qian Shao
  - Department of Surgical Ward 1, Ningbo Women and Children's Hospital, Ningbo, China
- Yan Chen
  - Department of General Surgery, Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Qinghuang Ye
  - Department of General Surgery, Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Sheng Li
  - College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Xiongxiong He
  - College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Jinhui Zhu (corresponding author)
  - Department of General Surgery, Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
11. Bakasa W, Viriri S. Pancreatic Cancer Survival Prediction: A Survey of the State-of-the-Art. Comput Math Methods Med 2021; 2021:1188414. PMID: 34630626; PMCID: PMC8497168; DOI: 10.1155/2021/1188414
Abstract
Early detection of cancer increases the chances of survival. Some cancer types, like pancreatic cancer, are challenging to diagnose or detect early, and their stages progress quickly. This paper presents the state-of-the-art techniques used in cancer survival prediction, suggesting how these techniques can be implemented in predicting the overall survival of patients with pancreatic ductal adenocarcinoma (PDAC). Because of the bewildering, high volumes of data involved, recent studies highlight the importance of machine learning (ML) algorithms such as support vector machines and convolutional neural networks. Studies predict that PDAC survival is within the limits of 41.7% at one year, 8.7% at three years, and 1.9% at five years, with no significant correlation found between disease stage and overall survival rate. The implementation of ML algorithms can improve our understanding of cancer progression, but ML methods need an appropriate level of validation before they can be considered in everyday clinical practice. The objective of these techniques is to perform classification, prediction, and estimation. Accurate predictions give pathologists information on the patient's state, the surgical treatment to be done, optimal use of resources, individualized therapy, drugs to prescribe, and better patient management.
Affiliation(s)
- Wilson Bakasa
  - School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa
- Serestina Viriri
  - School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa