1
Zumla A, Ahmed R, Bakhri K. The role of artificial intelligence in the diagnosis, imaging, and treatment of thoracic empyema. Curr Opin Pulm Med 2025; 31:237-242. [PMID: 39711496 DOI: 10.1097/mcp.0000000000001150]
Abstract
PURPOSE OF REVIEW The management of thoracic empyema is often complicated by diagnostic delays, recurrence, treatment failures, and infections with antibiotic-resistant bacteria. The emergence of artificial intelligence (AI) in healthcare, particularly in clinical decision support, imaging, and diagnostic microbiology, raises great expectations for addressing these challenges. RECENT FINDINGS Machine learning (ML) and AI models have been applied to CT scans and chest X-rays to identify and classify pleural effusions and empyema with greater accuracy. AI-based analyses can identify complex imaging features that are often missed by the human eye, improving diagnostic precision. AI-driven decision-support algorithms could reduce time to diagnosis, improve antibiotic stewardship, and enable more precise, less invasive surgical therapy, significantly improving clinical outcomes and reducing inpatient hospital stays. SUMMARY Because ML and AI can analyse large datasets and recognise complex patterns, they have the potential to enhance diagnostic accuracy and preoperative planning for thoracic surgery, and to optimise surgical treatment strategies, antibiotic therapy and stewardship, monitoring of complications, and long-term patient management.
Affiliation(s)
- Adam Zumla
- Royal Bolton Hospital, Bolton NHS Foundation Trust, and University of Bolton School of Medicine, Bolton, Greater Manchester
- Rizwan Ahmed
- Royal Bolton Hospital, Bolton NHS Foundation Trust, and University of Bolton School of Medicine, Bolton, Greater Manchester
- Kunal Bakhri
- Thoracics Department, University College London Hospitals NHS Foundation Trust, Westmoreland Street Hospital, London, UK
2
Parry R, Wright K, Bellinge JW, Ebert MA, Rowshanfarzad P, Francis RJ, Schultz CJ. Training and assessing convolutional neural network performance in automatic vascular segmentation using Ga-68 DOTATATE PET/CT. Int J Cardiovasc Imaging 2024; 40:1847-1861. [PMID: 38967895 PMCID: PMC11473569 DOI: 10.1007/s10554-024-03171-2]
Abstract
To evaluate the performance of a convolutional neural network (nnU-Net) in the assessment of vascular contours, calcification, and PET tracer activity on Ga-68 DOTATATE PET/CT. Patients who underwent Ga-68 DOTATATE PET/CT imaging over a 12-month period for neuroendocrine investigation were included. Manual cardiac and aortic segmentations were performed by an experienced observer. Scans were randomly allocated in a 64:16:20 ratio for training, validation, and testing of the nnU-Net model. PET tracer uptake and calcium scoring were compared between segmentation methods and different observers. A total of 116 patients (53.5% female) with a median age of 64.5 years (range 23-79) were included. There were strong, positive correlations between all segmentations (mostly r > 0.98). There were no significant differences in SUVmean between manual and AI segmentation for global cardiac (mean ± SD 0.71 ± 0.22 vs. 0.71 ± 0.22; mean difference 0.001 ± 0.008, p > 0.05), ascending aorta (0.44 ± 0.14 vs. 0.44 ± 0.14; mean difference 0.002 ± 0.01, p > 0.05), aortic arch (0.44 ± 0.10 vs. 0.43 ± 0.10; mean difference 0.008 ± 0.16, p > 0.05), and descending aorta (0.58 ± 0.12 vs. 0.57 ± 0.12; mean difference 0.01 ± 0.03, p > 0.05) contours. There was excellent agreement between the majority of manual and AI segmentation measures (r ≥ 0.80) and across all vascular contour calcium scores. Compared with the manual segmentation approach, the CNN required significantly less workflow time. AI segmentation of vascular contours using nnU-Net yielded measures of PET tracer uptake and vascular calcification very similar to those of an experienced observer and significantly reduced workflow time.
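To make the agreement analysis described above concrete, here is a minimal sketch (not the authors' code) of how paired SUVmean values from manual and AI segmentations might be compared with Pearson correlation and a paired t-test; the synthetic values are assumptions chosen only to mimic the reported global cardiac figures.

```python
# Minimal sketch (not from the paper): comparing SUVmean from manual vs. AI
# (nnU-Net) segmentations with Pearson correlation and a paired t-test.
# All values below are synthetic assumptions mimicking the reported figures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_patients = 116

# Synthetic per-patient global cardiac SUVmean from manual contours,
# and AI-derived values differing only by small segmentation noise.
manual_suv = rng.normal(loc=0.71, scale=0.22, size=n_patients)
ai_suv = manual_suv + rng.normal(loc=0.0, scale=0.008, size=n_patients)

r, _ = stats.pearsonr(manual_suv, ai_suv)      # expect r > 0.98
t, p = stats.ttest_rel(manual_suv, ai_suv)     # expect p > 0.05 (no systematic bias)
bias = float(np.mean(ai_suv - manual_suv))     # mean difference between methods

print(f"Pearson r = {r:.3f}, paired t-test p = {p:.3f}, mean difference = {bias:.4f}")
```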
Affiliation(s)
- R Parry
- School of Medicine, The University of Western Australia, Perth, Australia.
- Department of Cardiology, Royal Perth Hospital, Perth, Australia.
- K Wright
- School of Physics, Mathematics and Computing, The University of Western Australia, Crawley, WA, Australia
- J W Bellinge
- School of Medicine, The University of Western Australia, Perth, Australia
- Department of Cardiology, Royal Perth Hospital, Perth, Australia
- M A Ebert
- School of Physics, Mathematics and Computing, The University of Western Australia, Crawley, WA, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Australia
- School of Medicine and Population Health, University of Wisconsin, Madison, WI, USA
- P Rowshanfarzad
- School of Physics, Mathematics and Computing, The University of Western Australia, Crawley, WA, Australia
- R J Francis
- School of Medicine, The University of Western Australia, Perth, Australia
- Department of Nuclear Medicine, Sir Charles Gairdner Hospital, Perth, Australia
- C J Schultz
- School of Medicine, The University of Western Australia, Perth, Australia
- Department of Cardiology, Royal Perth Hospital, Perth, Australia
3
Sahai N, Kumar P, Sharma M. Virtual Reality Rehabilitation and Artificial Intelligence in Healthcare Technology. Advances in Hospitality, Tourism, and the Services Industry 2024:395-416. [DOI: 10.4018/979-8-3693-2272-7.ch020]
Abstract
One benefit of virtual rehabilitation is that it increases patient engagement and motivation. Another advantage is that it allows therapy to be tailored to the individual patient. A third is that the therapist can make sessions more efficient and productive. A key feature of virtual reality (VR) rehabilitation is the ability to create virtual environments that are more realistic than those in a video game and in which patients can perform their exercises. As a result, patients are more immersed and motivated, avoiding the boredom from which patients in standard therapy usually suffer. In biomedicine, artificial intelligence (AI) offers optimisation of diagnostics, treatment, and patient monitoring, and AI-based analysis has the potential to detect subtle deviations. In this chapter, the application of virtual reality and artificial intelligence in healthcare is discussed.
Affiliation(s)
- Megha Sharma
- Faculty of Health Sciences, University of Pécs, Hungary
4
Lyu S, Zhang M, Zhang B, Gao L, Yang L, Guerrini S, Ong E, Zhang Y. The application of computer-aided diagnosis in Breast Imaging Reporting and Data System ultrasound training for residents: a randomized controlled study. Transl Cancer Res 2024; 13:1969-1979. [PMID: 38737674 PMCID: PMC11082692 DOI: 10.21037/tcr-23-2122]
Abstract
Background The consistency of Breast Imaging Reporting and Data System (BI-RADS) classification varies even among experienced radiologists, and the system is difficult for inexperienced radiologists to master. This study aims to explore the value of computer-aided diagnosis (CAD) (the AI-SONIC breast automatic detection system) in BI-RADS training for residents. Methods A total of 12 residents in the first and second years of standardized resident training at Ningbo No. 2 Hospital from May 2020 to May 2021 were randomly divided into 3 groups (Group 1, Group 2, Group 3) for BI-RADS training. They were asked to complete 2 tests and questionnaires at the beginning and end of the training. After the first test, educational materials were given to the residents and reviewed during the breast imaging training month. Group 1 studied independently, Group 2 studied with CAD, and Group 3 was taught face-to-face by experts. The residents' test scores and ultrasonographic descriptors were evaluated and compared with those of radiology specialists. The trainees' confidence in, and recognition of, CAD were investigated by questionnaire. Results There was no statistically significant difference among the 3 groups in the first test scores (P=0.637). After training, the scores of all 3 groups improved in the second test (P=0.006). Group 2 (52±7.30) and Group 3 (54±5.16) scored significantly higher than Group 1 (38±3.65). The consistency of ultrasonographic descriptors and final assessments between residents and senior radiologists improved (κ3 > κ2 > κ1), with κ2 and κ3 >0.4 (moderate agreement with experts) and κ1 =0.225 (fair agreement with experts). The questionnaire results showed that the trainees' confidence in BI-RADS classification increased, especially in Group 2 (1.5 to 3.5) and Group 3 (1.25 to 3.75). All trainees agreed that CAD was helpful for learning BI-RADS (Likert scale score: 4.75 out of 5) and were willing to use CAD as an aid (4.5 out of 5). Conclusions The AI-SONIC breast automatic detection system can help residents quickly master BI-RADS, improve consistency between residents and experts, and increase residents' confidence in BI-RADS classification, suggesting potential value in BI-RADS training for radiology residents. Trial Registration Chinese Clinical Trial Registry (ChiCTR2400081672).
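As an illustration of the kappa statistics reported above, the following is a minimal sketch (not part of the study) of how resident-expert agreement on BI-RADS assessments can be quantified with Cohen's kappa; the category labels are hypothetical.

```python
# Minimal sketch (not from the study): quantifying resident-expert agreement on
# BI-RADS final assessments with Cohen's kappa. The labels below are hypothetical.
from sklearn.metrics import cohen_kappa_score

# BI-RADS categories assigned to the same 15 lesions by a senior radiologist
# and by a resident.
expert   = [3, 4, 4, 2, 5, 3, 4, 2, 3, 5, 4, 3, 2, 4, 5]
resident = [3, 4, 3, 2, 5, 3, 4, 3, 3, 5, 4, 4, 2, 4, 5]

kappa = cohen_kappa_score(expert, resident)
# Roughly: kappa of 0.2-0.4 is often read as fair agreement, 0.4-0.6 as moderate.
print(f"Cohen's kappa = {kappa:.3f}")
```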
Affiliation(s)
- Shuyi Lyu
- Department of Ultrasound, Ningbo No. 2 Hospital, Ningbo, China
- Department of Ultrasound, Zhenhai Hospital of Traditional Chinese Medicine, Ningbo, China
- Meiwu Zhang
- Department of Ultrasound, Ningbo No. 2 Hospital, Ningbo, China
- Baisong Zhang
- Department of Ultrasound, Ningbo No. 2 Hospital, Ningbo, China
- Libo Gao
- Department of Ultrasound, Ningbo No. 2 Hospital, Ningbo, China
- Liu Yang
- Department of Ultrasound, Ningbo No. 2 Hospital, Ningbo, China
- Susanna Guerrini
- Unit of Diagnostic Imaging, Department of Medical Sciences, Azienda Ospedaliero-Universitaria Senese, Siena, Italy
- Eugene Ong
- Diagnostic Radiology, Mount Elizabeth Novena Hospital, Singapore, Singapore
- Yan Zhang
- Department of Ultrasound, Ningbo No. 2 Hospital, Ningbo, China
- Department of Ultrasound, Zhenhai Hospital of Traditional Chinese Medicine, Ningbo, China
5
Lindroth H, Nalaie K, Raghu R, Ayala IN, Busch C, Bhattacharyya A, Moreno Franco P, Diedrich DA, Pickering BW, Herasevich V. Applied Artificial Intelligence in Healthcare: A Review of Computer Vision Technology Application in Hospital Settings. J Imaging 2024; 10:81. [PMID: 38667979 PMCID: PMC11050909 DOI: 10.3390/jimaging10040081]
Abstract
Computer vision (CV), a type of artificial intelligence (AI) that uses digital videos or sequences of images to recognize content, has been used extensively across industries in recent years. In the healthcare industry, however, its applications are limited by factors such as privacy, safety, and ethical concerns. Despite this, CV has the potential to improve patient monitoring and system efficiency while reducing workload. In contrast to previous reviews, we focus on the end-user applications of CV. First, we briefly review and categorize CV applications in other industries (job enhancement, surveillance and monitoring, automation, and augmented reality). We then review developments of CV in hospital, outpatient, and community settings. Recent advances in monitoring delirium, pain and sedation, patient deterioration, mechanical ventilation, mobility, patient safety, surgical applications, quantification of workload in the hospital, and monitoring for patient events outside the hospital are highlighted. To identify opportunities for future applications, we also completed journey mapping at different system levels. Lastly, we discuss the privacy, safety, and ethical considerations associated with CV and outline the processes in algorithm development and testing that limit CV expansion in healthcare. This comprehensive review highlights CV applications and ideas for its expanded use in healthcare.
Affiliation(s)
- Heidi Lindroth
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; (K.N.); (R.R.); (I.N.A.); (C.B.)
- Center for Aging Research, Regenstrief Institute, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Center for Health Innovation and Implementation Science, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Keivan Nalaie
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; (K.N.); (R.R.); (I.N.A.); (C.B.)
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA; (D.A.D.); (B.W.P.); (V.H.)
- Roshini Raghu
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; (K.N.); (R.R.); (I.N.A.); (C.B.)
- Ivan N. Ayala
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; (K.N.); (R.R.); (I.N.A.); (C.B.)
- Charles Busch
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; (K.N.); (R.R.); (I.N.A.); (C.B.)
- College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Pablo Moreno Franco
- Department of Transplantation Medicine, Mayo Clinic, Jacksonville, FL 32224, USA
- Daniel A. Diedrich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA; (D.A.D.); (B.W.P.); (V.H.)
- Brian W. Pickering
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA; (D.A.D.); (B.W.P.); (V.H.)
- Vitaly Herasevich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA; (D.A.D.); (B.W.P.); (V.H.)
6
Abbaker N, Minervini F, Guttadauro A, Solli P, Cioffi U, Scarci M. The future of artificial intelligence in thoracic surgery for non-small cell lung cancer treatment: a narrative review. Front Oncol 2024; 14:1347464. [PMID: 38414748 PMCID: PMC10897973 DOI: 10.3389/fonc.2024.1347464]
Abstract
OBJECTIVES To present a comprehensive review of the current state of artificial intelligence (AI) applications in lung cancer management, spanning the preoperative, intraoperative, and postoperative phases. METHODS A review of the literature was conducted using PubMed, EMBASE, and Cochrane, including relevant studies published between 2002 and 2023, to identify the latest research on artificial intelligence and lung cancer. CONCLUSION While AI holds promise in managing lung cancer, challenges remain. In the preoperative phase, AI can improve diagnostics and predict biomarkers, particularly in cases with limited biopsy material. During surgery, AI can provide real-time guidance. Postoperatively, AI assists in pathology assessment and predictive modelling. Challenges include interpretability issues, training limitations affecting model use, and AI's ineffectiveness beyond classification tasks. Overfitting, limited global generalization, high computational costs, and the need for ethical frameworks pose further hurdles. Addressing these challenges requires a careful approach that considers ethical, technical, and regulatory factors. Rigorous analysis, external validation, and a robust regulatory framework are crucial for responsible AI implementation in lung surgery, reflecting the evolving synergy between human expertise and technology.
Affiliation(s)
- Namariq Abbaker
- Division of Thoracic Surgery, Imperial College NHS Healthcare Trust and National Heart and Lung Institute, London, United Kingdom
- Fabrizio Minervini
- Division of Thoracic Surgery, Luzerner Kantonsspital, Lucerne, Switzerland
- Angelo Guttadauro
- Division of Surgery, Università Milano-Bicocca and Istituti Clinici Zucchi, Monza, Italy
- Piergiorgio Solli
- Division of Thoracic Surgery, Policlinico S. Orsola-Malpighi, Bologna, Italy
- Ugo Cioffi
- Department of Surgery, University of Milan, Milan, Italy
- Marco Scarci
- Division of Thoracic Surgery, Imperial College NHS Healthcare Trust and National Heart and Lung Institute, London, United Kingdom
7
Xin N, Wu X, Chen Z, Wei R, Saito Y, Lachkar S, Salvicchi A, Fumimoto S, Drevet G, Xu Z, Huang K, Tang H. A new preoperative localization of pulmonary nodules guided by mixed reality: a pilot study of an animal model. Transl Lung Cancer Res 2023; 12:150-157. [PMID: 36762064 PMCID: PMC9903086 DOI: 10.21037/tlcr-22-884]
Abstract
Background With the increasing use of high-resolution computed tomography (HRCT), more and more pulmonary nodules are being discovered. Video-assisted thoracoscopic surgery (VATS) has become the first choice for surgical treatment of pulmonary nodules, and accurate preoperative localization is crucial for successful resection. Many preoperative localization methods currently exist, but each has certain disadvantages. This study aimed to evaluate the feasibility and safety of mixed reality (MR)-guided localization of pulmonary nodules, a new method that may benefit patients to a greater extent. Methods In an animal model of pulmonary nodule localization, 28 nodules were localized under MR guidance. We recorded the localization accuracy, localization time, number of insertion attempts, and incidence of localization-related complications. Results All 28 nodules were successfully localized: the deviation of MR-guided localization was 5.71±2.59 mm, the localization time was 8.07±1.44 min, and a single insertion attempt was required. Pneumothorax occurred in 1 case and localizer dislodgement in 1 case. Conclusions Because preoperative localization is critical for VATS resection of pulmonary nodules, we investigated a new localization method. Our study indicates that MR-guided localization of pulmonary nodules is feasible and safe and is worthy of further research and promotion. We have also registered corresponding clinical trials to investigate this technique further.
Affiliation(s)
- Ning Xin
- Department of Thoracic Surgery, PLA 960th Hospital, Jinan, China
- Department of Thoracic Surgery, Shanghai Changzheng Hospital, Navy Military Medical University, Shanghai, China
- Xiaoyu Wu
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Zihao Chen
- Department of Thoracic Surgery, Shanghai Changzheng Hospital, Navy Military Medical University, Shanghai, China
- Rongqiang Wei
- Department of Thoracic Surgery, Shanghai Changzheng Hospital, Navy Military Medical University, Shanghai, China
- Yuichi Saito
- Department of Surgery, Teikyo University School of Medicine, Tokyo, Japan
- Samy Lachkar
- Department of Pulmonology, Thoracic Oncology and Respiratory Intensive Care, Hôpital Charles Nicolle, CHU de Rouen, Rouen Cedex, France
- Satoshi Fumimoto
- Department of Thoracic and Cardiovascular Surgery, Osaka Medical and Pharmaceutical University, Osaka, Japan
- Gabrielle Drevet
- Department of Thoracic Surgery, Lung and Heart-Lung Transplantation, Louis Pradel Hospital, Hospices Civils de Lyon, Lyon, France
- Zhifei Xu
- Department of Thoracic Surgery, Shanghai Changzheng Hospital, Navy Military Medical University, Shanghai, China
- Kenan Huang
- Department of Thoracic Surgery, Shanghai Changzheng Hospital, Navy Military Medical University, Shanghai, China
- Hua Tang
- Department of Thoracic Surgery, Shanghai Changzheng Hospital, Navy Military Medical University, Shanghai, China
8
Park JJ, Tiefenbach J, Demetriades AK. The role of artificial intelligence in surgical simulation. Front Med Technol 2022; 4:1076755. [PMID: 36590155 PMCID: PMC9794840 DOI: 10.3389/fmedt.2022.1076755]
Abstract
Artificial intelligence (AI) plays an integral role in enhancing the quality of surgical simulation, which is increasingly becoming a popular tool for enriching a surgeon's training experience. This spans the spectrum from facilitating preoperative planning to intraoperative visualisation and guidance, ultimately with the aim of improving patient safety. Although arguably still in the early stages of widespread clinical application, AI technology enables personal evaluation and provides personalised feedback in surgical training simulations. Several forms of surgical visualisation technology currently used for anatomical education and presurgical assessment rely on different AI algorithms. However, while it is promising to see clinical examples and technological reports attesting to the efficacy of AI-supported surgical simulators, the barriers to widespread commercialisation of such devices and software remain complex and multifactorial. High implementation and production costs, scarcity of reports evidencing the superiority of such technology, and intrinsic technological limitations remain at the forefront. As AI technology is key to driving the future of surgical simulation, this paper reviews the literature delineating its current state, challenges, and prospects. In addition, a consolidated list of FDA/CE-approved AI-powered medical devices for surgical simulation is presented, to shed light on the existing gap between academic achievements and the universal commercialisation of AI-enabled simulators. We call for further clinical assessment of AI-supported surgical simulators to support novel regulatory-body-approved devices and usher surgery into a new era of surgical education.
Affiliation(s)
- Jay J. Park
- Department of General Surgery, Norfolk and Norwich University Hospital, Norwich, United Kingdom
- Edinburgh Medical School, University of Edinburgh, Edinburgh, United Kingdom
- Jakov Tiefenbach
- Neurological Institute, Cleveland Clinic, Cleveland, OH, United States
- Andreas K. Demetriades
- Edinburgh Medical School, University of Edinburgh, Edinburgh, United Kingdom
- Department of Neurosurgery, Royal Infirmary of Edinburgh, Edinburgh, United Kingdom
9
Krass S, Lassen-Schmidt B, Schenk A. Computer-assisted image-based risk analysis and planning in lung surgery - a review. Front Surg 2022; 9:920457. [PMID: 36211288 PMCID: PMC9535081 DOI: 10.3389/fsurg.2022.920457]
Abstract
In this paper, we give an overview of current trends in computer-assisted, image-based methods for risk analysis and planning in lung surgery and present our own developments, with a focus on computed tomography (CT)-based algorithms and applications. The methods combine heuristic, knowledge-based image-processing algorithms for segmentation, quantification, and visualization based on CT images of the lung. The impact on lung surgery is discussed with regard to risk assessment, quantitative assessment of resection strategies, and surgical guidance. In perspective, we discuss the role of deep-learning-based AI methods for further improvements.
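As a rough illustration of what heuristic, CT-based segmentation and quantification can look like (this is not the pipeline described in the review), the sketch below thresholds a synthetic CT volume in Hounsfield units, keeps the largest low-density components as lungs, and reports their volume; all thresholds and data are assumptions.

```python
# Illustrative sketch only (not the authors' software): heuristic threshold-based
# lung segmentation and volume quantification on a CT volume in Hounsfield units.
import numpy as np
from scipy import ndimage

def segment_lungs(ct_hu, air_threshold=-400):
    """Label voxels below the HU threshold and keep the two largest components.
    Real pipelines would first exclude air outside the body and refine borders."""
    candidate = ct_hu < air_threshold
    labels, n = ndimage.label(candidate)
    if n == 0:
        return np.zeros_like(candidate, dtype=bool)
    sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1          # two largest components ~ both lungs
    return np.isin(labels, keep)

# Synthetic stand-in for a CT volume (1 mm isotropic voxels): soft-tissue background
# with two low-density blocks standing in for the lungs.
rng = np.random.default_rng(0)
ct = rng.normal(40, 10, size=(128, 128, 128))
ct[30:90, 20:60, 30:100] = -800    # "left lung"
ct[30:90, 70:110, 30:100] = -800   # "right lung"

mask = segment_lungs(ct)
lung_volume_ml = mask.sum() / 1000.0           # 1 mm^3 voxels -> millilitres
print(f"Estimated lung volume: {lung_volume_ml:.0f} mL")
```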
Affiliation(s)
- Stefan Krass
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Correspondence: Stefan Krass
- Andrea Schenk
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Department of Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany