1. Alvarez-Lozada LA, Arrambide-Garza FJ, Quiroga-Garza A, Huerta-Sanchez MC, Escobar-Luna A, Sada-Treviño MA, Ramos-Proaño CE, Elizondo-Omaña RE. Underdiagnosis of umbilical hernias in CT scans in a multicenter study - the radiologically neglected pathology and its surgical implications. Hernia 2024. PMID: 38837076. DOI: 10.1007/s10029-024-03079-9.
Abstract
PURPOSE Umbilical hernias (UH) have a higher prevalence than previously considered. Given the high workload radiologists must endure, UH can be missed when interpreting a computed tomography (CT) scan. The clinical implications of this misdiagnosis are yet to be determined. Under-reporting could lead to lesions of hernia contents during surgical approaches and other potential complications. The aim was to determine the prevalence of UH using CT scans, and the incidence of radiological reporting. METHODS A multicenter, cross-sectional study was performed in four tertiary-level hospitals. CT scans were reviewed for abdominal wall defects at the umbilicus, and radiological reports were examined to compare findings. In cases of UH, transversal, anteroposterior, and craniocaudal lengths were obtained. RESULTS A total of 1557 CT scans were included, of which 971 (62.4%, 95% CI 0.59-0.64) had UH. Of those, 629 (64.8%, 95% CI 0.61-0.67) of the defects were not included in the radiological report. Smaller UH (mean 7.7 × 6.0 mm) were more frequently missed. Of the reported UH, 187 (54.7%) included at least one axis measurement, 289 (84.5%) a description of the contents, and 146 (42.7%) whether signs of complications were present. CONCLUSION There is a high prevalence of UH and a high incidence of under-reporting. This raises the question of whether this is a population-based finding or the norm worldwide. The reasons for under-reporting and their clinical implications must be addressed in further studies.
Affiliation(s)
- Luis Adrian Alvarez-Lozada
- Clinical-Surgical Research Group (GICQx), Human Anatomy Research Group (GIA), Human Anatomy Department, School of Medicine, Universidad Autonoma de Nuevo Leon, Francisco I. Madero y Aguirre Pequeño sin número, Colonia Mitras Centro, Monterrey, Nuevo León, C.P. 64460, México
- Francisco Javier Arrambide-Garza
- Clinical-Surgical Research Group (GICQx), Human Anatomy Research Group (GIA), Human Anatomy Department, School of Medicine, Universidad Autonoma de Nuevo Leon, Francisco I. Madero y Aguirre Pequeño sin número, Colonia Mitras Centro, Monterrey, Nuevo León, C.P. 64460, México
- Alejandro Quiroga-Garza
- Clinical-Surgical Research Group (GICQx), Human Anatomy Research Group (GIA), Human Anatomy Department, School of Medicine, Universidad Autonoma de Nuevo Leon, Francisco I. Madero y Aguirre Pequeño sin número, Colonia Mitras Centro, Monterrey, Nuevo León, C.P. 64460, México
- Servicio de Cirugía General, Hospital de Traumatología y Ortopedia No. 21, Instituto Mexicano del Seguro Social, Monterrey, Nuevo Leon, Mexico
- Monica Catalina Huerta-Sanchez
- Department of Radiology, School of Medicine, Universidad Autonoma de Nuevo Leon, University Hospital "Dr. Jose Eleuterio Gonzalez", Monterrey, Mexico
- Ana Escobar-Luna
- Department of Radiology, Instituto Tecnológico y de Estudios Superiores de Monterrey, Hospital San José Tec Salud, Monterrey, Mexico
- Carlos Enrique Ramos-Proaño
- Department of Radiology, Instituto Tecnológico y de Estudios Superiores de Monterrey, Hospital San José Tec Salud, Monterrey, Mexico
- Rodrigo Enrique Elizondo-Omaña
- Clinical-Surgical Research Group (GICQx), Human Anatomy Research Group (GIA), Human Anatomy Department, School of Medicine, Universidad Autonoma de Nuevo Leon, Francisco I. Madero y Aguirre Pequeño sin número, Colonia Mitras Centro, Monterrey, Nuevo León, C.P. 64460, México
2. Tan JR, Gao Y, Raghuraman R, Ting D, Wong KM, Cheng LTE, Oh HC, Goh SH, Yan YY. Application of deep learning algorithms in classification and localization of implant cutout for the postoperative hip. Skeletal Radiol 2024. PMID: 38771507. DOI: 10.1007/s00256-024-04692-6.
Abstract
OBJECTIVE This study aims to explore the feasibility of employing convolutional neural networks for detecting and localizing implant cutouts on anteroposterior pelvic radiographs. MATERIALS AND METHODS The research involved the development of two deep learning models. Initially, a model was created for image-level classification of implant cutouts using 40,191 pelvic radiographs obtained from a single institution. The radiographs were partitioned into training, validation, and hold-out test datasets in a 6:2:2 ratio. Performance metrics including the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity were calculated using the test dataset. Additionally, a second object detection model was trained to localize implant cutouts within the same dataset. Bounding box visualizations were generated on images predicted as cutout-positive by the classification model in the test dataset, serving as an adjunct for assessing algorithm validity. RESULTS The classification model had an accuracy of 99.7%, sensitivity of 84.6%, specificity of 99.8%, AUROC of 0.998 (95% CI: 0.996, 0.999), and area under the precision-recall curve (AUPRC) of 0.774 (95% CI: 0.646, 0.880). From the pelvic radiographs predicted as cutout-positive, the object detection model achieved 95.5% localization accuracy on true-positive images, but generated spurious bounding boxes on 14 of the 15 false-positive predictions. CONCLUSION The classification model showed fair accuracy for detection of implant cutouts, while the object detection model effectively localized the cutouts. This serves as proof of concept of using a deep learning-based approach for classification and localization of implant cutouts on pelvic radiographs.
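The accuracy, sensitivity, and specificity quoted above all derive from the 2×2 confusion matrix of the hold-out test set. A minimal sketch of those definitions (illustrative only, not the authors' code):

```python
def confusion_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels (1 = cutout-positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),            # recall on positives
        "specificity": tn / (tn + fp),            # recall on negatives
        "accuracy": (tp + tn) / len(y_true),
    }
```

With a heavily imbalanced dataset such as this one, near-perfect accuracy and specificity can coexist with a much lower AUPRC, which is why the authors report both.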
Affiliation(s)
- Jin Rong Tan
- Department of Diagnostic Radiology, Singapore General Hospital, Block 2, Level 1 Outram Road, Singapore 169608, Singapore
- Radiological Sciences ACP, Duke-NUS Medical School, Singapore, Singapore
- Yan Gao
- Health Services Research, Changi General Hospital, Singapore Health Services, Singapore, Singapore
- Raghavan Raghuraman
- Department of Orthopaedic Surgery, Changi General Hospital, Singapore, Singapore
- Daniel Ting
- Duke-NUS Medical School, Singapore Health Service (SingHealth), Singapore, Singapore
- Kang Min Wong
- Radiological Sciences ACP, Duke-NUS Medical School, Singapore, Singapore
- Department of Radiology, Changi General Hospital, Singapore, Singapore
- Lionel Tim-Ee Cheng
- Department of Diagnostic Radiology, Singapore General Hospital, Block 2, Level 1 Outram Road, Singapore 169608, Singapore
- Radiological Sciences ACP, Duke-NUS Medical School, Singapore, Singapore
- Hong Choon Oh
- Health Services Research, Changi General Hospital, Singapore Health Services, Singapore, Singapore
- Siang Hiong Goh
- Department of Emergency Medicine, Changi General Hospital, Singapore, Singapore
- Yet Yen Yan
- Radiological Sciences ACP, Duke-NUS Medical School, Singapore, Singapore
- Department of Radiology, Changi General Hospital, Singapore, Singapore
3. Xiang B, Lu J, Yu J. Evaluating tooth segmentation accuracy and time efficiency in CBCT images using artificial intelligence: A systematic review and Meta-analysis. J Dent 2024; 146:105064. PMID: 38768854. DOI: 10.1016/j.jdent.2024.105064.
Abstract
OBJECTIVES This systematic review and meta-analysis aimed to assess the current performance of artificial intelligence (AI)-based methods for tooth segmentation in three-dimensional cone-beam computed tomography (CBCT) images, with a focus on their accuracy and efficiency compared to those of manual segmentation techniques. DATA The data analyzed in this review consisted of a wide range of research studies utilizing AI algorithms for tooth segmentation in CBCT images. Meta-analysis was performed, focusing on the evaluation of the segmentation results using the Dice similarity coefficient (DSC). SOURCES PubMed, Embase, Scopus, Web of Science, and IEEE Xplore were comprehensively searched to identify relevant studies. The initial search yielded 5642 entries, and subsequent screening and selection led to the inclusion of 35 studies in the systematic review. Among the various segmentation methods employed, convolutional neural networks, particularly the U-Net model, were the most commonly utilized. The pooled effect of the DSC score for tooth segmentation was 0.95 (95% CI 0.94 to 0.96). Furthermore, seven papers provided insights into the time required for segmentation, which ranged from 1.5 s to 3.4 min when utilizing AI techniques. CONCLUSIONS AI models demonstrated favorable accuracy in automatically segmenting teeth from CBCT images while reducing the time required for the process. Nevertheless, correction methods for metal artifacts and tooth structure segmentation using different imaging modalities should be addressed in future studies. CLINICAL SIGNIFICANCE AI algorithms have great potential for precise tooth measurements, orthodontic treatment planning, dental implant placement, and other dental procedures that require accurate tooth delineation. These advances have contributed to improved clinical outcomes and patient care in dental practice.
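The DSC used as the pooled outcome above measures overlap between a predicted and a reference segmentation: DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch for binary masks (illustrative, not any of the reviewed implementations):

```python
def dice_coefficient(pred, ref):
    """Dice similarity coefficient for two binary masks given as flat 0/1 sequences."""
    intersection = sum(p * r for p, r in zip(pred, ref))  # |A ∩ B|
    total = sum(pred) + sum(ref)                          # |A| + |B|
    return 2 * intersection / total if total else 1.0     # two empty masks: perfect agreement
```

A DSC of 1.0 means identical masks; the pooled 0.95 reported above indicates near-complete overlap with manual segmentation.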
Affiliation(s)
- Bilu Xiang
- School of Dentistry, Shenzhen University Medical School, Shenzhen University, Shenzhen 518000, China
- Jiayi Lu
- Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518000, China
- Jiayi Yu
- Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518000, China
4. Gairola S, Solanki SL, Patkar S, Goel M. Artificial Intelligence in Perioperative Planning and Management of Liver Resection. Indian J Surg Oncol 2024; 15:186-195. PMID: 38818006. PMCID: PMC11133260. DOI: 10.1007/s13193-024-01883-4.
Abstract
Artificial intelligence (AI) is a specialty within computer science that deals with creating systems able to replicate the intelligence and problem-solving abilities of the human mind. AI includes a diverse array of techniques and approaches such as machine learning, neural networks, natural language processing, robotics, and expert systems. An electronic literature search was conducted using the databases "PubMed" and "Google Scholar". The period for the search was from 2000 to June 2023. The search terms included "artificial intelligence", "machine learning", "liver cancers", "liver tumors", "hepatectomy", "perioperative", and their synonyms in various combinations. The search also included all MeSH terms. The extracted articles were further reviewed in a stepwise manner to identify relevant studies. A total of 148 articles were identified after the initial literature search. The initial review included screening of article titles for relevance and identifying duplicates. Finally, 65 articles were included in this review. The future of AI in liver cancer planning and management holds immense promise. AI-driven advancements will increasingly enable precise tumour detection, localisation, and characterisation through enhanced image analysis. Machine learning algorithms will predict patient-specific treatment responses and complications, allowing for tailored therapies. Surgical robots and AI-guided procedures will enhance the precision of liver resections, reducing risks and improving outcomes. AI will also streamline patient monitoring and hemodynamic management, enabling early detection of recurrence or complications. Moreover, AI will facilitate data-driven research, accelerating the development of novel treatments and therapies. Ultimately, AI's integration will revolutionise liver cancer care, offering personalised, efficient, and effective solutions and improving patients' quality of life and survival rates.
Affiliation(s)
- Shruti Gairola
- Department of Anaesthesiology, Critical Care and Pain, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai, Maharashtra, India
- Sohan Lal Solanki
- Department of Anaesthesiology, Critical Care and Pain, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai, Maharashtra, India
- Shraddha Patkar
- Division of Hepatobiliary Surgical Oncology, Department of Surgical Oncology, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai, Maharashtra, India
- Mahesh Goel
- Division of Hepatobiliary Surgical Oncology, Department of Surgical Oncology, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai, Maharashtra, India
5. Estrada Alamo CE, Diatta F, Monsell SE, Lane-Fall MB. Artificial Intelligence in Anesthetic Care: A Survey of Physician Anesthesiologists. Anesth Analg 2024; 138:938-950. PMID: 38055624. DOI: 10.1213/ane.0000000000006752.
Abstract
BACKGROUND This study explored physician anesthesiologists' knowledge, exposure, and perceptions of artificial intelligence (AI) and their associations with attitudes and expectations regarding its use in clinical practice. The findings highlight the importance of understanding anesthesiologists' perspectives for the successful integration of AI into anesthesiology, as AI has the potential to revolutionize the field. METHODS A cross-sectional survey of 27,056 US physician anesthesiologists was conducted to assess their knowledge, perceptions, and expectations regarding the use of AI in clinical practice. The primary outcome measured was attitude toward the use of AI in clinical practice, with scores of 4 or 5 on a 5-point Likert scale indicating positive attitudes. The anticipated impact of AI on various aspects of professional work was measured using a 3-point Likert scale. Logistic regression was used to explore the relationship between participant responses and attitudes toward the use of AI in clinical practice. RESULTS The 2021 survey of 27,056 US physician anesthesiologists received 1086 responses (4% response rate). Most respondents were male (71%) and active clinicians (93%); the largest age group was under 45 (34%). A majority of anesthesiologists (61%) had some knowledge of AI, and 48% had a positive attitude toward using AI in clinical practice. While most respondents believed that AI can improve health care efficiency (79%), timeliness (75%), and effectiveness (69%), they were concerned that its integration into anesthesiology could lead to a decreased demand for anesthesiologists (45%) and decreased earnings (45%). Within a decade, respondents expected AI to outperform them in predicting adverse perioperative events (83%), formulating pain management plans (67%), and conducting airway exams (45%). The absence of algorithmic transparency (60%), an ambiguous environment regarding malpractice (47%), and the possibility of medical errors (47%) were cited as significant barriers to the use of AI in clinical practice. Respondents indicated that their motivation to use AI in clinical practice stemmed from its potential to enhance patient outcomes (81%), lower health care expenditures (54%), reduce bias (55%), and boost productivity (53%). Variables associated with positive attitudes toward AI use in clinical practice included male gender (odds ratio [OR], 1.7; P < .001), 20+ years of experience (OR, 1.8; P < .01), higher AI knowledge (OR, 2.3; P = .01), and greater AI openness (OR, 10.6; P < .01). Anxiety about future earnings was associated with negative attitudes toward AI use in clinical practice (OR, 0.54; P < .01). CONCLUSIONS Understanding anesthesiologists' perspectives on AI is essential for the effective integration of AI into anesthesiology, as AI has the potential to revolutionize the field.
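The odds ratios reported above come from logistic regression, but for a single binary predictor the OR reduces to a ratio of odds from a 2×2 table. A minimal illustration with made-up counts, not the study's data:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a = group 1, positive attitude;  b = group 1, negative attitude
    c = group 2, positive attitude;  d = group 2, negative attitude
    """
    return (a / b) / (c / d)

# Hypothetical counts only: 60/40 positive in group 1 vs 40/60 in group 2.
or_example = odds_ratio(60, 40, 40, 60)
```

An OR above 1 (e.g., the 10.6 for AI openness) means higher odds of a positive attitude in that group; an OR below 1 (the 0.54 for earnings anxiety) means lower odds.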
Affiliation(s)
- Carlos E Estrada Alamo
- Department of Anesthesiology, Virginia Mason Medical Center, Seattle, Washington
- Fortunay Diatta
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Yale School of Medicine, New Haven, Connecticut
- Sarah E Monsell
- Department of Biostatistics, University of Washington, Hans Rosling Center for Population Health, Seattle, Washington
- Meghan B Lane-Fall
- Department of Anesthesiology and Critical Care, University of Pennsylvania, Philadelphia, Pennsylvania
6. Stueckle CA, Haage P. The radiologist as a physician - artificial intelligence as a way to overcome tension between the patient, technology, and referring physicians - a narrative review. Fortschr Röntgenstr 2024. PMID: 38569517. DOI: 10.1055/a-2271-0799.
Abstract
BACKGROUND Large volumes of data increasing over time lead to a shortage of radiologists' time. The use of systems based on artificial intelligence (AI) offers opportunities to relieve the burden on radiologists. AI systems are usually optimized for one radiological area. Radiologists must understand the basic features of a system's technical function in order to assess its weaknesses and possible errors and to use its strengths. This "explainability" creates trust in an AI system and shows its limits. METHOD Based on an expanded Medline search for the key words "radiology, artificial intelligence, referring physician interaction, patient interaction, job satisfaction, communication of findings, expectations", additional relevant articles were selected subjectively for this narrative review. RESULTS The use of AI is well advanced, especially in radiology. The developer should provide the radiologist with clear explanations as to how the system works. All systems on the market have strengths and weaknesses. Some optimizations are unintentionally specific, as they are adapted too precisely to a certain environment that often does not exist in practice - this is known as "overfitting". It should also be noted that there are specific weak points in the systems, so-called "adversarial examples", which lead to fatal misdiagnoses by the AI even though they cannot be visually distinguished from an unremarkable finding by the radiologist. The user must know which diseases the system is trained for, which organ systems are recognized and taken into account by the AI, and, accordingly, which are not properly assessed. This means that the user can and must critically review the results and adjust the findings if necessary. Correctly applied, AI can save the radiologist time: a user who knows how the system works only has to spend a short amount of time checking the results. The time saved can be used for communication with patients and referring physicians and thus contribute to higher job satisfaction. CONCLUSION Radiology is a constantly evolving specialty with enormous responsibility, as radiologists often make the diagnosis to be treated. AI-supported systems should be used consistently to provide relief and support. Radiologists need to know the strengths, weaknesses, and areas of application of these AI systems in order to save time. The time gained can be used for communication with patients and referring physicians. KEY POINTS · Explainable AI systems help to improve workflow and save time. · The physician must critically review AI results, taking into account the limitations of the AI. · The AI system will only provide useful results if it has been adapted to the data type and data origin. · A communicating radiologist who takes an interest in the patient is important for the visibility of the discipline. CITATION FORMAT · Stueckle CA, Haage P. The radiologist as a physician - artificial intelligence as a way to overcome tension between the patient, technology, and referring physicians - a narrative review. Fortschr Röntgenstr 2024; DOI: 10.1055/a-2271-0799.
Affiliation(s)
- Patrick Haage
- Diagnostic and Interventional Radiology, HELIOS Universitätsklinikum Wuppertal, Germany
7. Marcus E, Teuwen J. Artificial intelligence and explanation: How, why, and when to explain black boxes. Eur J Radiol 2024; 173:111393. PMID: 38417186. DOI: 10.1016/j.ejrad.2024.111393.
Abstract
Artificial intelligence (AI) is taking nearly all fields of science by storm. One notorious property AI algorithms bring is their so-called black box character; in particular, they are said to be inherently unexplainable. Such characteristics would, of course, pose a problem for the medical world, including radiology. The patient journey is filled with explanations along the way, from diagnosis to treatment, follow-up, and more. If we were to replace part of these steps with non-explanatory algorithms, we could lose grip on vital aspects such as finding mistakes, patient trust, and even the creation of new knowledge. In this article, we argue that, even for the darkest of black boxes, there is hope of understanding them. In particular, we compare the situation of understanding black box models to that of understanding the laws of nature in physics. In the case of physics, we are given a 'black box' law of nature, about which there is no upfront explanation. However, as current physical theories show, we can learn plenty about them. During this discussion, we present the process by which we make such explanations and the human role therein, keeping a solid focus on radiological AI situations. We outline the AI developers' roles in this process, but also the critical role fulfilled by the practitioners, the radiologists, in providing a healthy system of continuous improvement of AI models. Furthermore, we explore the role of the explainable AI (XAI) research program in the broader context we describe.
Affiliation(s)
- Eric Marcus
- AI for Oncology, Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands
- Jonas Teuwen
- AI for Oncology, Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands; Department of Radiology and Nuclear Medicine, Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, the Netherlands
8. Silva TP, Pinheiro MCR, Freitas DQ, Gaêta-Araujo H, Oliveira-Santos C. Assessment of accuracy and reproducibility of cephalometric identification performed by 2 artificial intelligence-driven tracing applications and human examiners. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 137:431-440. PMID: 38365543. DOI: 10.1016/j.oooo.2024.01.011.
Abstract
OBJECTIVE To assess the accuracy and reproducibility of cephalometric landmark identification performed by 2 artificial intelligence (AI)-driven applications (CefBot and WebCeph) and human examiners. STUDY DESIGN Lateral cephalometric radiographs of 10 skulls containing 0.5-mm lead spheres placed directly at 10 cephalometric landmarks were obtained as the reference standard. Ten radiographs without spheres were obtained from the same skulls for identification of cephalometric points by the AI applications and 10 examiners. The x- and y-coordinate values of the cephalometric points identified by the AI applications and examiners were compared with those from the reference standard images using one-way analysis of variance and the Dunnett post hoc test. The intraclass correlation coefficient (ICC) was used to evaluate reproducibility. Mean radial error (MRE) in identification was calculated with respect to the reference standard. Statistical significance was established at P < .05. RESULTS Landmark identification by CefBot and the examiners did not exhibit significant differences from the reference standard on either axis (P > .05). WebCeph produced significant differences (P < .05) in 4 and 6 points on the x- and y-axes, respectively. Reproducibility was excellent for CefBot and the examiners (ICC ≥ 0.9943) and good for WebCeph (ICC ≥ 0.7868). The MREs of CefBot and the examiners were similar. CONCLUSION With results similar to those of human examiners, CefBot demonstrated excellent reliability and can aid in cephalometric applications. WebCeph produced significant errors.
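The MRE used above is the mean Euclidean distance between each predicted landmark and its reference position. A minimal sketch of that computation (illustrative, not the study's software):

```python
import math

def mean_radial_error(predicted, reference):
    """Mean radial error: average Euclidean distance between paired (x, y) landmarks."""
    errors = [math.dist(p, r) for p, r in zip(predicted, reference)]
    return sum(errors) / len(errors)
```

In landmark studies the coordinates are typically in millimetres after calibration, so the MRE reads directly as an average localization error in mm.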
Affiliation(s)
- Thaísa Pinheiro Silva
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, Sao Paulo, Brazil
- Maria Clara Rodrigues Pinheiro
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, Sao Paulo, Brazil
- Deborah Queiroz Freitas
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Piracicaba, Sao Paulo, Brazil
- Hugo Gaêta-Araujo
- Department of Stomatology, Public Oral Health, Forensic Dentistry, Division of Oral Radiology, School of Dentistry of Ribeirao Preto, University of Sao Paulo (USP), Ribeirao Preto, Sao Paulo, Brazil
- Christiano Oliveira-Santos
- Department of Diagnosis and Oral Health, University of Louisville School of Dentistry, Louisville, KY, USA
9. Al Mohammad B, Aldaradkeh A, Gharaibeh M, Reed W. Assessing radiologists' and radiographers' perceptions on artificial intelligence integration: opportunities and challenges. Br J Radiol 2024; 97:763-769. PMID: 38273675. PMCID: PMC11027289. DOI: 10.1093/bjr/tqae022.
Abstract
OBJECTIVES The objective of this study was to evaluate radiologists' and radiographers' opinions and perspectives on artificial intelligence (AI) and its integration into the radiology department. Additionally, we investigated the most common challenges and barriers that radiologists and radiographers face when learning about AI. METHODS A nationwide, online, descriptive, cross-sectional survey was distributed to radiologists and radiographers working in hospitals and medical centres from May 29, 2023 to July 30, 2023. The questionnaire examined the participants' opinions, feelings, and predictions regarding AI and its applications in the radiology department. Descriptive statistics were used to report the participants' demographics and responses. Five-point Likert-scale data were reported using divergent stacked bar graphs to highlight any central tendencies. RESULTS Responses were collected from 258 participants, revealing a positive attitude towards implementing AI. Both radiologists and radiographers predicted that breast imaging would be the subspecialty most impacted by the AI revolution. MRI, mammography, and CT were identified as the primary modalities of significant importance in the field of AI application. The major barrier encountered by radiologists and radiographers when learning about AI was the lack of mentorship, guidance, and support from experts. CONCLUSION Participants demonstrated a positive attitude towards learning about AI and implementing it in radiology practice. However, radiologists and radiographers encounter several barriers when learning about AI, such as the absence of support and direction from experienced professionals. ADVANCES IN KNOWLEDGE Radiologists and radiographers reported several barriers to AI learning, the most significant being the lack of mentorship and guidance from experts, followed by the lack of funding and investment in new technologies.
Affiliation(s)
- Badera Al Mohammad
- Department of Allied Medical Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid 22110, Jordan
- Afnan Aldaradkeh
- Department of Allied Medical Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid 22110, Jordan
- Monther Gharaibeh
- Department of Special Surgery, Faculty of Medicine, The Hashemite University, Zarqa 13133, Jordan
- Warren Reed
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, Sydney, NSW 2006, Australia
10. Shrivastava PK, Hasan S, Abid L, Injety R, Shrivastav AK, Sybil D. Accuracy of machine learning in the diagnosis of odontogenic cysts and tumors: a systematic review and meta-analysis. Oral Radiol 2024. PMID: 38530559. DOI: 10.1007/s11282-024-00745-7.
Abstract
BACKGROUND The recent impact of artificial intelligence in diagnostic services has been enormous. Machine learning tools offer an innovative alternative for the radiographic diagnosis of cysts and tumors, which poses challenges due to their near-identical presentation, anatomical variations, and superimposition. It is crucial that the performance of these models is evaluated for their clinical applicability in diagnosing cysts and tumors. METHODS A comprehensive literature search was carried out in eminent databases for studies published between January 2015 and December 2022. Studies utilizing machine learning models in the diagnosis of odontogenic cysts or tumors using orthopantomograms (OPG) or cone beam computed tomographic (CBCT) images were included. The QUADAS-2 tool was used to assess the risk of bias and applicability concerns. Meta-analysis was performed for studies reporting sufficient performance metrics, separately for OPG and CBCT. RESULTS Sixteen studies were included in the qualitative synthesis, comprising a total of 10,872 odontogenic cysts and tumors. The sensitivity and specificity of machine learning in diagnosing cysts and tumors through OPG were 0.83 (95% CI 0.81-0.85) and 0.82 (95% CI 0.81-0.83), respectively. Studies utilizing CBCT noted a sensitivity of 0.88 (95% CI 0.87-0.88) and specificity of 0.88 (95% CI 0.87-0.89). The highest classification accuracy, 100%, was noted for a Support Vector Machine classifier. CONCLUSION The results of the present review favour the use of machine learning models as a clinical adjunct in the radiographic diagnosis of odontogenic cysts and tumors, provided they undergo robust training on a large dataset. However, the arduous process, investment, and certain ethical concerns associated with total dependence on technology must be taken into account. Standardized reporting of outcomes for diagnostic studies utilizing machine learning methods is recommended to ensure homogeneity in assessment criteria, facilitate comparison between different studies, and promote transparency in research findings.
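The pooled sensitivities and specificities above are proportions with 95% confidence intervals. As a minimal illustration (the counts below are invented, not the review's data), per-study values can be derived from a 2×2 confusion matrix, with a Wilson score interval for the uncertainty:

```python
import math

def wilson_ci(successes: int, total: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - half, centre + half

# Invented counts for a single hypothetical study.
tp, fn, tn, fp = 83, 17, 82, 18

sensitivity = tp / (tp + fn)   # 0.83
specificity = tn / (tn + fp)   # 0.82
print("sensitivity", sensitivity, wilson_ci(tp, tp + fn))
print("specificity", specificity, wilson_ci(tn, tn + fp))
```

A meta-analysis would then combine such per-study estimates, weighting each study by its precision.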
Affiliation(s)
- Shamimul Hasan: Department of Oral Medicine and Radiology, Faculty of Dentistry, Jamia Millia Islamia, New Delhi, India
- Laraib Abid: Faculty of Dentistry, Jamia Millia Islamia, New Delhi, India
- Ranjit Injety: Department of Neurology, Christian Medical College & Hospital, Ludhiana, Punjab, India
- Ayush Kumar Shrivastav: Computer Science and Engineering, Centre for Development of Advanced Computing, Noida, Uttar Pradesh, India
- Deborah Sybil: Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Jamia Millia Islamia, New Delhi, India

11
VanDecker WA. The Integrative Sport of Cardiac Imaging and Clinical Cardiology: Machine Augmentation and an Evolving Odyssey. JACC Cardiovasc Imaging 2024:S1936-878X(24)00079-2. [PMID: 38613557 DOI: 10.1016/j.jcmg.2024.02.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/07/2024] [Accepted: 02/13/2024] [Indexed: 04/15/2024]
Affiliation(s)
- William A VanDecker: Lewis Katz School of Medicine at Temple University, Philadelphia, Pennsylvania, USA

12
Shawn Yuan PH, Yan TD, Sharma S, Chahley E, MacLean LJ, Freitas V, Yong-Hing CJ. Authorship gender among articles about artificial intelligence in breast imaging. Eur J Radiol 2024; 175:111428. [PMID: 38492508 DOI: 10.1016/j.ejrad.2024.111428] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2023] [Revised: 03/04/2024] [Accepted: 03/11/2024] [Indexed: 03/18/2024]
Abstract
RATIONALE AND OBJECTIVES The purpose of this study was to investigate the representation of women authors, specifically in first and senior authorship, among peer-reviewed artificial intelligence-related articles focused on breast imaging. MATERIALS AND METHODS A strategic search was conducted in July 2022 according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to capture all existing and publicly available peer-reviewed articles intersecting AI and breast imaging. Primary outcomes were first and senior authors' gender, which were assigned with the aid of an emailed self-declaration survey. Secondary outcomes included country of article, journal impact factor, and year of publication. Comparisons were made using logistic regression models and analysis of variance. RESULTS 115 studies were included in the analysis. Women represented 35.7% (41/115) and 37.4% (43/115) of first and senior authors, respectively. Logistic regression modelling showed a significant increase in women senior authors over time but no change in women first authors. Impact factor was not associated with women authorship, and in certain countries women authorship exceeded 50%. CONCLUSION This study demonstrates that there is a significant authorship gender gap in artificial intelligence breast imaging research. The increasing temporal trend in women senior authors in breast imaging AI-related research is a promising sign for more women's voices in this field. Further study is needed to understand the reasons behind this gap and any potential implications.
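The temporal-trend analysis described here amounts to a logistic regression of author gender on publication year. A toy sketch with simulated data (the trend, sample, and coefficient are illustrative assumptions, not the study's results):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated data for illustration only: publication year vs. whether the
# senior author is a woman (the real study used self-declared gender).
rng = np.random.default_rng(0)
years = rng.integers(2015, 2023, size=115)
prob = 1 / (1 + np.exp(-0.3 * (years - 2019)))   # rising trend, by construction
senior_is_woman = rng.random(115) < prob

# A positive year coefficient corresponds to increasing odds over time.
model = LogisticRegression().fit(years.reshape(-1, 1), senior_is_woman)
print(f"log-odds change per year: {model.coef_[0][0]:.2f}")
```

In practice the fitted coefficient's p-value or confidence interval, not just its sign, determines whether the trend is significant.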
Affiliation(s)
- Po Hsiang Shawn Yuan: Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Tyler D Yan: Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Sonali Sharma: Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Erin Chahley: Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Luke J MacLean: Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Vivianne Freitas: Department of Medical Imaging, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Charlotte J Yong-Hing: Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada; Department of Diagnostic Imaging, BC Cancer, Vancouver, British Columbia, Canada

13
Leng Y, Kan A, Wang X, Li X, Xiao X, Wang Y, Liu L, Gong L. Contrast-enhanced CT radiomics for preoperative prediction of stage in epithelial ovarian cancer: a multicenter study. BMC Cancer 2024; 24:307. [PMID: 38448945 PMCID: PMC10916071 DOI: 10.1186/s12885-024-12037-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2023] [Accepted: 02/21/2024] [Indexed: 03/08/2024] Open
Abstract
BACKGROUND Preoperative prediction of International Federation of Gynecology and Obstetrics (FIGO) stage in patients with epithelial ovarian cancer (EOC) is crucial for determining the appropriate treatment strategy. This study aimed to explore the value of contrast-enhanced CT (CECT) radiomics in predicting preoperative FIGO staging of EOC, and to validate the stability of the model on an independent external dataset. METHODS A total of 201 EOC patients from three centers were divided into a training cohort (n = 106) and internal (n = 46) and external (n = 49) validation cohorts. The least absolute shrinkage and selection operator (LASSO) regression algorithm was used for screening radiomics features. Five machine learning algorithms, namely logistic regression, support vector machine, random forest, light gradient boosting machine (LightGBM), and decision tree, were utilized in developing the radiomics model. The best-performing algorithm was selected to establish the radiomics model, the clinical model, and the combined model. The diagnostic performances of the models were evaluated through receiver operating characteristic analysis, and comparisons of the areas under the curve (AUCs) were conducted using the DeLong test or F-test. RESULTS Seven optimal radiomics features were retained by the LASSO algorithm. Among the five radiomics models, the LightGBM model exhibited notable prediction efficiency and robustness, as evidenced by AUCs of 0.83 in the training cohort, 0.80 in the internal validation cohort, and 0.68 in the external validation cohort. Multivariate logistic regression analysis identified cancer antigen 125 and tumor location as independent predictors of the FIGO staging of EOC. The combined model exhibited the best diagnostic efficiency, with AUCs of 0.95 in the training cohort, 0.83 in the internal validation cohort, and 0.79 in the external validation cohort. The F-test indicated that the combined model achieved a significantly superior AUC compared with the radiomics model in the training cohort (P < 0.001). CONCLUSIONS The combined model integrating clinical characteristics and radiomics features shows potential as a non-invasive adjunctive diagnostic modality for preoperative evaluation of FIGO staging in EOC, thereby facilitating clinical decision-making and enhancing patient outcomes.
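The LASSO-screening-then-classifier pipeline described above can be sketched as follows. The data are synthetic, and scikit-learn's GradientBoostingClassifier stands in for LightGBM to keep the sketch dependency-free, so this is an assumption-laden illustration rather than the authors' code:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for radiomics features (patients x features).
X, y = make_classification(n_samples=201, n_features=100,
                           n_informative=7, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# LASSO-based feature screening, as in the study's methods.
selector = SelectFromModel(LassoCV(cv=5, random_state=0)).fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# Gradient boosting classifier on the retained features.
clf = GradientBoostingClassifier(random_state=0).fit(X_train_sel, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test_sel)[:, 1])
print(f"validation AUC: {auc:.2f}")
```

The study additionally fused clinical predictors with the radiomics score into a combined model; that step would simply append the clinical variables to the selected feature matrix before fitting.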
Affiliation(s)
- Yinping Leng: Department of Radiology, the Second Affiliated Hospital of Nanchang University, Minde Road No. 1, 330006, Nanchang, Jiangxi Province, China
- Ao Kan: Department of Radiology, the Second Affiliated Hospital of Nanchang University, Minde Road No. 1, 330006, Nanchang, Jiangxi Province, China
- Xiwen Wang: Department of Radiology, the Second Affiliated Hospital of Nanchang University, Minde Road No. 1, 330006, Nanchang, Jiangxi Province, China
- Xiaofen Li: Department of Radiology, Jiangxi Provincial People's Hospital, Nanchang, Jiangxi, China
- Xuan Xiao: Department of Radiology, the Second Affiliated Hospital of Nanchang University, Minde Road No. 1, 330006, Nanchang, Jiangxi Province, China
- Yu Wang: Clinical and Technical Support, Philips Healthcare, Shanghai, China
- Lan Liu: Department of Radiology, Jiangxi Cancer Hospital, Nanchang, Jiangxi, China
- Lianggeng Gong: Department of Radiology, the Second Affiliated Hospital of Nanchang University, Minde Road No. 1, 330006, Nanchang, Jiangxi Province, China

14
Salehi MA, Harandi H, Mohammadi S, Shahrabi Farahani M, Shojaei S, Saleh RR. Diagnostic Performance of Artificial Intelligence in Detection of Hepatocellular Carcinoma: A Meta-analysis. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024:10.1007/s10278-024-01058-1. [PMID: 38438694 DOI: 10.1007/s10278-024-01058-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Revised: 02/18/2024] [Accepted: 02/19/2024] [Indexed: 03/06/2024]
Abstract
Due to the increasing interest in the use of artificial intelligence (AI) algorithms in hepatocellular carcinoma (HCC) detection, we performed a systematic review and meta-analysis to pool the data on diagnostic performance metrics of AI and to compare them with clinicians' performance. A search in PubMed and Scopus was performed in January 2024 to find studies that evaluated and/or validated an AI algorithm for the detection of HCC. We performed a meta-analysis to pool the data on the metrics of diagnostic performance. Subgroup analysis based on imaging modality and meta-regression based on multiple parameters were performed to find potential sources of heterogeneity. The risk of bias was assessed using the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) and Prediction model Risk Of Bias ASsessment Tool (PROBAST) guidelines. Out of 3177 studies screened, 44 eligible studies were included. The pooled sensitivity and specificity for internally validated AI algorithms were 84% (95% CI: 81,87) and 92% (95% CI: 90,94), respectively. Externally validated AI algorithms had a pooled sensitivity of 85% (95% CI: 78,89) and specificity of 84% (95% CI: 72,91). Clinicians evaluated on internal validation data had a pooled sensitivity of 70% (95% CI: 60,78) and a pooled specificity of 85% (95% CI: 77,90). This study implies that AI can serve as a diagnostic supplement for clinicians and radiologists by screening images and highlighting regions of interest, thus improving workflow.
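Pooling per-study sensitivities like those reported above is commonly done on the logit scale with inverse-variance weights. A minimal fixed-effect sketch (the per-study values and sizes are hypothetical, and real meta-analyses typically use random-effects or bivariate models):

```python
import math

def pool_logit(props, sizes):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale."""
    logits = [math.log(p / (1 - p)) for p in props]
    # Delta-method variance of a logit-transformed proportion is 1/(n*p*(1-p)),
    # so the inverse-variance weight is n*p*(1-p).
    weights = [n * p * (1 - p) for p, n in zip(props, sizes)]
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))   # back-transform to a proportion

# Hypothetical per-study sensitivities and sample sizes (not the review's data).
print(f"pooled sensitivity: {pool_logit([0.84, 0.80, 0.88], [120, 80, 200]):.3f}")
```

The pooled value necessarily lies between the smallest and largest study estimates, pulled toward the larger studies.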
Affiliation(s)
- Hamid Harandi: School of Medicine, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Antibiotic Stewardship and Antimicrobial Resistance, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
- Soheil Mohammadi: School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Shayan Shojaei: School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Ramy R Saleh: Department of Oncology, McGill University, Montreal, QC, H3A 0G4, Canada; Division of Medical Oncology, McGill University Health Centre, Montreal, QC, H4A 3J1, Canada

15
Liu Z, Zhang L, Wu Z, Yu X, Cao C, Dai H, Liu N, Liu J, Liu W, Li Q, Shen D, Li X, Zhu D, Liu T. Surviving ChatGPT in healthcare. FRONTIERS IN RADIOLOGY 2024; 3:1224682. [PMID: 38464946 PMCID: PMC10920216 DOI: 10.3389/fradi.2023.1224682] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Accepted: 07/25/2023] [Indexed: 03/12/2024]
Abstract
At the dawn of Artificial General Intelligence (AGI), the emergence of large language models such as ChatGPT shows promise in revolutionizing healthcare by improving patient care, expanding medical access, and optimizing clinical processes. However, their integration into healthcare systems requires careful consideration of potential risks, such as inaccurate medical advice, patient privacy violations, the creation of falsified documents or images, overreliance on AGI in medical education, and the perpetuation of biases. It is crucial to implement proper oversight and regulation to address these risks, ensuring the safe and effective incorporation of AGI technologies into healthcare systems. By acknowledging and mitigating these challenges, AGI can be harnessed to enhance patient care, medical knowledge, and healthcare processes, ultimately benefiting society as a whole.
Affiliation(s)
- Zhengliang Liu: School of Computing, University of Georgia, Athens, GA, United States
- Lu Zhang: Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, United States
- Zihao Wu: School of Computing, University of Georgia, Athens, GA, United States
- Xiaowei Yu: Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, United States
- Chao Cao: Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, United States
- Haixing Dai: School of Computing, University of Georgia, Athens, GA, United States
- Ninghao Liu: School of Computing, University of Georgia, Athens, GA, United States
- Jun Liu: Department of Radiology, Second Xiangya Hospital, Changsha, Hunan, China
- Wei Liu: Department of Radiation Oncology, Mayo Clinic, Scottsdale, AZ, United States
- Quanzheng Li: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanhai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
- Xiang Li: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Dajiang Zhu: Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, United States
- Tianming Liu: School of Computing, University of Georgia, Athens, GA, United States

16
Boverhof BJ, Redekop WK, Bos D, Starmans MPA, Birch J, Rockall A, Visser JJ. Radiology AI Deployment and Assessment Rubric (RADAR) to bring value-based AI into radiological practice. Insights Imaging 2024; 15:34. [PMID: 38315288 PMCID: PMC10844175 DOI: 10.1186/s13244-023-01599-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Accepted: 11/14/2023] [Indexed: 02/07/2024] Open
Abstract
OBJECTIVE To provide a comprehensive framework for value assessment of artificial intelligence (AI) in radiology. METHODS This paper presents the RADAR framework, which has been adapted from Fryback and Thornbury's imaging efficacy framework to facilitate the valuation of radiology AI from conception to local implementation. Local efficacy has been newly introduced to underscore the importance of appraising an AI technology within its local environment. Furthermore, the RADAR framework is illustrated through a myriad of study designs that help assess value. RESULTS RADAR presents a seven-level hierarchy, providing radiologists, researchers, and policymakers with a structured approach to the comprehensive assessment of value in radiology AI. RADAR is designed to be dynamic and to meet different valuation needs throughout the AI's lifecycle. Initial phases such as technical and diagnostic efficacy (RADAR-1 and RADAR-2) are assessed before clinical deployment via in silico clinical trials and cross-sectional studies. Subsequent stages, spanning from diagnostic thinking to patient outcome efficacy (RADAR-3 to RADAR-5), require clinical integration and are explored via randomized controlled trials and cohort studies. Cost-effectiveness efficacy (RADAR-6) takes a societal perspective on financial feasibility, addressed via health-economic evaluations. The final level, RADAR-7, determines how prior valuations translate locally, evaluated through budget impact analysis, multi-criteria decision analyses, and prospective monitoring. CONCLUSION The RADAR framework offers a comprehensive approach to valuing radiology AI. Its layered, hierarchical structure, combined with a focus on local relevance, aligns RADAR seamlessly with the principles of value-based radiology. CRITICAL RELEVANCE STATEMENT The RADAR framework advances artificial intelligence in radiology by delineating a much-needed framework for comprehensive valuation. KEY POINTS • Radiology artificial intelligence lacks a comprehensive approach to value assessment. • The RADAR framework provides a dynamic, hierarchical method for thorough valuation of radiology AI. • RADAR advances clinical radiology by bridging the artificial intelligence implementation gap.
Affiliation(s)
- Bart-Jan Boverhof: Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- W Ken Redekop: Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Daniel Bos: Department of Epidemiology, Erasmus University Medical Centre, Rotterdam, The Netherlands; Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Martijn P A Starmans: Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Andrea Rockall: Department of Surgery & Cancer, Imperial College London, London, UK
- Jacob J Visser: Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands

17
Lombi L, Rossero E. How artificial intelligence is reshaping the autonomy and boundary work of radiologists. A qualitative study. SOCIOLOGY OF HEALTH & ILLNESS 2024; 46:200-218. [PMID: 37573551 DOI: 10.1111/1467-9566.13702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Accepted: 07/19/2023] [Indexed: 08/15/2023]
Abstract
The application of artificial intelligence (AI) in medical practice is spreading, especially in technologically dense fields such as radiology, which could consequently undergo profound transformations in the near future. This article aims to qualitatively explore the potential influence of AI technologies on the professional identity of radiologists. Drawing on 12 in-depth interviews with a subgroup of radiologists who participated in a larger study, this article investigated (1) whether radiologists perceived AI as a threat to their decision-making autonomy; and (2) how radiologists perceived the future of their profession compared to other health-care professions. The findings revealed that while AI did not generally affect radiologists' decision-making autonomy, it threatened their professional and epistemic authority. Two discursive strategies were identified to explain these findings. The first strategy emphasised radiologists' specific expertise and knowledge that extends beyond interpreting images, a task performed with high accuracy by AI machines. The second strategy underscored the fostering of radiologists' professional prestige through developing expertise in using AI technologies, a skill that would distinguish them from other clinicians who did not possess this knowledge. This study identifies AI machines as status objects and useful tools in performing boundary work in and around the radiological profession.
Affiliation(s)
- Linda Lombi: Department of Sociology, Università Cattolica del Sacro Cuore, Milan, Italy
- Eleonora Rossero: Fundamental Rights Laboratory, Collegio Carlo Alberto, Turin, Italy

18
Feng L, Zhang Y, Wei W, Qiu H, Shi M. Applying deep learning to recognize the properties of vitreous opacity in ophthalmic ultrasound images. Eye (Lond) 2024; 38:380-385. [PMID: 37596401 PMCID: PMC10810903 DOI: 10.1038/s41433-023-02705-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2023] [Revised: 07/20/2023] [Accepted: 08/09/2023] [Indexed: 08/20/2023] Open
Abstract
BACKGROUND To explore the feasibility of using artificial intelligence technology based on deep learning to automatically recognize the properties of vitreous opacities in ophthalmic ultrasound images. METHODS A total of 2000 greyscale Doppler ultrasound images containing non-pathological eyes and three typical vitreous opacities confirmed as physiological vitreous opacity (VO), asteroid hyalosis (AH), and vitreous haemorrhage (VH) were selected and labelled for each lesion type. Five residual networks (ResNet) and two GoogLeNet models were trained to recognize vitreous lesions. Seventy-five percent of the images were randomly selected as the training set, and the remaining 25% were used as the test set. The accuracy and parameter counts were recorded and compared among these seven deep learning (DL) models. The precision, recall, F1 score, and area under the receiver operating characteristic curve (AUC) values for recognizing vitreous lesions were calculated for the most accurate DL model. RESULTS The seven DL models differed significantly in accuracy and parameter counts. GoogLeNet Inception V1 achieved the highest accuracy (95.5%) with the fewest parameters (10,315,580) in vitreous lesion recognition. GoogLeNet Inception V1 achieved precision values of 0.94, 0.94, 0.96, and 0.96, recall values of 0.94, 0.93, 0.97 and 0.98, and F1 scores of 0.94, 0.93, 0.96 and 0.97 for normal, VO, AH, and VH recognition, respectively. The AUC values for these four vitreous lesion types were 0.99, 1.0, 0.99, and 0.99, respectively. CONCLUSIONS GoogLeNet Inception V1 has shown promising results in ophthalmic ultrasound image recognition. With increasing ultrasound image data, a wide variety of diagnostic information on eye diseases can be detected automatically by artificial intelligence technology based on deep learning.
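The per-class precision, recall, and F1 scores reported here are standard multi-class metrics. A minimal sketch of how such values are computed, using invented labels for the four classes rather than the study's predictions:

```python
from sklearn.metrics import precision_recall_fscore_support

# Toy labels for a four-class vitreous-lesion problem
# (0 = normal, 1 = VO, 2 = AH, 3 = VH); values are invented.
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 3, 0]
y_pred = [0, 0, 1, 2, 2, 2, 3, 3, 1, 0]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=[0, 1, 2, 3], zero_division=0)
for name, p, r, f in zip(["normal", "VO", "AH", "VH"], precision, recall, f1):
    print(f"{name}: precision={p:.2f} recall={r:.2f} F1={f:.2f}")
```

For each class, precision counts how many predicted cases were correct, recall counts how many true cases were found, and F1 is their harmonic mean.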
Affiliation(s)
- Li Feng: Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China
- Wei Wei: Hebei Eye Hospital, Xingtai, China
- Hui Qiu: Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China
- Mingyu Shi: Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China

19
Sandeep B, Liu X, Huang X, Wang X, Mao L, Xiao Z. Feasibility of artificial intelligence its current status, clinical applications, and future direction in cardiovascular disease. Curr Probl Cardiol 2024; 49:102349. [PMID: 38103818 DOI: 10.1016/j.cpcardiol.2023.102349] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2023] [Accepted: 12/13/2023] [Indexed: 12/19/2023]
Abstract
In routine clinical practice, the diagnosis and treatment of cardiovascular disease (CVD) rely on data in a variety of formats, comprising invasive angiography, laboratory data, non-invasive imaging diagnostics, and patient history. Artificial intelligence (AI) is a field of computer science that aims to mimic human thought processes, learning capacity, and knowledge storage. In cardiovascular medicine, AI algorithms have been used to discover novel genotypes and phenotypes in established diseases, enhance patient care, improve cost-effectiveness, and lower readmission and mortality rates. AI will lead to a paradigm change toward precision cardiovascular medicine in the near future. The promise of AI applications in cardiovascular medicine is immense; however, failure to recognize the challenges may overshadow its potential clinical impact. AI can facilitate every stage of the cardiac imaging process, from acquisition and reconstruction to segmentation, measurement, interpretation, and subsequent clinical pathways. Along with new possibilities, new threats arise; acknowledging and understanding them is as important as understanding the machine learning (ML) methodology itself. Therefore, attention is also paid to current opinions and guidelines regarding the validation and safety of AI. This paper provides an outline for clinicians of relevant aspects of AI and machine learning, surveys the applications and methods adopted in cardiology to date, and identifies how cardiovascular medicine could incorporate AI in the future. With progress continuing in this emerging technology, the impact on cardiovascular medicine is highlighted to provide insight for the practicing clinician and to identify potential patient benefits.
Affiliation(s)
- Bhushan Sandeep: Department of Cardio-Thoracic Surgery, Chengdu Second People's Hospital, Chengdu, Sichuan 610017, China
- Xian Liu: Department of Cardio-Thoracic Surgery, Chengdu Second People's Hospital, Chengdu, Sichuan 610017, China
- Xin Huang: Department of Anesthesiology, West China Hospital of Medicine, Sichuan University, Chengdu, Sichuan 610017, China
- Xiaowei Wang: Department of Cardio-Thoracic Surgery, Chengdu Second People's Hospital, Chengdu, Sichuan 610017, China
- Long Mao: Department of Cardio-Thoracic Surgery, Chengdu Second People's Hospital, Chengdu, Sichuan 610017, China
- Zongwei Xiao: Department of Cardio-Thoracic Surgery, Chengdu Second People's Hospital, Chengdu, Sichuan 610017, China

20
Takeshita WM, Silva TP, de Souza LLT, Tenorio JM. State of the art and prospects for artificial intelligence in orthognathic surgery: A systematic review with meta-analysis. JOURNAL OF STOMATOLOGY, ORAL AND MAXILLOFACIAL SURGERY 2024; 125:101787. [PMID: 38302057 DOI: 10.1016/j.jormas.2024.101787] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Revised: 01/19/2024] [Accepted: 01/25/2024] [Indexed: 02/03/2024]
Abstract
OBJECTIVE To present a systematic review of the state of the art regarding clinical applications, main features, and outcomes of artificial intelligence (AI) in orthognathic surgery. METHODS The PICOS strategy was applied in a systematic review (SR) to answer the following question: "What are the state of the art, characteristics and outcomes of applications of artificial intelligence in orthognathic surgery?" After registration in PROSPERO (CRD42021270789), a systematic search was performed in the following databases: PubMed (including MedLine), Scopus, Embase, LILACS, MEDLINE EBSCOHOST and Cochrane Library. A total of 195 studies were retrieved; after screening titles and abstracts, thirteen manuscripts were included in the qualitative analysis and six in the quantitative analysis. The treatment effects were plotted in a forest plot. The JBI questionnaire for observational studies was used to assess the risk of bias. The quality of the SR evidence was assessed using the GRADE tool. RESULTS For AI studies on 2D cephalometry for orthognathic surgery, Tau² = 0.00, Chi² = 3.78, p = 1.00, and I² = 0%, indicating low heterogeneity; AI did not differ statistically from the control (p = 0.79). AI studies on the diagnostic decision of whether or not to perform orthognathic surgery showed heterogeneity, and therefore meta-analysis was not performed. CONCLUSION The outcome of AI is similar to that of the control group, with a low degree of bias, highlighting its potential for use in various applications.
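The I² of 0% quoted above follows directly from the reported Chi² (Cochran's Q) and its degrees of freedom via Higgins' formula; a minimal sketch (df = 5 is an assumption based on the six studies pooled, not stated in the abstract):

```python
def i_squared(q: float, df: int) -> float:
    """Higgins' I²: percentage of total variation due to heterogeneity."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# With the reported Chi² = 3.78 and an assumed df = 5 (six pooled studies),
# Q is below df, so I² floors at zero, matching the review's I² of 0%.
print(f"I² = {i_squared(3.78, 5):.0f}%")
```

Whenever Q is less than its degrees of freedom, the observed spread is no larger than expected by chance alone, and I² is reported as 0%.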
Affiliation(s)
- Wilton Mitsunari Takeshita: Department of Diagnosis and Surgery, São Paulo State University (Unesp), School of Dentistry, Araçatuba, 16015-050 Araçatuba, São Paulo, Brazil
- Thaísa Pinheiro Silva: Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), 13414-903 Piracicaba, Sao Paulo, Brazil
- Josceli Maria Tenorio: Department of Information technology and health, Federal Institute of São Paulo, 01109-010 São Paulo, São Paulo, Brazil

21
Pitarch C, Ungan G, Julià-Sapé M, Vellido A. Advances in the Use of Deep Learning for the Analysis of Magnetic Resonance Image in Neuro-Oncology. Cancers (Basel) 2024; 16:300. [PMID: 38254790 PMCID: PMC10814384 DOI: 10.3390/cancers16020300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2023] [Revised: 12/28/2023] [Accepted: 01/08/2024] [Indexed: 01/24/2024] Open
Abstract
Machine Learning is entering a phase of maturity, but its medical applications still lag behind in terms of practical use. The field of oncological radiology (and neuro-oncology in particular) is at the forefront of these developments, now boosted by the success of Deep-Learning methods for the analysis of medical images. This paper reviews in detail some of the most recent advances in the use of Deep Learning in this field, from the broader topic of the development of Machine-Learning-based analytical pipelines to specific instantiations of the use of Deep Learning in neuro-oncology; the latter including its use in the groundbreaking field of ultra-low field magnetic resonance imaging.
Affiliation(s)
- Carla Pitarch: Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain; Eurecat, Digital Health Unit, Technology Centre of Catalonia, 08005 Barcelona, Spain
- Gulnur Ungan: Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Margarida Julià-Sapé: Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Alfredo Vellido: Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain

22
Sumner C, Kietzman A, Kadom N, Frigini A, Makary MS, Martin A, McKnight C, Retrouvey M, Spieler B, Griffith B. Medical Malpractice and Diagnostic Radiology: Challenges and Opportunities. Acad Radiol 2024; 31:233-241. [PMID: 37741730 DOI: 10.1016/j.acra.2023.08.015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 08/10/2023] [Accepted: 08/14/2023] [Indexed: 09/25/2023]
Abstract
Medicolegal challenges in radiology are broad and impact both radiologists and patients. Radiologists may be affected directly by malpractice litigation or indirectly due to defensive imaging ordering practices. Patients also could be harmed physically, emotionally, or financially by unnecessary tests or procedures. As technology advances, the incorporation of artificial intelligence into medicine will bring with it new medicolegal challenges and opportunities. This article reviews the current and emerging direct and indirect effects of medical malpractice on radiologists and summarizes evidence-based solutions.
Affiliation(s)
- Christina Sumner
- Department of Radiology and Imaging Sciences, Emory University (C.S., N.K.), Atlanta, GA
- Nadja Kadom
- Department of Radiology and Imaging Sciences, Emory University (C.S., N.K.), Atlanta, GA
- Alexandre Frigini
- Department of Radiology, Baylor College of Medicine (A.F.), Houston, TX
- Mina S Makary
- Department of Radiology, Ohio State University Wexner Medical Center (M.S.M.), Columbus, OH
- Ardenne Martin
- Louisiana State University Health Sciences Center (A.M.), New Orleans, LA
- Colin McKnight
- Department of Radiology, Vanderbilt University Medical Center (C.M.), Nashville, TN
- Michele Retrouvey
- Department of Radiology, Eastern Virginia Medical School/Medical Center Radiologists (M.R.), Norfolk, VA
- Bradley Spieler
- Department of Radiology, University Medical Center, Louisiana State University Health Sciences Center (B.S.), New Orleans, LA
- Brent Griffith
- Department of Radiology, Henry Ford Health (B.G.), Detroit, MI

23
Panagiotidis E, Papachristou K, Makridou A, Zoglopitou LA, Paschali A, Kalathas T, Chatzimarkou M, Chatzipavlidou V. Review of artificial intelligence clinical applications in Nuclear Medicine. Nucl Med Commun 2024; 45:24-34. [PMID: 37901920] [DOI: 10.1097/mnm.0000000000001786]
Abstract
This paper provides an in-depth analysis of the clinical applications of artificial intelligence (AI) in Nuclear Medicine, focusing on three key areas: neurology, cardiology, and oncology. Beginning with neurology, specifically Alzheimer's disease and Parkinson's disease, the paper examines reviews on diagnosis and treatment planning. The same pattern is followed in cardiology studies. In the final section on oncology, the paper explores the various AI applications in multiple cancer types, including lung, head and neck, lymphoma, and pancreatic cancer.
Affiliation(s)
- Anna Makridou
- Medical Physics Department, Cancer Hospital of Thessaloniki 'Theagenio', Thessaloniki, Greece
- Anna Paschali
- Nuclear Medicine Department, Cancer Hospital of Thessaloniki 'Theagenio' and
- Theodoros Kalathas
- Nuclear Medicine Department, Cancer Hospital of Thessaloniki 'Theagenio' and
- Michael Chatzimarkou
- Medical Physics Department, Cancer Hospital of Thessaloniki 'Theagenio', Thessaloniki, Greece

24
TerKonda SP, TerKonda AA, Sacks JM, Kinney BM, Gurtner GC, Nachbar JM, Reddy SK, Jeffers LL. Artificial Intelligence: Singularity Approaches. Plast Reconstr Surg 2024; 153:204e-217e. [PMID: 37075274] [DOI: 10.1097/prs.0000000000010572]
Abstract
SUMMARY Artificial intelligence (AI) has been a disruptive technology within health care, from the development of simple care algorithms to complex deep-learning models. AI has the potential to reduce the burden of administrative tasks, advance clinical decision-making, and improve patient outcomes. Unlocking the full potential of AI requires the analysis of vast quantities of clinical information. Although AI holds tremendous promise, widespread adoption within plastic surgery remains limited. Understanding the basics is essential for plastic surgeons to evaluate the potential uses of AI. This review provides an introduction to AI, including the history of AI, key concepts, applications of AI in plastic surgery, and future implications.
Affiliation(s)
- Sarvam P TerKonda
- Division of Plastic and Reconstructive Surgery, Mayo Clinic Florida
- Anurag A TerKonda
- Division of Plastic and Reconstructive Surgery, Washington University School of Medicine in St. Louis
- Justin M Sacks
- Division of Plastic and Reconstructive Surgery, Washington University School of Medicine in St. Louis
- Brian M Kinney
- Division of Plastic Surgery, University of Southern California
- Geoff C Gurtner
- Division of Plastic and Reconstructive Surgery, Stanford University

25
Hassankhani A, Amoukhteh M, Valizadeh P, Jannatdoust P, Sabeghi P, Gholamrezanezhad A. Radiology as a Specialty in the Era of Artificial Intelligence: A Systematic Review and Meta-analysis on Medical Students, Radiology Trainees, and Radiologists. Acad Radiol 2024; 31:306-321. [PMID: 37349157] [DOI: 10.1016/j.acra.2023.05.024]
Abstract
RATIONALE AND OBJECTIVES Artificial intelligence (AI) is changing radiology by automating tasks and assisting in abnormality detection; understanding the perceptions of medical students, radiology trainees, and radiologists is vital for preparing them for AI integration in radiology. MATERIALS AND METHODS A systematic review and meta-analysis were conducted following established guidelines. PubMed, Scopus, and Web of Science were searched up to March 5, 2023. Eligible studies reporting outcomes of interest were included, and relevant data were extracted and analyzed using STATA software version 17.0. RESULTS A meta-analysis of 21 studies revealed that 22.36% of individuals were less likely to choose radiology as a career due to concerns about advances in AI. Medical students showed higher rates of concern (31.94%) compared to radiology trainees and radiologists (9.16%) (P < .01). Radiology trainees and radiologists also demonstrated higher basic AI knowledge (71.84% vs 35.38%). Medical students had higher rates of belief that AI poses a threat to the radiology job market (42.66% vs 6.25%, P < .02). The pooled rate of respondents who believed that "AI will revolutionize radiology in the future" was 79.48%, with no significant differences based on participants' positions. The pooled rate of respondents who believed in the integration of AI in medical curricula was 81.75% among radiology trainees and radiologists and 70.23% among medical students. CONCLUSION The study revealed growing concerns regarding the impact of AI in radiology, particularly among medical students, which can be addressed by revamping education, providing direct AI experience, addressing limitations, and emphasizing medico-legal issues to prepare for AI integration in radiology.
Affiliation(s)
- Amir Hassankhani
- Department of Radiology, Keck School of Medicine, University of Southern California (USC), 1441 Eastlake Avenue Ste 2315, Los Angeles, CA 90089 (A.H., M.A., P.V., P.J., P.S., A.G.); Department of Radiology, Mayo Clinic, Rochester, Minnesota (A.H., M.A.)
- Melika Amoukhteh
- Department of Radiology, Keck School of Medicine, University of Southern California (USC), 1441 Eastlake Avenue Ste 2315, Los Angeles, CA 90089 (A.H., M.A., P.V., P.J., P.S., A.G.); Department of Radiology, Mayo Clinic, Rochester, Minnesota (A.H., M.A.)
- Parya Valizadeh
- Department of Radiology, Keck School of Medicine, University of Southern California (USC), 1441 Eastlake Avenue Ste 2315, Los Angeles, CA 90089 (A.H., M.A., P.V., P.J., P.S., A.G.)
- Payam Jannatdoust
- Department of Radiology, Keck School of Medicine, University of Southern California (USC), 1441 Eastlake Avenue Ste 2315, Los Angeles, CA 90089 (A.H., M.A., P.V., P.J., P.S., A.G.)
- Paniz Sabeghi
- Department of Radiology, Keck School of Medicine, University of Southern California (USC), 1441 Eastlake Avenue Ste 2315, Los Angeles, CA 90089 (A.H., M.A., P.V., P.J., P.S., A.G.)
- Ali Gholamrezanezhad
- Department of Radiology, Keck School of Medicine, University of Southern California (USC), 1441 Eastlake Avenue Ste 2315, Los Angeles, CA 90089 (A.H., M.A., P.V., P.J., P.S., A.G.)

26
Vafaei Sadr A, Bülow R, von Stillfried S, Schmitz NEJ, Pilva P, Hölscher DL, Ha PP, Schweiker M, Boor P. Operational greenhouse-gas emissions of deep learning in digital pathology: a modelling study. Lancet Digit Health 2024; 6:e58-e69. [PMID: 37996339] [PMCID: PMC10728828] [DOI: 10.1016/s2589-7500(23)00219-4]
Abstract
BACKGROUND Deep learning is a promising way to improve health care. Image-processing medical disciplines, such as pathology, are expected to be transformed by deep learning. The first clinically applicable deep-learning diagnostic support tools are already available in cancer pathology, and their number is increasing. However, data on the environmental sustainability of these tools are scarce. We aimed to conduct an environmental-sustainability analysis of a theoretical implementation of deep learning in patient-care pathology. METHODS For this modelling study, we first assembled and calculated relevant data and parameters of a digital-pathology workflow. Data were breast and prostate specimens from the university clinic at the Institute of Pathology of the Rheinisch-Westfälische Technische Hochschule Aachen (Aachen, Germany), for which commercially available deep learning was already available. Only specimens collected between Jan 1 and Dec 31, 2019 were used, to omit potential biases due to the COVID-19 pandemic. Our final selection was based on 2 representative weeks outside holidays, covering different types of specimens. To calculate carbon dioxide (CO2) or CO2 equivalent (CO2 eq) emissions of deep learning in pathology, we gathered relevant data for exact numbers and sizes of whole-slide images (WSIs), which were generated by scanning histopathology samples of prostate and breast specimens. We also evaluated different data input scenarios (including all slide tiles, only tiles containing tissue, or only tiles containing regions of interest). To convert estimated energy consumption from kWh to CO2 eq, we used the internet protocol address of the computational server and the Electricity Maps database to obtain information on the sources of the local electricity grid (ie, renewable vs non-renewable), and estimated the number of trees and proportion of the local and world's forests needed to sequester the CO2 eq emissions. 
We calculated the computational requirements and CO2 eq emissions of 30 deep-learning models that varied in task and size. The first scenario represented the use of one commercially available deep-learning model for one task in one case (1-task), the second scenario considered two deep-learning models for two tasks per case (2-task), the third scenario represented a future, potentially automated workflow that could handle 7 tasks per case (7-task), and the fourth scenario represented the use of a single potential, large, computer-vision model that could conduct multiple tasks (multitask). We also compared the performance (ie, accuracy) and CO2 eq emissions of different deep-learning models for the classification of renal cell carcinoma on WSIs, also from Rheinisch-Westfälische Technische Hochschule Aachen. We also tested other approaches to reducing CO2 eq emissions, including model pruning and an alternative method for histopathology analysis (pathomics). FINDINGS The pathology database contained 35 552 specimens (237 179 slides), 6420 of which were prostate specimens (10 115 slides) and 11 801 of which were breast specimens (19 763 slides). We selected and subsequently digitised 140 slides from eight breast-cancer cases and 223 slides from five prostate-cancer cases. Applying large deep-learning models on all WSI tiles of prostate and breast pathology cases would result in yearly CO2 eq emissions of 7·65 metric tons (t; 95% CI 7·62-7·68) with the use of a single deep-learning model per case; yearly CO2 eq emissions were up to 100·56 t (100·21-100·99) with the use of seven deep-learning models per case. CO2 eq emissions for different deep-learning model scenarios, data inputs, and deep-learning model sizes for all slides varied from 3·61 t (3·59-3·63) to 2795·30 t (1177·51-6482·13).
For the estimated number of overall pathology cases worldwide, the yearly CO2 eq emissions varied, reaching up to 16 megatons (Mt) of CO2 eq, requiring up to 86 590 km2 (0·22%) of world forest to sequester the CO2 eq emissions. Use of the 7-task scenario and small deep-learning models on slides containing tissue only could substantially reduce CO2 eq emissions worldwide by up to 141 times (0·1 Mt, 95% CI 0·1-0·1). Considering the local environment in Aachen, Germany, the maximum CO2 eq emission from the use of deep learning in digital pathology only would require 32·8% (95% CI 13·8-76·6) of the local forest to sequester the CO2 eq emissions. A single pathomics run on a tissue could provide information that was comparable to or even better than the output of multitask deep-learning models, but with 147 times reduced CO2 eq emissions. INTERPRETATION Our findings suggest that widespread use of deep learning in pathology might have considerable global-warming potential. The medical community, policy decision makers, and the public should be aware of this potential and encourage the use of CO2 eq emissions reduction strategies where possible. FUNDING German Research Foundation, European Research Council, German Federal Ministry of Education and Research, Health, Economic Affairs and Climate Action, and the Innovation Fund of the Federal Joint Committee.
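The accounting used in the methods above — energy consumption converted to CO2 eq via the local grid's carbon intensity, then to the number of trees needed to sequester it — can be sketched as follows. The constants are illustrative round numbers, not the study's measured values.

```python
# Illustrative sketch of the kWh -> CO2 eq -> trees accounting. Both constants
# below are assumptions for demonstration, not figures from the study.
GRID_INTENSITY_KG_PER_KWH = 0.4   # assumed grid carbon intensity (kg CO2 eq/kWh)
SEQUESTRATION_KG_PER_TREE = 22.0  # assumed yearly uptake of one mature tree (kg)

def co2_eq_kg(energy_kwh: float, intensity: float = GRID_INTENSITY_KG_PER_KWH) -> float:
    """CO2-equivalent emissions (kg) for a given energy consumption."""
    return energy_kwh * intensity

def trees_to_sequester(co2_kg: float, per_tree: float = SEQUESTRATION_KG_PER_TREE) -> float:
    """Number of trees needed to absorb the emissions over one year."""
    return co2_kg / per_tree

# Example: a hypothetical yearly inference workload of 5000 kWh
emissions = co2_eq_kg(5000)            # 5000 * 0.4 = 2000.0 kg CO2 eq
trees = trees_to_sequester(emissions)  # ~91 trees at the assumed uptake
```

With real data, the grid intensity would come from a source such as the Electricity Maps database, as the authors did; everything else is simple multiplication and division.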
Affiliation(s)
- Alireza Vafaei Sadr
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany; Department of Public Health Sciences, College of Medicine, Pennsylvania State University, Hershey, PA, USA
- Roman Bülow
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
- Saskia von Stillfried
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
- Nikolas E J Schmitz
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
- Pourya Pilva
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
- David L Hölscher
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
- Peiman Pilehchi Ha
- Healthy Living Spaces Lab, Institute for Occupational, Social and Environmental Medicine, Medical Faculty, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
- Marcel Schweiker
- Healthy Living Spaces Lab, Institute for Occupational, Social and Environmental Medicine, Medical Faculty, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
- Peter Boor
- Institute of Pathology, University Hospital Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany; Department of Nephrology and Immunology, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany

27
Silva TP, Andrade-Bortoletto MF, Freitas DQ, Oliveira-Santos C, Takeshita WM. Metaverse and oral and maxillofacial radiology: Where do they meet? Eur J Radiol 2024; 170:111210. [PMID: 38101195] [DOI: 10.1016/j.ejrad.2023.111210]
Abstract
Since previous literature regarding the application of the metaverse in education is scarce, the present letter aimed to highlight possible applications, as a complementary tool for the classroom, in the oral and maxillofacial radiology academic experience. The potential risks of the metaverse are also discussed. The metaverse and its possible applications, especially related to enhanced teaching and learning, will become a hot topic in the near future; there will therefore be a challenging learning curve before educators make the most of these innovative educational tools empowered by deeply interactive virtual reality technology.
Affiliation(s)
- Thaísa Pinheiro Silva
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Zip Code 13414-903, Piracicaba, Sao Paulo, Brazil
- Maria Fernanda Andrade-Bortoletto
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Zip Code 13414-903, Piracicaba, Sao Paulo, Brazil
- Deborah Queiroz Freitas
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Zip Code 13414-903, Piracicaba, Sao Paulo, Brazil
- Christiano Oliveira-Santos
- Department of Diagnosis and Oral Health, University of Louisville School of Dentistry, Louisville/KY, USA
- Wilton Mitsunari Takeshita
- Department of Diagnosis and Surgery, Paulista State University Júlio de Mesquita Filho, Zip Code 16015-050 Araçatuba, São Paulo, Brazil

28
Iruvuri AG, Miryala G, Khan Y, Ramalingam NT, Sevugaperumal B, Soman M, Padmanabhan A. Revolutionizing Dental Imaging: A Comprehensive Study on the Integration of Artificial Intelligence in Dental and Maxillofacial Radiology. Cureus 2023; 15:e50292. [PMID: 38205468] [PMCID: PMC10776831] [DOI: 10.7759/cureus.50292]
Abstract
Recent advancements in deep learning and artificial intelligence (AI) have profoundly impacted various fields, including diagnostic imaging. Integrating AI technologies such as deep learning and convolutional neural networks has the potential to drastically improve diagnostic methods in the field of dentistry and maxillofacial radiography. A systematic study that adhered to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards was carried out to examine the efficacy and uses of AI in dentistry and maxillofacial radiography. Incorporating cohort studies, case-control studies, and randomized clinical trials, the study used an interdisciplinary methodology. A thorough search spanning peer-reviewed research papers from 2009 to 2023 was done in databases including MEDLINE/PubMed and EMBASE. The inclusion criteria were original clinical research in English that employed AI models to recognize anatomical components in oral and maxillofacial images, identify anomalies, and diagnose disorders. The review examined numerous studies that used cutting-edge technology to demonstrate how accurate and dependable dental imaging has become. Among the tasks covered by these investigations were age estimation, periapical lesion detection, segmentation of maxillary structures, assessment of dentofacial abnormalities, and segmentation of the mandibular canal. The study revealed important developments in the precise definition of anatomical structures and the identification of diseases. The use of AI technology in dental imaging marks a revolutionary development that will usher in a time of unmatched accuracy and effectiveness. These technologies have not only improved diagnostic accuracy and enabled early disease detection but have also streamlined intricate procedures, significantly enhancing patient outcomes. The symbiotic collaboration between human expertise and machine intelligence promises a future of more sophisticated and empathetic oral healthcare.
Affiliation(s)
- Alekhya G Iruvuri
- General Dentistry, Malla Reddy Dental College for Women, Hyderabad, IND
- Gouthami Miryala
- General Dentistry, SVS Institute of Dental Sciences, Mahabubnagar, IND
- Yusuf Khan
- Orthodontics and Dentofacial Orthopaedics, Diamond Medical Specialists, Taif, SAU
- Bharath Sevugaperumal
- General Dentistry, Rajah Muthiah Dental College and Hospital, Annamalai University, Chidambaram, IND
- Mrunmayee Soman
- Dentistry, Dr. D. Y. Patil Dental College and Hospital, Pune, IND

29
Chen Y, Wu Z, Wang P, Xie L, Yan M, Jiang M, Yang Z, Zheng J, Zhang J, Zhu J. Radiology Residents' Perceptions of Artificial Intelligence: Nationwide Cross-Sectional Survey Study. J Med Internet Res 2023; 25:e48249. [PMID: 37856181] [PMCID: PMC10623237] [DOI: 10.2196/48249]
Abstract
BACKGROUND Artificial intelligence (AI) is transforming various fields, with health care, especially diagnostic specialties such as radiology, being a key but controversial battleground. However, there is limited research systematically examining the response of "human intelligence" to AI. OBJECTIVE This study aims to comprehend radiologists' perceptions regarding AI, including their views on its potential to replace them, its usefulness, and their willingness to accept it. We examine the influence of various factors, encompassing demographic characteristics, working status, psychosocial aspects, personal experience, and contextual factors. METHODS Between December 1, 2020, and April 30, 2021, a cross-sectional survey was completed by 3666 radiology residents in China. We used multivariable logistic regression models to examine factors and associations, reporting odds ratios (ORs) and 95% CIs. RESULTS In summary, radiology residents generally hold a positive attitude toward AI, with 29.90% (1096/3666) agreeing that AI may reduce the demand for radiologists, 72.80% (2669/3666) believing AI improves disease diagnosis, and 78.18% (2866/3666) feeling that radiologists should embrace AI. Several associated factors, including age, gender, education, region, eye strain, working hours, time spent on medical images, resilience, burnout, AI experience, and perceptions of residency support and stress, significantly influence AI attitudes. For instance, burnout symptoms were associated with greater concerns about AI replacement (OR 1.89; P<.001), less favorable views on AI usefulness (OR 0.77; P=.005), and reduced willingness to use AI (OR 0.71; P<.001). Moreover, after adjusting for all other factors, perceived AI replacement (OR 0.81; P<.001) and AI usefulness (OR 5.97; P<.001) were shown to significantly impact the intention to use AI. CONCLUSIONS This study profiles radiology residents who are accepting of AI. 
Our comprehensive findings provide insights for a multidimensional approach to help physicians adapt to AI. Targeted policies, such as digital health care initiatives and medical education, can be developed accordingly.
Affiliation(s)
- Yanhua Chen
- Vanke School of Public Health, Tsinghua University, Beijing, China
- School of Medicine, Tsinghua University, Beijing, China
- Ziye Wu
- Vanke School of Public Health, Tsinghua University, Beijing, China
- Peicheng Wang
- Vanke School of Public Health, Tsinghua University, Beijing, China
- School of Medicine, Tsinghua University, Beijing, China
- Linbo Xie
- Vanke School of Public Health, Tsinghua University, Beijing, China
- School of Medicine, Tsinghua University, Beijing, China
- Mengsha Yan
- Vanke School of Public Health, Tsinghua University, Beijing, China
- Maoqing Jiang
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Zhenghan Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Jianjun Zheng
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jingfeng Zhang
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jiming Zhu
- Vanke School of Public Health, Tsinghua University, Beijing, China
- Institute for Healthy China, Tsinghua University, Beijing, China

30
Botchu R, Iyengar KP. Will ChatGPT Drive Radiology in the Future? Indian J Radiol Imaging 2023; 33:436-437. [PMID: 37811166] [PMCID: PMC10556333] [DOI: 10.1055/s-0043-1769591]
Affiliation(s)
- Rajesh Botchu
- Department of Musculoskeletal Radiology, Royal Orthopaedic Hospital, Birmingham, United Kingdom

31
Nicolson A, Dowling J, Koopman B. Improving chest X-ray report generation by leveraging warm starting. Artif Intell Med 2023; 144:102633. [PMID: 37783533] [DOI: 10.1016/j.artmed.2023.102633]
Abstract
Automatically generating a report from a patient's Chest X-rays (CXRs) is a promising solution to reducing clinical workload and improving patient care. However, current CXR report generators-which are predominantly encoder-to-decoder models-lack the diagnostic accuracy to be deployed in a clinical setting. To improve CXR report generation, we investigate warm starting the encoder and decoder with recent open-source computer vision and natural language processing checkpoints, such as the Vision Transformer (ViT) and PubMedBERT. To this end, each checkpoint is evaluated on the MIMIC-CXR and IU X-ray datasets. Our experimental investigation demonstrates that the Convolutional vision Transformer (CvT) ImageNet-21K and the Distilled Generative Pre-trained Transformer 2 (DistilGPT2) checkpoints are best for warm starting the encoder and decoder, respectively. Compared to the state-of-the-art (M2 Transformer Progressive), CvT2DistilGPT2 attained an improvement of 8.3% for CE F-1, 1.8% for BLEU-4, 1.6% for ROUGE-L, and 1.0% for METEOR. The reports generated by CvT2DistilGPT2 have a higher similarity to radiologist reports than previous approaches. This indicates that leveraging warm starting improves CXR report generation. Code and checkpoints for CvT2DistilGPT2 are available at https://github.com/aehrc/cvt2distilgpt2.
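Warm starting, as used above, means initialising the encoder and decoder from pretrained checkpoints rather than from random weights, keeping every pretrained parameter whose name and shape match the target architecture. A minimal, framework-agnostic sketch of that matching step follows; the parameter names and dictionary layout are hypothetical, not the authors' actual CvT or DistilGPT2 checkpoints (real code would typically use a framework mechanism such as PyTorch's `load_state_dict` with `strict=False`).

```python
# Sketch of warm starting: copy each checkpoint parameter whose name and shape
# match into the model's state; everything else keeps its random initialisation.
# Tensors are stood in for by {"shape": ..., "values": ...} dicts.

def warm_start(model_state: dict, checkpoint_state: dict) -> list:
    """Overwrite matching parameters in model_state; return the names copied."""
    copied = []
    for name, param in checkpoint_state.items():
        if name in model_state and model_state[name]["shape"] == param["shape"]:
            model_state[name]["values"] = param["values"]
            copied.append(name)
    return copied

# Hypothetical model with one parameter that matches the checkpoint and one
# (the decoder head) that must stay randomly initialised.
model = {
    "encoder.embed.weight": {"shape": (4, 2), "values": None},
    "decoder.lm_head.weight": {"shape": (8, 2), "values": None},
}
ckpt = {
    "encoder.embed.weight": {"shape": (4, 2), "values": "pretrained"},
    "encoder.extra.bias": {"shape": (4,), "values": "pretrained"},  # no match
}
copied = warm_start(model, ckpt)  # only "encoder.embed.weight" is copied
```

The paper's contribution is choosing *which* checkpoints to warm start from; the mechanics of transferring the weights are as simple as the matching loop above.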
Affiliation(s)
- Aaron Nicolson
- The Australian e-Health Research Centre, CSIRO Health and Biosecurity, Brisbane, Australia
- Jason Dowling
- The Australian e-Health Research Centre, CSIRO Health and Biosecurity, Brisbane, Australia
- Bevan Koopman
- The Australian e-Health Research Centre, CSIRO Health and Biosecurity, Brisbane, Australia

32
Bonny T, Al Nassan W, Obaideen K, Al Mallahi MN, Mohammad Y, El-damanhoury HM. Contemporary Role and Applications of Artificial Intelligence in Dentistry. F1000Res 2023; 12:1179. [PMID: 37942018] [PMCID: PMC10630586] [DOI: 10.12688/f1000research.140204.1]
Abstract
Artificial intelligence (AI) technologies significantly impact various sectors, including healthcare, engineering, the sciences, and smart cities. AI has the potential to improve the quality of patient care and treatment outcomes while minimizing the risk of human error. As in other sectors, AI is transforming dentistry, where it is used to diagnose dental diseases and provide treatment recommendations. Dental professionals increasingly rely on AI technology to assist in diagnosis, clinical decision-making, treatment planning, and prognosis prediction across ten dental specialties. One of the most significant advantages of AI in dentistry is its ability to analyze vast amounts of data quickly and accurately, providing dental professionals with valuable insights to enhance their decision-making processes. The purpose of this paper is to identify the AI algorithms most frequently used in dentistry and assess how well they perform in terms of diagnosis, clinical decision-making, treatment, and prognosis prediction in ten dental specialties: dental public health, endodontics, oral and maxillofacial surgery, oral medicine and pathology, oral and maxillofacial radiology, orthodontics and dentofacial orthopedics, pediatric dentistry, periodontics, prosthodontics, and digital dentistry in general. We also present the pros and cons of using AI in each specialty, as well as the limitations of AI in dentistry that make it incapable of replacing dental personnel; dentists should consider AI a complementary benefit, not a threat.
Affiliation(s)
- Talal Bonny
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Wafaa Al Nassan
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Khaled Obaideen
- Sustainable Energy and Power Systems Research Centre, RISE, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Maryam Nooman Al Mallahi
- Department of Mechanical and Aerospace Engineering, United Arab Emirates University, Al Ain City, Abu Dhabi, 27272, United Arab Emirates
- Yara Mohammad
- College of Engineering and Information Technology, Ajman University, Ajman, United Arab Emirates
- Hatem M. El-damanhoury
- Department of Preventive and Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, 27272, United Arab Emirates

33
Yoon AP, Chung WT, Wang CW, Kuo CF, Lin C, Chung KC. Can a Deep Learning Algorithm Improve Detection of Occult Scaphoid Fractures in Plain Radiographs? A Clinical Validation Study. Clin Orthop Relat Res 2023; 481:1828-1835. [PMID: 36881548] [PMCID: PMC10427075] [DOI: 10.1097/corr.0000000000002612]
Abstract
BACKGROUND Occult scaphoid fractures on initial radiographs of an injury are a diagnostic challenge to physicians. Although artificial intelligence models based on the principles of deep convolutional neural networks (CNN) offer a potential method of detection, it is unknown how such models perform in the clinical setting. QUESTIONS/PURPOSES (1) Does CNN-assisted image interpretation improve interobserver agreement for scaphoid fractures? (2) What is the sensitivity and specificity of image interpretation performed with and without CNN assistance (as stratified by type: normal scaphoid, occult fracture, and apparent fracture)? (3) Does CNN assistance improve time to diagnosis and physician confidence level? METHODS This survey-based experiment presented 15 scaphoid radiographs (five normal, five apparent fractures, and five occult fractures) with and without CNN assistance to physicians in a variety of practice settings across the United States and Taiwan. Occult fractures were identified by follow-up CT scans or MRI. Participants met the following criteria: Postgraduate Year 3 or above resident physician in plastic surgery, orthopaedic surgery, or emergency medicine; hand fellows; and attending physicians. Among the 176 invited participants, 120 completed the survey and met the inclusion criteria. Of the participants, 31% (37 of 120) were fellowship-trained hand surgeons, 43% (52 of 120) were plastic surgeons, and 69% (83 of 120) were attending physicians. Most participants (73% [88 of 120]) worked in academic centers, whereas the remainder worked in large, urban private practice hospitals. Recruitment occurred between February 2022 and March 2022. Radiographs with CNN assistance were accompanied by predictions of fracture presence and gradient-weighted class activation mapping of the predicted fracture site. Sensitivity and specificity of the CNN-assisted physician diagnoses were calculated to assess diagnostic performance. 
We calculated interobserver agreement with the Gwet agreement coefficient (AC1). Physician diagnostic confidence was estimated using a self-assessment Likert scale, and the time to arrive at a diagnosis for each case was measured. RESULTS Interobserver agreement among physicians for occult scaphoid radiographs was higher with CNN assistance than without (AC1 0.42 [95% CI 0.17 to 0.68] versus 0.06 [95% CI 0.00 to 0.17], respectively). No clinically relevant differences were observed in time to arrive at a diagnosis (18 ± 12 seconds versus 30 ± 27 seconds, mean difference 12 seconds [95% CI 6 to 17]; p < 0.001) or diagnostic confidence levels (7.2 ± 1.7 versus 6.2 ± 1.6 Likert points; mean difference 1 point [95% CI 0.5 to 1.3]; p < 0.001) for occult fractures. CONCLUSION CNN assistance improves physician diagnostic sensitivity and specificity as well as interobserver agreement for the diagnosis of occult scaphoid fractures. The differences observed in diagnostic speed and confidence are likely not clinically relevant. Despite these improvements in the clinical diagnosis of scaphoid fractures with the CNN, it is unknown whether the development and implementation of such models is cost effective. LEVEL OF EVIDENCE Level II, diagnostic study.
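For readers unfamiliar with the Gwet agreement coefficient reported above: for two raters and binary ratings, AC1 has a simple closed form. A minimal sketch of that arithmetic (illustrative only, not the authors' code; `gwet_ac1` is a hypothetical helper name):

```python
def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 agreement coefficient for two raters and binary ratings
    (e.g. fracture present = 1, absent = 0). Illustrative sketch only."""
    n = len(ratings_a)
    # observed agreement: fraction of cases on which the two raters agree
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement under Gwet's model, from the mean positive prevalence
    q = (sum(ratings_a) + sum(ratings_b)) / (2 * n)
    pe = 2 * q * (1 - q)
    return (pa - pe) / (1 - pe)
```

Unlike Cohen's kappa, AC1 remains stable under highly skewed prevalence, which matters for rare findings such as occult fractures.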
Affiliation(s)
- Alfred P. Yoon: Section of Plastic Surgery, Department of Surgery, University of Michigan Medical School, Ann Arbor, MI, USA
- William T. Chung: Section of Plastic Surgery, Department of Surgery, University of Michigan Medical School, Ann Arbor, MI, USA
- Chien-Wei Wang: Section of Plastic Surgery, Department of Surgery, University of Michigan Medical School, Ann Arbor, MI, USA
- Chang-Fu Kuo: Center for Artificial Intelligence in Medicine, Chang Gung Memorial Hospital, Taipei, Taiwan
- Chihung Lin: Center for Artificial Intelligence in Medicine, Chang Gung Memorial Hospital, Taipei, Taiwan
- Kevin C. Chung: Section of Plastic Surgery, Department of Surgery, University of Michigan Medical School, Ann Arbor, MI, USA
34
Yearley AG, Goedmakers CMW, Panahi A, Doucette J, Rana A, Ranganathan K, Smith TR. FDA-approved machine learning algorithms in neuroradiology: A systematic review of the current evidence for approval. Artif Intell Med 2023; 143:102607. [PMID: 37673576 DOI: 10.1016/j.artmed.2023.102607] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Received: 10/13/2022] [Revised: 05/30/2023] [Accepted: 06/05/2023] [Indexed: 09/08/2023]
Abstract
Over the past decade, machine learning (ML) and artificial intelligence (AI) have become increasingly prevalent in the medical field. In the United States, the Food and Drug Administration (FDA) is responsible for regulating AI algorithms as "medical devices" to ensure patient safety. However, recent work has shown that the FDA approval process may be deficient. In this study, we evaluate the evidence supporting FDA-approved neuroalgorithms, the subset of machine learning algorithms with applications in the central nervous system (CNS), through a systematic review of the primary literature. Articles covering the 53 FDA-approved algorithms with applications in the CNS published in PubMed, EMBASE, Google Scholar and Scopus between database inception and January 25, 2022 were queried. Initial searches identified 1505 studies, of which 92 articles met the criteria for extraction and inclusion. Studies were identified for 26 of the 53 neuroalgorithms, of which 10 algorithms had only a single peer-reviewed publication. Performance metrics were available for 15 algorithms, external validation studies were available for 24 algorithms, and studies exploring the use of algorithms in clinical practice were available for 7 algorithms. Papers studying the clinical utility of these algorithms focused on three domains: workflow efficiency, cost savings, and clinical outcomes. Our analysis suggests that there is a meaningful gap between the FDA approval of machine learning algorithms and their clinical utilization. There appears to be room for process improvement by implementation of the following recommendations: the provision of compelling evidence that algorithms perform as intended, mandating minimum sample sizes, reporting of a predefined set of performance metrics for all algorithms and clinical application of algorithms prior to widespread use. This work will serve as a baseline for future research into the ideal regulatory framework for AI applications worldwide.
Affiliation(s)
- Alexander G Yearley: Harvard Medical School, 25 Shattuck St, Boston, MA 02115, USA; Computational Neuroscience Outcomes Center (CNOC), Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115, USA
- Caroline M W Goedmakers: Computational Neuroscience Outcomes Center (CNOC), Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115, USA; Department of Neurosurgery, Leiden University Medical Center, Albinusdreef 2, 2333 ZA Leiden, Netherlands
- Armon Panahi: The George Washington University School of Medicine and Health Sciences, 2300 I St NW, Washington, DC 20052, USA
- Joanne Doucette: Computational Neuroscience Outcomes Center (CNOC), Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115, USA; School of Pharmacy, MCPHS University, 179 Longwood Ave, Boston, MA 02115, USA
- Aakanksha Rana: Computational Neuroscience Outcomes Center (CNOC), Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115, USA; Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
- Kavitha Ranganathan: Division of Plastic Surgery, Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115, USA
- Timothy R Smith: Harvard Medical School, 25 Shattuck St, Boston, MA 02115, USA; Computational Neuroscience Outcomes Center (CNOC), Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115, USA
35
Altukroni A, Alsaeedi A, Gonzalez-Losada C, Lee JH, Alabudh M, Mirah M, El-Amri S, Ezz El-Deen O. Detection of the pathological exposure of pulp using an artificial intelligence tool: a multicentric study over periapical radiographs. BMC Oral Health 2023; 23:553. [PMID: 37563659 PMCID: PMC10416487 DOI: 10.1186/s12903-023-03251-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/26/2023] [Accepted: 07/25/2023] [Indexed: 08/12/2023] Open
Abstract
BACKGROUND Introducing artificial intelligence (AI) into the medical field has proved beneficial in automating tasks and streamlining practitioners' workflows. Hence, this study was conducted to design and evaluate an AI tool called Make Sure Caries Detector and Classifier (MSc) for detecting pathological exposure of pulp on digital periapical radiographs and to compare its performance with that of dentists. METHODS This was a diagnostic, multicentric study with 3461 digital periapical radiographs from three countries and seven centers. MSc was built using the YOLOv5-x model and used for exposed and unexposed pulp detection. The dataset was split into training, validation, and test sets in an 8:1:1 ratio to prevent overfitting; 345 images with 752 labels were randomly allocated to test MSc. The performance metrics used to test MSc included mean average precision (mAP), precision, F1 score, recall, and area under the receiver operating characteristic curve (AUC). The metrics used to compare its performance with that of 10 certified dentists were: right diagnosis exposed (RDE), right diagnosis not exposed (RDNE), false diagnosis exposed (FDE), false diagnosis not exposed (FDNE), missed diagnosis (MD), and overdiagnosis (OD). RESULTS MSc achieved a performance of more than 90% on all metrics examined: an average precision of 0.928, recall of 0.918, F1-score of 0.922, and AUC of 0.956 (P<.05). The results showed a higher mean of 1.94 for all right (correct) diagnosis parameters in the MSc group and a higher mean of 0.64 for all wrong diagnosis parameters in the dentists group (P<.05). CONCLUSIONS The MSc tool proved reliable in detecting and differentiating between exposed and unexposed pulp in the internally validated model. It also showed better performance in detecting exposed and unexposed pulp than the 10 dentists' consensus.
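The AUC reported for MSc has a useful rank interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A minimal sketch of that computation (illustrative only, not the study's code; `roc_auc` is a hypothetical helper name):

```python
def roc_auc(labels, scores):
    """AUC computed as the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # compare every positive score against every negative score
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.956, as reported above, therefore means roughly 96 of 100 such random pairs are ranked correctly by the model.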
Affiliation(s)
- A Alsaeedi: Department of Computer Science, College of Computer Science and Engineering, Taibah University, Medina, Saudi Arabia
- C Gonzalez-Losada: School of Dentistry, Complutense University of Madrid, Madrid, Spain
- J H Lee: Department of Periodontology, College of Dentistry and Institute of Oral Bioscience, Jeonbuk National University, Jeonju, Korea
- M Alabudh: Ministry of Health, Medina, Saudi Arabia
- M Mirah: Department of Dental Materials, Taibah University, Medina, Saudi Arabia
36
Kiełczykowski M, Kamiński K, Perkowski K, Zadurska M, Czochrowska E. Application of Artificial Intelligence (AI) in a Cephalometric Analysis: A Narrative Review. Diagnostics (Basel) 2023; 13:2640. [PMID: 37627899 PMCID: PMC10453867 DOI: 10.3390/diagnostics13162640] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 06/26/2023] [Revised: 08/04/2023] [Accepted: 08/08/2023] [Indexed: 08/27/2023] Open
Abstract
In recent years, the application of artificial intelligence (AI) has become increasingly widespread in medicine and dentistry. It may contribute to improved quality of health care as diagnostic methods become more accurate and diagnostic errors rarer in daily medical practice. The aim of this paper was to present data from the literature on the effectiveness of AI in orthodontic diagnostics based on the analysis of lateral cephalometric radiographs. A review of the literature from 2009 to 2023 was performed using the PubMed, Medline, Scopus and Dentistry & Oral Sciences Source databases. The accuracy of determining cephalometric landmarks using widely available commercial AI-based software and advanced AI algorithms is presented and discussed. Most AI algorithms used for the automated positioning of landmarks on cephalometric radiographs had relatively high accuracy. At the same time, the effectiveness of using AI in cephalometry varies depending on the algorithm or the application type, which has to be accounted for when interpreting the results. In conclusion, artificial intelligence is a promising tool that facilitates the identification of cephalometric landmarks in everyday clinical practice, may support orthodontic treatment planning for less experienced clinicians, and can shorten radiological examinations in orthodontics. In the future, AI algorithms used for the automated localisation of cephalometric landmarks may be more accurate than manual analysis.
Affiliation(s)
- Ewa Czochrowska: Department of Orthodontics, Medical University in Warsaw, 02-097 Warsaw, Poland; (M.K.); (K.K.); (K.P.); (M.Z.)
37
Eltawil FA, Atalla M, Boulos E, Amirabadi A, Tyrrell PN. Analyzing Barriers and Enablers for the Acceptance of Artificial Intelligence Innovations into Radiology Practice: A Scoping Review. Tomography 2023; 9:1443-1455. [PMID: 37624108 PMCID: PMC10459931 DOI: 10.3390/tomography9040115] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Received: 06/13/2023] [Revised: 07/23/2023] [Accepted: 07/26/2023] [Indexed: 08/26/2023] Open
Abstract
OBJECTIVES This scoping review was conducted to determine the barriers and enablers associated with the acceptance of artificial intelligence/machine learning (AI/ML)-enabled innovations into radiology practice from a physician's perspective. METHODS A systematic search was performed using Ovid Medline and Embase. Keywords were used to generate refined queries covering computer-aided diagnosis, artificial intelligence, and barriers and enablers. Three reviewers assessed the articles, with a fourth reviewer resolving disagreements. The risk of bias was mitigated by including both quantitative and qualitative studies. RESULTS An electronic search from January 2000 to 2023 identified 513 studies. Twelve articles fulfilled the inclusion criteria: qualitative studies (n = 4), survey studies (n = 7), and randomized controlled trials (RCT) (n = 1). Among the most common barriers to AI implementation in radiology practice were radiologists' lack of acceptance and trust in AI innovations; a lack of awareness, knowledge, and familiarity with the technology; and a perceived threat to the professional autonomy of radiologists. The most important identified AI implementation enablers were high expectations of AI's potential added value; the potential to decrease errors in diagnosis; the potential to increase efficiency when reaching a diagnosis; and the potential to improve the quality of patient care. CONCLUSIONS This scoping review found that few studies have been designed specifically to identify barriers and enablers to the acceptance of AI in radiology practice. The majority of studies have assessed the perception of AI replacing radiologists, rather than other barriers or enablers in the adoption of AI. To comprehensively evaluate the potential advantages and disadvantages of integrating AI innovations into radiology practice, gathering more robust research evidence on stakeholder perspectives and attitudes is essential.
Affiliation(s)
- Fatma A. Eltawil: Department of Medical Imaging, University of Toronto, Toronto, ON M5S 1A1, Canada
- Michael Atalla: Department of Medical Imaging, University of Toronto, Toronto, ON M5S 1A1, Canada
- Emily Boulos: Department of Medical Imaging, University of Toronto, Toronto, ON M5S 1A1, Canada
- Afsaneh Amirabadi: Diagnostic Imaging Department, The Hospital for Sick Children, Toronto, ON M5G 1E8, Canada
- Pascal N. Tyrrell: Department of Medical Imaging, University of Toronto, Toronto, ON M5S 1A1, Canada; Department of Statistical Sciences, University of Toronto, Toronto, ON M5G 1Z5, Canada; Institute of Medical Science, University of Toronto, Toronto, ON M5S 1A8, Canada
38
Gehrmann J, Herczog E, Decker S, Beyan O. What prevents us from reusing medical real-world data in research. Sci Data 2023; 10:459. [PMID: 37443164 DOI: 10.1038/s41597-023-02361-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/17/2023] [Accepted: 07/03/2023] [Indexed: 07/15/2023] Open
Affiliation(s)
- Julia Gehrmann: University of Cologne, Faculty of Medicine and University Hospital Cologne, Institute for Biomedical Informatics, Cologne, Germany
- Stefan Decker: Chair of Computer Science 5, RWTH Aachen University, Aachen, Germany; Department of Data Science and Artificial Intelligence, Fraunhofer FIT, Sankt Augustin, Germany
- Oya Beyan: University of Cologne, Faculty of Medicine and University Hospital Cologne, Institute for Biomedical Informatics, Cologne, Germany; Department of Data Science and Artificial Intelligence, Fraunhofer FIT, Sankt Augustin, Germany
39
Brock KK, Chen SR, Sheth RA, Siewerdsen JH. Imaging in Interventional Radiology: 2043 and Beyond. Radiology 2023; 308:e230146. [PMID: 37462500 PMCID: PMC10374939 DOI: 10.1148/radiol.230146] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Indexed: 07/21/2023]
Abstract
Since its inception in the early 20th century, interventional radiology (IR) has evolved tremendously and is now a distinct clinical discipline with its own training pathway. The arsenal of modalities at work in IR includes x-ray radiography and fluoroscopy, CT, MRI, US, and molecular and multimodality imaging within hybrid interventional environments. This article briefly reviews the major developments in imaging technology in IR over the past century, summarizes technologies now representative of the standard of care, and reflects on emerging advances in imaging technology that could shape the field in the century ahead. The role of emergent imaging technologies in enabling high-precision interventions is also briefly reviewed, including image-guided ablative therapies.
Affiliation(s)
- Kristy K Brock, Stephen R Chen, Rahul A Sheth, Jeffrey H Siewerdsen: From the Departments of Imaging Physics (K.K.B., J.H.S.), Interventional Radiology (S.R.C., R.A.S.), Neurosurgery (J.H.S.), and Radiation Physics (J.H.S.), The University of Texas MD Anderson Cancer Center, 1400 Pressler St, FCT14.6050 Pickens Academic Tower, Houston, TX 77030-4000
40
Estakhraji SIZ, Pirasteh A, Bradshaw T, McMillan A. On the effect of training database size for MR-based synthetic CT generation in the head. Comput Med Imaging Graph 2023; 107:102227. [PMID: 37167815 PMCID: PMC10483321 DOI: 10.1016/j.compmedimag.2023.102227] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 06/14/2022] [Revised: 03/22/2023] [Accepted: 03/27/2023] [Indexed: 05/13/2023]
Abstract
Generation of computed tomography (CT) images from magnetic resonance (MR) images using deep learning methods has recently demonstrated promise in improving MR-guided radiotherapy and PET/MR imaging. PURPOSE To investigate the performance of unsupervised training using a large number of unpaired data sets as well as the potential gain in performance after fine-tuning with supervised training using spatially registered data sets in generation of synthetic computed tomography (sCT) from magnetic resonance (MR) images. MATERIALS AND METHODS A cycleGAN method consisting of two generators (residual U-Net) and two discriminators (patchGAN) was used for unsupervised training. Unsupervised training utilized unpaired T1-weighted MR and CT images (2061 sets for each modality). Five supervised models were then fine-tuned starting with the generator of the unsupervised model for 1, 10, 25, 50, and 100 pairs of spatially registered MR and CT images. Four supervised training models were also trained from scratch for 10, 25, 50, and 100 pairs of spatially registered MR and CT images using only the residual U-Net generator. All models were evaluated on a holdout test set of spatially registered images from 253 patients, including 30 with significant pathology. sCT images were compared against the acquired CT images using mean absolute error (MAE), Dice coefficient, and structural similarity index (SSIM). sCT images from 60 test subjects generated by the unsupervised, and most accurate of the fine-tuned and supervised models were qualitatively evaluated by a radiologist. RESULTS While unsupervised training produced realistic-appearing sCT images, addition of even one set of registered images improved quantitative metrics. Addition of more paired data sets to the training further improved image quality, with the best results obtained using the highest number of paired data sets (n=100). 
Supervised training was found to be superior to unsupervised training, while fine-tuned training showed no clear benefit over supervised learning, regardless of the training sample size. CONCLUSION Supervised learning (using either fine-tuning or full supervision) leads to significantly higher quantitative accuracy in the generation of sCT from MR images. However, fine-tuned training using both a large number of unpaired image sets and registered pairs was generally no better than supervised learning using registered image sets alone, suggesting the importance of well-registered paired data sets for training compared with a large set of unpaired data.
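Two of the image-similarity metrics used in this evaluation are easy to state explicitly. A minimal NumPy sketch (illustrative only, not the authors' code; the 150 HU bone threshold for the Dice mask is an assumed example value, not taken from the paper):

```python
import numpy as np

def mae_hu(ct, sct):
    """Mean absolute error in Hounsfield units between acquired CT
    and synthetic CT arrays of identical shape."""
    return float(np.mean(np.abs(ct.astype(float) - sct.astype(float))))

def dice_bone(ct, sct, threshold=150.0):
    """Dice overlap of thresholded masks; the HU threshold is illustrative."""
    a, b = ct > threshold, sct > threshold
    denom = a.sum() + b.sum()
    # Dice = 2 * |A ∩ B| / (|A| + |B|); define empty-vs-empty as perfect overlap
    return 1.0 if denom == 0 else float(2.0 * (a & b).sum() / denom)
```

Both metrics assume the CT and sCT volumes are spatially registered, which is exactly why the paper stresses well-registered paired data for evaluation as well as training.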
Affiliation(s)
- Ali Pirasteh: Department of Radiology, University of Wisconsin-Madison, United States of America; Department of Medical Physics, University of Wisconsin-Madison, United States of America
- Tyler Bradshaw: Department of Radiology, University of Wisconsin-Madison, United States of America
- Alan McMillan: Department of Radiology, University of Wisconsin-Madison, United States of America; Department of Medical Physics, University of Wisconsin-Madison, United States of America; Department of Electrical and Computer Engineering, University of Wisconsin-Madison, United States of America; Department of Biomedical Engineering, University of Wisconsin-Madison, United States of America
41
Harris CS, Pozzar RA, Conley Y, Eicher M, Hammer MJ, Kober KM, Miaskowski C, Colomer-Lahiguera S. Big Data in Oncology Nursing Research: State of the Science. Semin Oncol Nurs 2023; 39:151428. [PMID: 37085404 DOI: 10.1016/j.soncn.2023.151428] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/17/2023] [Accepted: 03/21/2023] [Indexed: 04/23/2023]
Abstract
OBJECTIVE To review the state of oncology nursing science as it pertains to big data. The authors aim to define and characterize big data, describe key considerations for accessing and analyzing big data, provide examples of analyses of big data in oncology nursing science, and highlight ethical considerations related to the collection and analysis of big data. DATA SOURCES Peer-reviewed articles published by investigators specializing in oncology, nursing, and related disciplines. CONCLUSION Big data is defined as data that are high in volume, velocity, and variety. To date, oncology nurse scientists have used big data to predict patient outcomes from clinician notes, identify distinct symptom phenotypes, and identify predictors of chemotherapy toxicity, among other applications. Although the emergence of big data and advances in computational methods provide new and exciting opportunities to advance oncology nursing science, several challenges are associated with accessing and using big data. Data security, research participant privacy, and the underrepresentation of minoritized individuals in big data are important concerns. IMPLICATIONS FOR NURSING PRACTICE With their unique focus on the interplay between the whole person, the environment, and health, nurses bring an indispensable perspective to the interpretation and application of big data research findings. Given the increasing ubiquity of passive data collection, all nurses should be taught the definition, characteristics, applications, and limitations of big data. Nurses who are trained in big data and advanced computational methods will be poised to contribute to guidelines and policies that preserve the rights of human research participants.
Affiliation(s)
- Carolyn S Harris: Postdoctoral Scholar, School of Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Rachel A Pozzar: Nurse Scientist, Phyllis F. Cantor Center for Research in Nursing and Patient Care Services, Dana-Farber Cancer Institute, Boston, Massachusetts, USA; Instructor, Harvard Medical School, Boston, Massachusetts, USA
- Yvette Conley: Professor, School of Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Manuela Eicher: Associate Professor and Director, Institute of Higher Education and Research in Healthcare (IUFRS), Faculty of Biology and Medicine, University of Lausanne, and Lausanne University Hospital, Lausanne, Switzerland
- Marilyn J Hammer: Director, The Phyllis F. Cantor Center for Research in Nursing and Patient Care Services, Dana-Farber Cancer Institute, Boston, Massachusetts, USA; Lecturer, Harvard Medical School, Boston, Massachusetts, USA
- Kord M Kober: Associate Professor, School of Nursing, University of California, San Francisco, California, USA
- Christine Miaskowski: Professor, Schools of Medicine and Nursing, University of California, San Francisco, California, USA
- Sara Colomer-Lahiguera: Senior Nurse Scientist and Junior Lecturer, Institute of Higher Education and Research in Healthcare (IUFRS), Faculty of Biology and Medicine, University of Lausanne, and Lausanne University Hospital, Lausanne, Switzerland
42
Yuce F, Öziç MÜ, Tassoker M. Detection of pulpal calcifications on bite-wing radiographs using deep learning. Clin Oral Investig 2023; 27:2679-2689. [PMID: 36564651 DOI: 10.1007/s00784-022-04839-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/24/2022] [Accepted: 12/21/2022] [Indexed: 12/25/2022]
Abstract
OBJECTIVES Pulpal calcifications are discrete hard calcified masses of varying sizes in the dental pulp cavity. This study aimed to measure the performance of the YOLOv4 deep learning algorithm in automatically determining whether there is calcification in the pulp chambers on bite-wing radiographs. MATERIALS AND METHODS In this study, 2000 bite-wing radiographs were collected from the faculty database. The oral radiologists labeled the pulp chambers on the radiographs as "Present" or "Absent" according to whether there was calcification. The data were randomly divided into 80% training, 10% validation, and 10% testing. The weight file for pulpal calcification was obtained by training the YOLOv4 algorithm with the transfer learning method. Using the weights obtained, pulp chambers and calcifications were automatically detected on test radiographs the algorithm had never seen. Two oral radiologists evaluated the test results, and performance criteria were calculated. RESULTS The results on the test data were evaluated in two stages: detection of pulp chambers and detection of pulpal calcification. The detection performance for pulp chambers was as follows: recall 86.98%, precision 98.94%, F1-score 91.60%, and accuracy 86.18%. Pulpal calcification "Absent" and "Present" detection performance was as follows: recall 86.39%, precision 85.23%, specificity 97.94%, F1-score 85.49%, and accuracy 96.54%. CONCLUSION The YOLOv4 algorithm trained on bite-wing radiographs detected pulp chambers and calcifications with high success rates. CLINICAL RELEVANCE Automatic detection of pulpal calcifications with deep learning can serve in clinical practice as a decision-support system that assists dentists in diagnosis with high accuracy.
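All of the per-class figures above derive from a single binary confusion matrix. A minimal sketch of the arithmetic (illustrative function name and counts, not the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Recall, precision, specificity, F1, and accuracy from confusion
    counts (e.g. calcification 'Present' as the positive class)."""
    recall = tp / (tp + fn)          # sensitivity
    precision = tp / (tp + fp)       # positive predictive value
    specificity = tn / (tn + fp)     # true negative rate
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"recall": recall, "precision": precision,
            "specificity": specificity, "f1": f1, "accuracy": accuracy}
```

Reporting specificity alongside recall, as this study does, matters for screening tasks where the "Absent" class dominates and accuracy alone can be misleading.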
Affiliation(s)
- Fatma Yuce: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Okan University, Istanbul, Turkey
- Muhammet Üsame Öziç: Faculty of Technology, Department of Biomedical Engineering, Pamukkale University, Denizli, Turkey
- Melek Tassoker: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Necmettin Erbakan University, Konya, Turkey
43
Kondylakis H, Kalokyri V, Sfakianakis S, Marias K, Tsiknakis M, Jimenez-Pastor A, Camacho-Ramos E, Blanquer I, Segrelles JD, López-Huguet S, Barelle C, Kogut-Czarkowska M, Tsakou G, Siopis N, Sakellariou Z, Bizopoulos P, Drossou V, Lalas A, Votis K, Mallol P, Marti-Bonmati L, Alberich LC, Seymour K, Boucher S, Ciarrocchi E, Fromont L, Rambla J, Harms A, Gutierrez A, Starmans MPA, Prior F, Gelpi JL, Lekadir K. Data infrastructures for AI in medical imaging: a report on the experiences of five EU projects. Eur Radiol Exp 2023; 7:20. [PMID: 37150779 PMCID: PMC10164664 DOI: 10.1186/s41747-023-00336-x] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Received: 01/25/2023] [Accepted: 03/02/2023] [Indexed: 05/09/2023] Open
Abstract
Artificial intelligence (AI) is transforming the field of medical imaging and has the potential to bring medicine from the era of 'sick-care' to the era of healthcare and prevention. The development of AI requires access to large, complete, and harmonized real-world datasets, representative of the population and of disease diversity. However, to date, efforts are fragmented, based on single-institution, size-limited, and annotation-limited datasets. Available public datasets (e.g., The Cancer Imaging Archive, TCIA, USA) are limited in scope, making model generalizability difficult. In this direction, five European Union projects are currently working on the development of big data infrastructures that will enable European, ethically and General Data Protection Regulation-compliant, quality-controlled, cancer-related medical imaging platforms, in which both large-scale data and AI algorithms will coexist. The vision is to create sustainable AI cloud-based platforms for the development, implementation, verification, and validation of trustable, usable, and reliable AI models for addressing specific unmet needs regarding cancer care provision. In this paper, we present an overview of the development efforts, highlighting the challenges and the approaches selected, providing valuable feedback to future attempts in the area.
Key points:
- Artificial intelligence models for health imaging require access to large amounts of harmonized imaging data and metadata.
- Main infrastructures adopted either collect centrally anonymized data or enable access to pseudonymized distributed data.
- Developing a common data model for storing all relevant information is a challenge.
- Trust of data providers in data sharing initiatives is essential.
- An online European Union meta-tool-repository is a necessity, minimizing effort duplication for the various projects in the area.
Affiliation(s)
- Kostas Marias: FORTH-ICS, N. Plastira 100, Heraklion, Crete, Greece
- Gianna Tsakou: MAGGIOLI S.P.A., Research and Development Lab, Marousi, Greece
- Nikolaos Siopis: Centre of Research & Technology - Hellas, Information Technologies Institute, Thermi - Thessaloniki, Greece
- Zisis Sakellariou: Centre of Research & Technology - Hellas, Information Technologies Institute, Thermi - Thessaloniki, Greece
- Paschalis Bizopoulos: Centre of Research & Technology - Hellas, Information Technologies Institute, Thermi - Thessaloniki, Greece
- Vicky Drossou: Centre of Research & Technology - Hellas, Information Technologies Institute, Thermi - Thessaloniki, Greece
- Antonios Lalas: Centre of Research & Technology - Hellas, Information Technologies Institute, Thermi - Thessaloniki, Greece
- Konstantinos Votis: Centre of Research & Technology - Hellas, Information Technologies Institute, Thermi - Thessaloniki, Greece
- Pedro Mallol: La Fe Health Research Institute, Valencia, Spain
- Lauren Fromont: European Genome-Phenome Archive, Centre for Genomic Regulation, Barcelona, Spain
- Jordi Rambla: European Genome-Phenome Archive, Centre for Genomic Regulation, Barcelona, Spain
- Fred Prior: Department of Biomedical Informatics, University of Arkansas for Medical Sciences, Little Rock, AR, USA
44
Paudyal R, Shah AD, Akin O, Do RKG, Konar AS, Hatzoglou V, Mahmood U, Lee N, Wong RJ, Banerjee S, Shin J, Veeraraghavan H, Shukla-Dave A. Artificial Intelligence in CT and MR Imaging for Oncological Applications. Cancers (Basel) 2023; 15:cancers15092573. [PMID: 37174039 PMCID: PMC10177423 DOI: 10.3390/cancers15092573] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 04/13/2023] [Accepted: 04/17/2023] [Indexed: 05/15/2023] Open
Abstract
Cancer care increasingly relies on imaging for patient management. The two most common cross-sectional imaging modalities in oncology are computed tomography (CT) and magnetic resonance imaging (MRI), which provide high-resolution anatomic and physiological imaging. Here we summarize recent applications of rapidly advancing artificial intelligence (AI) in CT and MRI oncological imaging, addressing with examples the benefits and challenges of the resulting opportunities. Major challenges remain, such as how best to integrate AI developments into clinical radiology practice, and the rigorous assessment of quantitative CT and MR imaging data for accuracy and reliability, for clinical utility and research integrity in oncology. Such challenges necessitate an evaluation of the robustness of the imaging biomarkers to be included in AI developments, a culture of data sharing, and the cooperation of knowledgeable academics with vendor scientists and companies operating in the radiology and oncology fields. Herein, we illustrate a few of the challenges in these efforts and their solutions, using novel methods for synthesizing images across contrast modalities, auto-segmentation, and image reconstruction, with examples from lung CT as well as abdomen, pelvis, and head and neck MRI. The imaging community must embrace the need for quantitative CT and MRI metrics beyond lesion size measurement. AI methods for extracting and longitudinally tracking imaging metrics from registered lesions, and for understanding the tumor environment, will be invaluable for interpreting disease status and treatment efficacy. This is an exciting time to work together to move the imaging field forward with narrow, AI-specific tasks. New AI developments using CT and MRI datasets will be used to improve the personalized management of cancer patients.
Affiliation(s)
- Ramesh Paudyal
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Akash D Shah
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Oguz Akin
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Richard K G Do
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Amaresha Shridhar Konar
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Vaios Hatzoglou
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Usman Mahmood
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Nancy Lee
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Richard J Wong
- Head and Neck Service, Department of Surgery, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Amita Shukla-Dave
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
45
Diao K, Liang HQ, Yin HK, Yuan MJ, Gu M, Yu PX, He S, Sun J, Song B, Li K, He Y. Multi-channel deep learning model-based myocardial spatial-temporal morphology feature on cardiac MRI cine images diagnoses the cause of LVH. Insights Imaging 2023; 14:70. [PMID: 37093501 PMCID: PMC10126185 DOI: 10.1186/s13244-023-01401-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Accepted: 03/08/2023] [Indexed: 04/25/2023] Open
Abstract
BACKGROUND To develop a fully automatic framework for diagnosing the cause of left ventricular hypertrophy (LVH) from cardiac cine images. METHODS A total of 302 LVH patients with cine MRI images were recruited as the primary cohort. Another 53 LVH patients, collected prospectively or from multiple centers, were used as the external test dataset. Models based on the cardiac region (Model 1), the segmented ventricle (Model 2), and the ventricle mask (Model 3) were constructed. Diagnostic performance was assessed from the confusion matrix with respect to overall accuracy. The capability of the predictive models for binary classification of cardiac amyloidosis (CA), hypertrophic cardiomyopathy (HCM), or hypertensive heart disease (HHD) was also evaluated. Additionally, the diagnostic performance of the best model was compared with that of 7 radiologists/cardiologists. RESULTS Model 3 showed the best performance, with an overall classification accuracy of up to 77.4% on the external test dataset. On the subtasks of identifying CA, HCM, or HHD only, Model 3 also achieved the best performance, with AUCs of 0.895-0.980, 0.879-0.984, and 0.848-0.983 in the validation, internal test, and external test datasets, respectively. The deep learning model showed diagnostic capability non-inferior to that of the cardiovascular imaging expert and outperformed the other radiologists/cardiologists. CONCLUSION The combined model based on the left ventricular mask segmented from multi-sequence cine MR images shows favorable and robust performance in diagnosing the cause of left ventricular hypertrophy, and could serve as a noninvasive tool to support clinical decision-making.
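The evaluation described in this abstract (a multi-class confusion matrix scored by overall accuracy) can be sketched in a few lines of plain Python. The class labels follow the abstract's three LVH aetiologies; the toy predictions below are invented for illustration and are not the study's data.

```python
# Build a confusion matrix over the three LVH aetiologies and score
# overall accuracy (the fraction of cases on the matrix diagonal).
CLASSES = ["CA", "HCM", "HHD"]  # cardiac amyloidosis, hypertrophic CM, hypertensive HD

def confusion_matrix(y_true, y_pred, classes):
    idx = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1  # rows: true class, columns: predicted class
    return m

def overall_accuracy(matrix):
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Illustrative labels only.
y_true = ["CA", "CA", "HCM", "HCM", "HHD", "HHD", "HHD", "CA"]
y_pred = ["CA", "HCM", "HCM", "HCM", "HHD", "CA", "HHD", "CA"]
m = confusion_matrix(y_true, y_pred, CLASSES)
print(overall_accuracy(m))  # 6 of 8 toy cases correct -> 0.75
```

The one-vs-rest binary subtasks (CA vs. rest, etc.) fall out of the same matrix by collapsing the off-target rows and columns.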
Affiliation(s)
- Kaiyue Diao
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Hong-Qing Liang
- Department of Radiology, First Affiliated Hospital to Army Medical University (Third Military Medical University Southwest Hospital), Chongqing, China
- Hong-Kun Yin
- Institute of Advanced Research, Infervision Medical Technology Co., Ltd, Beijing, China
- Ming-Jing Yuan
- Department of Radiology, Yongchuan Hospital, Chongqing Medical University, Chongqing, China
- Min Gu
- Department of Radiology, Chongqing General Hospital, University of Chinese Academy of Sciences, Chongqing, China
- Peng-Xin Yu
- Institute of Advanced Research, Infervision Medical Technology Co., Ltd, Beijing, China
- Sen He
- Department of Cardiology, West China Hospital of Sichuan University, 37 Guo Xue Xiang, Chengdu, 610041, Sichuan, China
- Jiayu Sun
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Bin Song
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Department of Radiology, Sanya Municipal People's Hospital, Sanya, Hainan, China
- Kang Li
- West China Biomedical Big Data Center, Med-X Center for Informatics, West China Hospital, Sichuan University, 37 Guo Xue Xiang, Chengdu, 610041, Sichuan, China
- Med-X Center for Informatics, Sichuan University, Chengdu, China
- Yong He
- Department of Cardiology, West China Hospital of Sichuan University, 37 Guo Xue Xiang, Chengdu, 610041, Sichuan, China
46
Fogarty R, Goldgof D, Hall L, Lopez A, Johnson J, Gadara M, Stoyanova R, Punnen S, Pollack A, Pow-Sang J, Balagurunathan Y. Classifying Malignancy in Prostate Glandular Structures from Biopsy Scans with Deep Learning. Cancers (Basel) 2023; 15:cancers15082335. [PMID: 37190264 DOI: 10.3390/cancers15082335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Revised: 04/07/2023] [Accepted: 04/12/2023] [Indexed: 05/17/2023] Open
Abstract
Histopathological classification in prostate cancer remains a challenge, with high dependence on the expert practitioner. We develop a deep learning (DL) model to identify the most prominent Gleason pattern in a highly curated data cohort and validate it on an independent dataset. The histology images are partitioned into 14,509 tiles and curated by an expert to identify individual glandular structures with assigned primary Gleason pattern grades. We use transfer learning and fine-tuning to compare several deep neural network architectures that are trained on a corpus of camera images (ImageNet) and tuned with histology examples to be context-appropriate for histopathological discrimination with small samples. In our study, the best DL network discriminates cancer grade (GS3/4) from benign with an accuracy of 91%, an F1-score of 0.91, and an AUC of 0.96 in a baseline test (52 patients), while discrimination of GS3 from GS4 had an accuracy of 68% and an AUC of 0.71 (40 patients).
Affiliation(s)
- Ryan Fogarty
- Department of Machine Learning, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Dmitry Goldgof
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Lawrence Hall
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Alex Lopez
- Tissue Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Joseph Johnson
- Analytic Microscopy Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Manoj Gadara
- Anatomic Pathology Division, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Quest Diagnostics, Tampa, FL 33612, USA
- Radka Stoyanova
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Sanoj Punnen
- Desai Sethi Urology Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Alan Pollack
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Julio Pow-Sang
- Genitourinary Cancers, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
47
Optimizing Primary Healthcare in Hong Kong: Strategies for the Successful Integration of Radiology Services. Cureus 2023; 15:e37022. [PMID: 37016673 PMCID: PMC10066850 DOI: 10.7759/cureus.37022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/02/2023] [Indexed: 04/04/2023] Open
Abstract
The primary healthcare system in Hong Kong plays a crucial role in addressing the healthcare needs of its population. However, the integration of radiology services into primary care settings has not been fully realized, and there is significant potential for improvement. Incorporating radiology services into primary healthcare can enhance patient care, promote cost-effectiveness, and increase the overall efficiency of the healthcare system by enabling earlier diagnosis and intervention for various health conditions. To successfully integrate radiology services, key strategies include the establishment of public-private partnerships, the adoption of teleradiology and telemedicine services, the development of comprehensive regulatory and policy frameworks, and the exploration of innovative financial models and incentives. By embracing these strategies, Hong Kong can optimize its primary healthcare system and ensure more equitable, effective care for its population.
48
Zhang H, Li Z. RFID supply chain data deconstruction method based on artificial intelligence technology. OPEN COMPUTER SCIENCE 2023. [DOI: 10.1515/comp-2022-0265] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/30/2023] Open
Abstract
Radio frequency identification (RFID) is a broad technology that has evolved rapidly in the past few years. It is characterized by non-contact identification, fast read and write speeds, small label size, large data storage capacity, and other technical advantages. Applying RFID to the movement of goods has completely changed traditional supply chain management, greatly improved the operational efficiency of enterprises, and become an important method for the development of supply chain logistics. This work studies and analyzes the RFID supply chain, introduces the development and application of RFID technology in the supply chain sector, and discusses the operation of the supply chain in detail. Then, building on the existing RFID supply chain, an artificial intelligence (AI) based approach to RFID supply chain technology is proposed, and the data analysis of the RFID supply chain is introduced in detail. In an experiment applying AI techniques to RFID supply chain data analysis, the data show several time-consuming links in the supply chain system: the four links in the AI RFID system take 9.9, 3.4, 3.5, and 29.9 min, respectively, while the corresponding links in the original system take 13.4, 4.9, 4.9, and 34.9 min. Each link of the AI RFID supply chain system thus takes less time than in the original supply chain system, which shortens the entire product cycle and greatly improves work efficiency.
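Summing the per-link times quoted in the abstract makes the comparison concrete; this is a quick arithmetic check using only the four figures given there, not data from the paper itself.

```python
# Per-link processing times (minutes) as quoted in the abstract.
ai_rfid  = [9.9, 3.4, 3.5, 29.9]   # AI-based RFID supply chain system
original = [13.4, 4.9, 4.9, 34.9]  # original system

total_ai, total_orig = sum(ai_rfid), sum(original)
saving = total_orig - total_ai
print(round(total_ai, 1), round(total_orig, 1))   # 46.7 58.1
print(f"{saving / total_orig:.1%} shorter cycle")
```

So the quoted figures imply a cycle of 46.7 min versus 58.1 min, i.e. roughly a fifth shorter overall.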
49
Ong W, Zhu L, Tan YL, Teo EC, Tan JH, Kumar N, Vellayappan BA, Ooi BC, Quek ST, Makmur A, Hallinan JTPD. Application of Machine Learning for Differentiating Bone Malignancy on Imaging: A Systematic Review. Cancers (Basel) 2023; 15:cancers15061837. [PMID: 36980722 PMCID: PMC10047175 DOI: 10.3390/cancers15061837] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 03/07/2023] [Accepted: 03/16/2023] [Indexed: 03/22/2023] Open
Abstract
An accurate diagnosis of bone tumours on imaging is crucial for appropriate and successful treatment. The advent of artificial intelligence (AI) and machine learning methods to characterize and assess bone tumours on various imaging modalities may assist the diagnostic workflow. The purpose of this review article is to summarise the most recent evidence for AI techniques using imaging to differentiate benign from malignant lesions, the characterization of various malignant bone lesions, and their potential clinical application. A systematic search of electronic databases (PubMed, MEDLINE, Web of Science, and clinicaltrials.gov) was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 34 articles were retrieved from the databases and the key findings were compiled and summarised. These 34 articles reported the use of AI techniques to distinguish benign from malignant bone lesions, of which 12 (35.3%) focused on radiographs, 12 (35.3%) on MRI, 5 (14.7%) on CT, and 5 (14.7%) on PET/CT. The overall reported accuracy, sensitivity, and specificity of AI in distinguishing benign from malignant bone lesions range from 0.44 to 0.99, 0.63 to 1.00, and 0.73 to 0.96, respectively, with AUCs of 0.73-0.96. In conclusion, the use of AI to discriminate bone lesions on imaging has achieved relatively good performance across imaging modalities, with high sensitivity, specificity, and accuracy for distinguishing benign from malignant lesions in several cohort studies. However, further research is necessary to test the clinical performance of these algorithms before they can be integrated into routine clinical practice.
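The metrics pooled in this review (sensitivity, specificity, and ROC AUC for benign-vs-malignant classification) can all be computed from raw model outputs with a few lines of plain Python. The labels and scores below are invented for illustration only.

```python
def sens_spec(y_true, y_pred):
    """Sensitivity (true-positive rate among malignant) and
    specificity (true-negative rate among benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(y_true)
    neg = len(y_true) - pos
    return tp / pos, tn / neg

def auc(y_true, scores):
    """ROC AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a random malignant case scores higher than a random benign one."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative data: 1 = malignant, 0 = benign; scores from a hypothetical model.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]
print(sens_spec(y_true, y_pred))  # (0.75, 0.75)
print(auc(y_true, scores))        # 0.9375
```

Note that sensitivity and specificity depend on the chosen threshold (0.5 here), whereas AUC summarizes performance over all thresholds, which is why reviews such as this one report both.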
Affiliation(s)
- Wilson Ong
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Correspondence: ; Tel.: +65-67725207
- Lei Zhu
- Department of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore 117417, Singapore
- Yi Liang Tan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Ee Chin Teo
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Jiong Hao Tan
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- Naresh Kumar
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- Balamurugan A. Vellayappan
- Department of Radiation Oncology, National University Cancer Institute Singapore, National University Hospital, 5 Lower Kent Ridge Road, Singapore 119074, Singapore
- Beng Chin Ooi
- Department of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore 117417, Singapore
- Swee Tian Quek
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- James Thomas Patrick Decourcy Hallinan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
50
Nakagawa K, Moukheiber L, Celi LA, Patel M, Mahmood F, Gondim D, Hogarth M, Levenson R. AI in Pathology: What could possibly go wrong? Semin Diagn Pathol 2023; 40:100-108. [PMID: 36882343 DOI: 10.1053/j.semdp.2023.02.006] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Revised: 02/25/2023] [Accepted: 02/26/2023] [Indexed: 03/05/2023]
Abstract
The field of medicine is undergoing rapid digital transformation. Pathologists are now striving to digitize their data, workflows, and interpretations, assisted by the enabling development of whole-slide imaging. Going digital means that the analog process of human diagnosis can be augmented or even replaced by rapidly evolving AI approaches, which are just now entering into clinical practice. But with such progress comes challenges that reflect a variety of stressors, including the impact of unrepresentative training data with accompanying implicit bias, data privacy concerns, and fragility of algorithm performance. Beyond such core digital aspects, considerations arise related to difficulties presented by changing disease presentations, diagnostic approaches, and therapeutic options. While some tools such as data federation can help with broadening data diversity while preserving expertise and local control, they may not be the full answer to some of these issues. The impact of AI in pathology on the field's human practitioners is still very much unknown: installation of unconscious bias and deference to AI guidance need to be understood and addressed. If AI is widely adopted, it may remove many inefficiencies in daily practice and compensate for staff shortages. It may also cause practitioner deskilling, dethrilling, and burnout. We discuss the technological, clinical, legal, and sociological factors that will influence the adoption of AI in pathology, and its eventual impact for good or ill.
Affiliation(s)
- Leo A Celi
- Massachusetts Institute of Technology, Cambridge, MA