1
Lingam G, Shakir T, Kader R, Chand M. Role of artificial intelligence in colorectal cancer. Artif Intell Gastrointest Endosc 2024;5:90723. doi:10.37126/aige.v5.i2.90723
Abstract
The sphere of artificial intelligence (AI) is ever expanding. Applications for clinical practice have been emerging over recent years. Although its uptake has been most prominent in endoscopy, this represents only one aspect of holistic patient care. There are a multitude of other potential avenues in which gastrointestinal care may be involved. We aim to review the role of AI in colorectal cancer as a whole. We performed broad scoping and focused searches of the applications of AI in the field of colorectal cancer. All trials, including qualitative research, from the year 2000 onwards were included. Studies were grouped into pre-operative, intra-operative and post-operative aspects. Pre-operatively, the major use is in endoscopic recognition. Colonoscopy has embraced the use of human-derived classifications such as Narrow-band Imaging International Colorectal Endoscopic, Japan Narrow-band Imaging Expert Team, Paris and Kudo. However, novel detection and diagnostic methods have arisen from advances in AI classification. Intra-operatively, adjuncts such as image-enhanced identification of structures and assessment of perfusion have led to improvements in clinical outcomes. Post-operatively, monitoring and surveillance have taken strides with potential socioeconomic and environmental savings. The uses of AI within the umbrella of colorectal surgery are multiple. We have identified existing technologies which are already augmenting cancer care. The future applications are exciting and could at least match, if not surpass, human standards.
Affiliation(s)
- Gita Lingam: Department of General Surgery, Princess Alexandra Hospital, Harlow CM20 1QX, United Kingdom
- Taner Shakir: Department of Colorectal Surgery, University College London, London W1W 7TY, United Kingdom
- Rawen Kader: Department of Gastroenterology, University College London, University College London Hospitals NHS Foundation Trust, London W1B, United Kingdom
- Manish Chand: Gastroenterological Intervention Centre, University College London, London W1W 7TS, United Kingdom
2
Alami Idrissi Y, Virador GM, Singh RB, Rao D, Stone JA, Sandhu SJS. Imaging 3.0: A scoping review. Curr Probl Diagn Radiol 2024;53:399-404. PMID: 38242771. doi:10.1067/j.cpradiol.2024.01.012
Abstract
We aim to provide a comprehensive summary of the current body of literature concerning the Imaging 3.0 initiative and its implications for patient care within the field of radiology. We offer a thorough analysis of the literature pertaining to the Imaging 3.0 initiative, emphasizing the practical application of the five pillars of the program, their cost-effectiveness, and their benefits in patient management. By doing so, we hope to illustrate the impact the Imaging 3.0 Initiative can have on the future of radiology and patient care.
Affiliation(s)
- Yassine Alami Idrissi: Hillman Cancer Center, University of Pittsburgh Medical Center, 5030 Centre Avenue, Pittsburgh, PA 15213, United States
- Gabriel M Virador: Department of Internal Medicine, Medstar Union Memorial Hospital, Baltimore, MD, United States
- Rahul B Singh: Department of Internal Medicine, New York City Health and Hospitals/South Brooklyn Health, Brooklyn, NY, United States
- Dinesh Rao: Department of Radiology, Mayo Clinic, Jacksonville, FL, United States
- Jeffrey A Stone: Department of Radiology, Mayo Clinic, Jacksonville, FL, United States
3
Ivanovic V, Broadhead K, Chang YM, Hamer JF, Beck R, Hacein-Bey L, Qi L. Shift Volume Directly Impacts Neuroradiology Error Rate at a Large Academic Medical Center: The Case for Volume Limits. AJNR Am J Neuroradiol 2024;45:374-378. PMID: 38238099. doi:10.3174/ajnr.a8119
Abstract
BACKGROUND AND PURPOSE Unlike in Europe and Japan, guidelines or recommendations from specialized radiological societies on workflow management and adaptive intervention to reduce error rates are currently lacking in the United States. This study of neuroradiologic reads at a large US academic medical center, which may hopefully contribute to this discussion, found a direct relationship between error rate and shift volume. MATERIALS AND METHODS CT and MR imaging reports from our institution's Neuroradiology Quality Assurance database (years 2014-2020) were searched for attending physician errors. Data were collected on shift-volume-specific error rates per 1000 interpreted studies and RADPEER scores. Optimal cutoff points for 2, 3 and 4 groups of shift volumes were computed along with the subgroups' error rates. RESULTS A total of 643 errors were found, 91.7% of which were clinically significant (RADPEER 2b, 3b). The overall error rate (errors/1000 examinations) was 2.36. The best single shift volume cutoff point generated 2 groups: ≤ 26 studies (error rate 1.59) and > 26 studies (2.58; OR: 1.63, P < .001). The best 2 shift volume cutoff points generated 3 shift volume groups: ≤ 19 (1.34), 20-28 (1.88; OR: 1.4, P = .1) and ≥ 29 (2.6; OR: 1.94, P < .001). The best 3 shift volume cutoff points generated 4 groups: ≤ 24 (1.59), 25-66 (2.44; OR: 1.54, P < .001), 67-90 (3.03; OR: 1.91, P < .001), and ≥ 91 (2.07; OR: 1.30, P = .25). The group with shift volume ≥ 91 had a limited sample size. CONCLUSIONS Lower shift volumes yielded significantly lower error rates. The lowest error rates were observed with shift volumes limited to 19-26 studies. The error rate at shift volumes of 67-90 studies (3.03) was 126% higher than at shift volumes of ≤ 19 studies (1.34).
Affiliation(s)
- Vladimir Ivanovic: Department of Radiology, Section of Neuroradiology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Kenneth Broadhead: Department of Statistics, Colorado State University, Fort Collins, Colorado
- Yu-Ming Chang: Department of Radiology, Section of Neuroradiology, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- John F Hamer: Department of Radiology, Section of Neuroradiology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Ryan Beck: Department of Radiology, Section of Neuroradiology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Lotfi Hacein-Bey: Department of Radiology, Section of Neuroradiology, University of California Davis Medical Center, Sacramento, California
- Lihong Qi: Department of Public Health Sciences, School of Medicine, University of California Davis, Davis, California
4
Gertz RJ, Dratsch T, Bunck AC, Lennartz S, Iuga AI, Hellmich MG, Persigehl T, Pennig L, Gietzen CH, Fervers P, Maintz D, Hahnfeldt R, Kottlors J. Potential of GPT-4 for Detecting Errors in Radiology Reports: Implications for Reporting Accuracy. Radiology 2024;311:e232714. PMID: 38625012. doi:10.1148/radiol.232714
Abstract
Background Errors in radiology reports may occur because of resident-to-attending discrepancies, speech recognition inaccuracies, and large workload. Large language models, such as GPT-4 (ChatGPT; OpenAI), may assist in generating reports. Purpose To assess effectiveness of GPT-4 in identifying common errors in radiology reports, focusing on performance, time, and cost-efficiency. Materials and Methods In this retrospective study, 200 radiology reports (radiography and cross-sectional imaging [CT and MRI]) were compiled between June 2023 and December 2023 at one institution. There were 150 errors from five common error categories (omission, insertion, spelling, side confusion, and other) intentionally inserted into 100 of the reports and used as the reference standard. Six radiologists (two senior radiologists, two attending physicians, and two residents) and GPT-4 were tasked with detecting these errors. Overall error detection performance, error detection in the five error categories, and reading time were assessed using Wald χ2 tests and paired-sample t tests. Results GPT-4 (detection rate, 82.7%; 124 of 150; 95% CI: 75.8, 87.9) matched the average detection performance of radiologists independent of their experience (senior radiologists, 89.3% [134 of 150; 95% CI: 83.4, 93.3]; attending physicians, 80.0% [120 of 150; 95% CI: 72.9, 85.6]; residents, 80.0% [120 of 150; 95% CI: 72.9, 85.6]; P value range, .522-.99). One senior radiologist outperformed GPT-4 (detection rate, 94.7%; 142 of 150; 95% CI: 89.8, 97.3; P = .006). GPT-4 required less processing time per radiology report than the fastest human reader in the study (mean reading time, 3.5 seconds ± 0.5 [SD] vs 25.1 seconds ± 20.1, respectively; P < .001; Cohen d = -1.08). The use of GPT-4 resulted in lower mean correction cost per report than the most cost-efficient radiologist ($0.03 ± 0.01 vs $0.42 ± 0.41; P < .001; Cohen d = -1.12).
Conclusion The radiology report error detection rate of GPT-4 was comparable with that of radiologists, potentially reducing work hours and cost. © RSNA, 2024. See also the editorial by Forman in this issue.
Affiliation(s)
- Roman Johannes Gertz, Thomas Dratsch, Alexander Christian Bunck, Simon Lennartz, Andra-Iza Iuga, Thorsten Persigehl, Lenhard Pennig, Carsten Herbert Gietzen, Philipp Fervers, David Maintz, Robert Hahnfeldt, Jonathan Kottlors: Institute of Diagnostic and Interventional Radiology, Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
- Martin Gunnar Hellmich: Institute of Medical Statistics and Bioinformatics, Faculty of Medicine, University Hospital Cologne, University of Cologne, Cologne, Germany
5
Chung R, Demers JP, Tiberio R, Savage CA, McNulty F, Stout M, Kambadakone A, Gilman MD, Sharma A, Alkasab TK. Implementation of an Institution-Wide Rules-Based Automated CT Protocoling System. AJR Am J Roentgenol 2024;222:e2329806. PMID: 38230904. doi:10.2214/ajr.23.29806
Abstract
BACKGROUND. Examination protocoling is a noninterpretive task that increases radiologists' workload and can cause workflow inefficiencies. OBJECTIVE. The purpose of this study was to evaluate effects of an automated CT protocoling system on examination process times and protocol error rates. METHODS. This retrospective study included 317,597 CT examinations (mean age, 61.8 ± 18.1 [SD] years; male, 161,125; female, 156,447; unspecified sex, 25) from July 2020 to June 2022. A rules-based automated protocoling system was implemented institution-wide; the system evaluated all CT orders in the EHR and assigned a protocol or directed the order for manual radiologist protocoling. The study period comprised pilot (July 2020 to December 2020), implementation (January 2021 to December 2021), and postimplementation (January 2022 to June 2022) phases. Proportions of automatically protocoled examinations were summarized. Process times were recorded. Protocol error rates were assessed by counts of quality improvement (QI) reports and examination recalls and comparison with retrospectively assigned protocols in 450 randomly selected examinations. RESULTS. Frequency of automatic protocoling was 19,366/70,780 (27.4%), 68,875/163,068 (42.2%), and 54,045/83,749 (64.5%) in pilot, implementation, and postimplementation phases, respectively (p < .001). Mean (± SD) times from order entry to protocol assignment for automatically and manually protocoled examinations for emergency department examinations were 0.2 ± 18.2 and 2.1 ± 69.7 hours, respectively; mean inpatient examination times were 0.5 ± 50.0 and 3.5 ± 105.5 hours; and mean outpatient examination times were 361.7 ± 1165.5 and 1289.9 ± 2050.9 hours (all p < .001). 
Mean (± SD) times from order entry to examination completion for automatically and manually protocoled examinations for emergency department examinations were 2.6 ± 38.6 and 4.2 ± 73.0 hours, respectively (p < .001); for inpatient examinations were 6.3 ± 74.6 and 8.7 ± 109.3 hours (p = .001); and for outpatient examinations were 1367.2 ± 1795.8 and 1471.8 ± 2118.3 hours (p < .001). In the three phases, there were three, 19, and 25 QI reports and zero, one, and three recalls, respectively, for automatically protocoled examinations, versus nine, 19, and five QI reports and one, seven, and zero recalls for manually protocoled examinations. Retrospectively assigned protocols were concordant with 212/214 (99.1%) of automatically protocoled versus 233/236 (98.7%) of manually protocoled examinations. CONCLUSION. The automated protocoling system substantially reduced radiologists' protocoling workload and decreased times from order entry to protocol assignment and examination completion; protocol errors and recalls were infrequent. CLINICAL IMPACT. The system represents a solution for reducing radiologists' time spent performing noninterpretive tasks and improving care efficiency.
Affiliation(s)
- Ryan Chung: Department of Radiology, Division of Abdominal Imaging, Massachusetts General Hospital, 55 Fruit St, White 270, Boston, MA 02114
- John P Demers: Department of Radiology, Massachusetts General Hospital, Boston, MA
- Roberta Tiberio: Department of Radiology, Massachusetts General Hospital, Boston, MA
- Cristy A Savage: Department of Radiology, CT Operations, Massachusetts General Hospital, Boston, MA
- Frederick McNulty: Department of Radiology, CT Operations, Massachusetts General Hospital, Boston, MA
- Markus Stout: Department of Radiology, Massachusetts General Hospital, Boston, MA
- Avinash Kambadakone: Department of Radiology, Division of Abdominal Imaging, Massachusetts General Hospital, 55 Fruit St, White 270, Boston, MA 02114
- Matthew D Gilman: Department of Radiology, Division of Thoracic Imaging and Intervention, Massachusetts General Hospital, Boston, MA
- Amita Sharma: Department of Radiology, Division of Thoracic Imaging and Intervention, Massachusetts General Hospital, Boston, MA
- Tarik K Alkasab: Department of Radiology, Division of Emergency Imaging, Massachusetts General Hospital, Boston, MA
6
Polzer C, Yilmaz E, Meyer C, Jang H, Jansen O, Lorenz C, Bürger C, Glüer CC, Sedaghat S. AI-based automated detection and stability analysis of traumatic vertebral body fractures on computed tomography. Eur J Radiol 2024;173:111364. PMID: 38364589. doi:10.1016/j.ejrad.2024.111364
Abstract
PURPOSE We developed and tested a neural network for automated detection and stability analysis of vertebral body fractures on computed tomography (CT). MATERIALS AND METHODS 257 patients who underwent CT were included in this Institutional Review Board (IRB) approved study. 463 fractured and 1883 non-fractured vertebral bodies were included; 190 of the fractures were unstable. Two readers identified vertebral body fractures and assessed their stability. A combination of a hierarchical convolutional neural network (hNet) and a fracture classification network (fNet) was used to build a neural network for the automated detection and stability analysis of vertebral body fractures on CT. Two final test settings were chosen: one in which vertebral body levels C1/2 were included and one in which they were excluded. RESULTS The mean age of the patients was 68 ± 14 years. 140 patients were female. The network showed a slightly higher diagnostic performance when C1/2 was excluded. In this setting, the network distinguished fractured and non-fractured vertebral bodies with a sensitivity of 75.8 % and a specificity of 80.3 %. Additionally, the network determined the stability of the vertebral bodies with a sensitivity of 88.4 % and a specificity of 80.3 %. The AUC was 87 % and 91 % for fracture detection and stability analysis, respectively. The sensitivity of our network in indicating the presence of at least one fracture / one unstable fracture within the whole spine reached 78.7 % and 97.2 %, respectively, when C1/2 was excluded. CONCLUSION The developed neural network can automatically detect vertebral body fractures and evaluate their stability concurrently with a high diagnostic performance.
Affiliation(s)
- Constanze Polzer: Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Eren Yilmaz: Section Biomedical Imaging, Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany; Department of Computer Science, Ostfalia University of Applied Sciences, Wolfenbüttel, Germany
- Carsten Meyer: Department of Computer Science, Ostfalia University of Applied Sciences, Wolfenbüttel, Germany; Department of Computer Science, Faculty of Engineering, Kiel University, Kiel, Germany
- Hyungseok Jang: Department of Radiology, University of California San Diego, San Diego, USA
- Olav Jansen: Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Claus-Christian Glüer: Section Biomedical Imaging, Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Sam Sedaghat: Department of Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg, Germany
7
Velleman T, Hein S, Dierckx RAJO, Noordzij W, Kwee TC. Reading room assistants to reduce workload and interruptions of radiology residents during on-call hours: Initial evaluation. Eur J Radiol 2024;173:111381. PMID: 38428253. doi:10.1016/j.ejrad.2024.111381
Abstract
PURPOSE To determine how much time saving and reduction of interruptions reading room assistants can provide by taking over non-image interpretation tasks (NITs) from radiology residents during on-call hours. METHODS Reading room assistants are medical students who were trained to take over NITs from radiology residents (e.g. answering telephone calls, administrative tasks and logistics) to reduce residents' workload during on-call hours. Reading room assistants' and residents' activities were tracked during 6 weekend dayshifts in a tertiary care academic center (with more than 2.5 million inhabitants in its catchment area) between 10 a.m. and 5 p.m. (7-hour shift, 420 min), and time spent on each activity was recorded. RESULTS Reading room assistants spent the most time on the following time-saving activities for residents: answering incoming (41 min, 19%) and outgoing telephone calls (35 min, 16%), ultrasound machine related activities (19 min, 9%) and paramedical assistance such as supporting residents during ultrasound-guided procedures and with patients (17 min, 8%). Reading room assistants saved 132 min of residents' time by taking over NITs while also spending circa 31 min consulting the resident, resulting in a net time saving of 101 min (24%) during a 7-hour shift. The reading room assistants also prevented a mean of 18 interruptions of the residents per 7-hour shift. CONCLUSION This study shows that adding reading room assistants during radiology on-call hours can save residents time and reduce the number of interruptions during their work.
Affiliation(s)
- Ton Velleman, Sandra Hein, Rudi A J O Dierckx, Walter Noordzij, Thomas C Kwee: Department of Radiology, Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
8
Lysø EH, Hesjedal MB, Skolbekken JA, Solbjør M. Men's sociotechnical imaginaries of artificial intelligence for prostate cancer diagnostics - A focus group study. Soc Sci Med 2024;347:116771. PMID: 38537333. doi:10.1016/j.socscimed.2024.116771
Abstract
Artificial intelligence (AI) is increasingly used for diagnostic purposes in cancer care. Prostate cancer is one of the most prevalent cancers affecting men worldwide, but current diagnostic approaches have limitations in terms of specificity and sensitivity. Using AI to interpret MR images in prostate cancer diagnostics shows promising results, but raises questions about implementation, user acceptance, trust, and doctor-patient communication. Drawing on approaches from the sociology of expectations and theories about sociotechnical imaginaries, we explore men's expectations of artificial intelligence for prostate cancer diagnostics. We conducted ten focus groups with 48 men aged 54-85 in Norway with various experiences of prostate cancer diagnostics. Five groups of men had been treated for prostate cancer, one group was on active surveillance, two groups had been through prostate cancer diagnostics without receiving a diagnosis, and two groups of men had no experience with prostate cancer diagnostics or treatment. Data were subjected to reflexive thematic analysis. Our analysis suggests that men's expectations of AI for prostate cancer diagnostics come from two perspectives: technology-centered expectations that build on their conceptions of AI's form and agency, and human-centered expectations of AI that build on their perceptions of patient-professional relationships and decision-making processes. These two perspectives are intertwined in three imaginaries of AI: the tool imaginary, the advanced machine imaginary, and the intelligence imaginary - each carrying distinct expectations and ideas of technologies and humans' role in decision-making processes.
These expectations are multifaceted and simultaneously optimistic and pessimistic; while AI is expected to improve the accuracy of cancer diagnoses and facilitate more personalized medicine, AI is also expected to threaten interpersonal and communicational relationships between patients and healthcare professionals, and the maintenance of trust in these relationships. This emphasizes how AI cannot be implemented without caution about maintaining human healthcare relationships.
Affiliation(s)
- Emilie Hybertsen Lysø, Maria Bårdsen Hesjedal, John-Arne Skolbekken, Marit Solbjør: Norwegian University of Science and Technology, Department of Public Health and Nursing, Håkon Jarls gate 11, 7030 Trondheim, Norway
9
Ko CH, Chien LN, Chiu YT, Hsu HH, Wong HF, Chan WP. Demands for medical imaging and workforce size: A nationwide population-based study, 2000-2020. Eur J Radiol 2024;172:111330. PMID: 38290203. doi:10.1016/j.ejrad.2024.111330
Abstract
PURPOSE The aim of this study was to investigate associations between workforce and workload among radiologists in Taiwan. MATERIALS AND METHODS Data for the period 2000-2020 describing the demand for imaging services and radiologists were obtained from databases and statistical reports of the Ministry of Health and Welfare. The future demand for radiologists was projected based on the Taiwanese population aged 40 and over. RESULTS The workforce of Taiwan's radiologists has grown by 6 % annually over the past 20 years (from 450 to 993), with radiologists performing 2125, 3202 and 3620 monthly examinations (mainly conventional radiography and CT) in medical centers, regional hospitals and district hospitals, respectively. Between 2000 and 2020, the use of CT and MRI increased by more than 3.5 times. Demand for interventional radiology also increased, by 1.77, 2.25 and 5 times in these three hospital types, respectively. To maintain this volume of services in 2040, at least 1168 radiologists will be needed, about 1.18 times the number in 2020. CONCLUSION Taiwan has 2.4 to 2.9 times fewer radiologists than the United States and 3 times fewer than Europe, while the annual workload is approximately 2 to 3.4 times greater than that of the United States and 1.4 to 2.5 times greater than that of the United Kingdom. This report may serve as a reference for policy makers addressing the challenges of the growing workload among radiologists in countries in similar situations.
Affiliation(s)
- Chih-Hsiang Ko: Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan; Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei 11031, Taiwan
- Li-Nien Chien: Institute of Health and Welfare Policy, National Yang Ming Chiao Tung University, Taipei City 11221, Taiwan
- Yu-Ting Chiu: School of Health Care Administration, College of Management, Taipei Medical University, New Taipei City 235, Taiwan
- Hsian-He Hsu: Department of Radiology, Tri-Service General Hospital and National Defense Medical Center, Taipei 11490, Taiwan
- Ho-Fai Wong: Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital, Chang Gung University, Taoyuan 333423, Taiwan
- Wing P Chan: Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan; Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei 11031, Taiwan
Collapse
|
10
|
Toxopeus R, Kasalak Ö, Yakar D, Noordzij W, Dierckx RAJO, Kwee TC. Is work overload associated with diagnostic errors on 18F-FDG-PET/CT? Eur J Nucl Med Mol Imaging 2024; 51:1079-1084. [PMID: 38030745] [DOI: 10.1007/s00259-023-06543-3]
Abstract
PURPOSE To determine the association between workload and diagnostic errors on 18F-FDG-PET/CT. MATERIALS AND METHODS This study included 103 18F-FDG-PET/CT scans with a diagnostic error that was corrected with an addendum between March 2018 and July 2023. All scans were performed at a tertiary care center. The workload of each nuclear medicine physician or radiologist who authorized the 18F-FDG-PET/CT report was determined on the day the diagnostic error was made and normalized for his or her own average daily production (workload_normalized). A workload_normalized of more than 100% indicates that the nuclear medicine physician or radiologist had a relative work overload, while a value of less than 100% indicates a relative work underload on the day the diagnostic error was made. The time of day the diagnostic error was made was also recorded. Workload_normalized was compared to 100% using a signed rank sum test, with the hypothesis that it would significantly exceed 100%. A Mann-Kendall test was performed to test the hypothesis that diagnostic errors would increase over the course of the day. RESULTS Workload_normalized (median of 121%, interquartile range: 71 to 146%) on the days the diagnostic errors were made was significantly higher than 100% (P = 0.014). There was no significant upward trend in the frequency of diagnostic errors over the course of the day (Mann-Kendall tau = 0.05, P = 0.7294). CONCLUSION Work overload seems to be associated with diagnostic errors on 18F-FDG-PET/CT. Diagnostic errors were encountered throughout the entire working day, without any upward trend towards the end of the day.
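The normalization this abstract describes (a reader's daily output relative to their own average) and the Mann-Kendall trend statistic are easy to sketch. The functions and numbers below are illustrative, not the study's data or code:

```python
from statistics import mean

def workload_normalized(reports_on_error_day, daily_report_counts):
    """Day's report count as a percentage of the reader's own average daily
    production; values above 100% indicate a relative work overload."""
    return 100.0 * reports_on_error_day / mean(daily_report_counts)

def mann_kendall_s(series):
    """Mann-Kendall S statistic: concordant minus discordant ordered pairs.
    S > 0 suggests an upward trend across the ordered observations."""
    s = 0
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

# Hypothetical reader averaging 20 reports/day who signed off 25 on the error day:
print(workload_normalized(25, [18, 20, 22, 20]))  # 125.0 -> relative overload
# Hourly error counts over a working day, no clear monotonic trend:
print(mann_kendall_s([2, 1, 3, 2, 2]))
```

In practice the study used a signed rank sum test and reported Mann-Kendall tau; a statistics library would be used for the p-values.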
Affiliation(s)
- Romy Toxopeus, Ömer Kasalak, Derya Yakar, Walter Noordzij, Rudi A J O Dierckx, Thomas C Kwee: Medical Imaging Center, Departments of Radiology and Nuclear Medicine, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands

11
Achangwa NR, Nierobisch N, Ludovichetti R, Negrão de Figueiredo G, Kupka M, De Vere-Tyndall A, Frauenfelder T, Kulcsar Z, Hainc N. Sustainable reduction of phone-call interruptions by 35% in a medical imaging department using an automatic voicemail and custom call redirection system. Curr Probl Diagn Radiol 2024; 53:246-251. [PMID: 38290903] [DOI: 10.1067/j.cpradiol.2024.01.004]
Abstract
BACKGROUND Have you ever been in the trenches of a complicated study only to be interrupted by a not-so-urgent phone call? We were, repeatedly, unfortunately. PURPOSE To increase the productivity of radiologists by quantifying the main source of interruptions (phone calls) to their workflow, and to assess the implemented solution. MATERIALS AND METHODS To filter calls to the radiology consultant on duty, we introduced an automatic voicemail and custom call redirection system. Instead of directly speaking with radiology consultants, clinicians were to first categorize their request and dial accordingly: 1. inpatient requests, 2. outpatient requests, 3. directly speak with the consultant radiologist. Inpatient requests (1) and outpatient requests (2) were forwarded to MRI technologists or clerks, respectively. Calls were monitored in 15-minute increments continuously for an entire year (March 2022 until and including March 2023). Subsequently, both the frequency and category of requests were assessed. RESULTS 4803 calls were recorded in total: 3122 (65%) were forwarded to a radiologist on duty; 870 (18.11%) concerned inpatients, 274 (5.70%) outpatients, 430 (8.95%) dialed the wrong number, and 107 (2.23%) made no decision. Throughout the entire year, the percentage of successfully avoided interruptions was relatively stable and fluctuated within the low-to-high 30% range (mean per month: 35%; median per month: 34.45%). CONCLUSIONS This is the first analysis of phone-call interruptions to consultant radiologists in an imaging department over 12 continuous months. More than 35% of requests did not require the input of a specialist-trained radiologist. Hence, installing an automated voicemail and custom call redirection system is a sustainable and simple solution to reduce phone-call interruptions by on average 35% in radiology departments. This solution was well accepted by referring clinicians. The installation required a one-time investment of only 2 hours and incurred no monetary cost.
Affiliation(s)
- Ngwe Rawlings Achangwa: Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland; Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Switzerland
- Nathalie Nierobisch, Riccardo Ludovichetti, Giovanna Negrão de Figueiredo, Michael Kupka, Anthony De Vere-Tyndall, Zsolt Kulcsar, Nicolin Hainc: Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
- Thomas Frauenfelder: Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Switzerland

12
Kwee TC, Kasalak Ö, Yakar D. Radiologist-patient communication of musculoskeletal ultrasonography results: a choice between added value and costs. Acta Radiol 2024; 65:267-272. [PMID: 34617452] [DOI: 10.1177/02841851211044974]
Abstract
BACKGROUND Literature on radiologist-patient communication of musculoskeletal ultrasonography (US) results is currently lacking. PURPOSE To investigate the patient's view on receiving the results from a radiologist after a musculoskeletal US examination, and the additional time required to provide such a service. MATERIAL AND METHODS This prospective study included 106 outpatients who underwent musculoskeletal US, and who were equally randomized to either receive or not receive the results from the radiologist directly after the examination. RESULTS In both randomization groups, all quality performance metrics (radiologist's friendliness, explanation, skill, concern for comfort, concern for patient questions/worries, overall rating of the examination, and likelihood of recommending the examination) received median scores of good/high to very good/very high. Patients who had received their US results from the radiologist rated the radiologist's explanation and concern for patient questions/worries as significantly higher (P = 0.009 and P = 0.002) than patients who had not. In both randomization groups, there were no significant differences between anxiety levels before and after the US examination (P = 0.222 and P = 1.000). Of the 48 responding patients, 46 (95.8%) rated a radiologist-patient discussion of US findings as important. US examinations with a radiologist-patient communication regarding US findings (median = 11.29 min) were significantly longer (P < 0.0001) than those without (median = 8.08 min). CONCLUSION Even without communicating musculoskeletal US results directly to patients, radiologists can still achieve high ratings from patients for their communication and empathy. Nevertheless, patient experience can be further enhanced if a radiologist adds this communication to the examination. However, this increases total examination time and therefore costs.
Affiliation(s)
- Thomas C Kwee, Ömer Kasalak, Derya Yakar: Medical Imaging Center, Department of Radiology, Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands

13
Guermazi A. AI is indeed helpful but it should always be monitored! Diagn Interv Imaging 2024; 105:83-84. [PMID: 38458733] [DOI: 10.1016/j.diii.2024.02.013]
Affiliation(s)
- Ali Guermazi: Department of Radiology, Boston University School of Medicine, Boston, MA 02118, USA; Department of Radiology, VA Boston Healthcare System, West Roxbury, MA 02132, USA

14
Vanderbecq Q, Gelard M, Pesquet JC, Wagner M, Arrive L, Zins M, Chouzenoux E. Deep learning for automatic bowel-obstruction identification on abdominal CT. Eur Radiol 2024 (epub ahead of print). [PMID: 38388719] [DOI: 10.1007/s00330-024-10657-z]
Abstract
RATIONALE AND OBJECTIVES Automated evaluation of abdominal computed tomography (CT) scans should help radiologists manage their massive workloads, thereby leading to earlier diagnoses and better patient outcomes. Our objective was to develop a machine-learning model capable of reliably identifying suspected bowel obstruction (BO) on abdominal CT. MATERIALS AND METHODS The internal dataset comprised 1345 abdominal CTs obtained in 2015-2022 from 1273 patients with suspected BO; among them, 670 were annotated as BO yes/no by an experienced abdominal radiologist. The external dataset consisted of 88 radiologist-annotated CTs. We developed a full preprocessing pipeline for abdominal CT comprising a model to locate the abdominal-pelvic region and another model to crop the 3D scan around the body. We built, trained, and tested several neural-network architectures for the binary classification (BO, yes/no) of each CT. F1 and balanced accuracy scores were computed to assess model performance. RESULTS The mixed convolutional network pretrained on the Kinetics-400 dataset achieved the best results: with the internal dataset, the F1 score was 0.92, balanced accuracy 0.86, and sensitivity 0.93; with the external dataset, the corresponding values were 0.89, 0.89, and 0.89. When calibrated on sensitivity, this model produced 1.00 sensitivity, 0.84 specificity, and an F1 score of 0.88 with the internal dataset; corresponding values were 0.98, 0.76, and 0.87 with the external dataset. CONCLUSION The 3D mixed convolutional neural network developed here shows great potential for the automated binary classification (BO yes/no) of abdominal CT scans from patients with suspected BO. CLINICAL RELEVANCE STATEMENT The 3D mixed CNN automates bowel obstruction classification, potentially automating patient selection and CT prioritization, leading to an enhanced radiologist workflow. KEY POINTS
- Bowel obstruction's rising incidence strains radiologists; AI can aid urgent CT readings.
- Employed 1345 CT scans and neural networks for bowel obstruction detection, achieving high accuracy and sensitivity on external testing.
- The 3D mixed CNN automates CT reading prioritization effectively and speeds up bowel obstruction diagnosis.
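The F1 and balanced-accuracy figures reported in this abstract can be reproduced from confusion-matrix counts. A minimal pure-Python sketch with made-up counts (not the study's data):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def balanced_accuracy(tp, fp, fn, tn):
    """Mean of sensitivity and specificity; robust to class imbalance."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

# Hypothetical confusion counts for a binary BO yes/no classifier:
tp, fp, fn, tn = 90, 10, 7, 60
print(round(f1_score(tp, fp, fn), 2))               # 0.91
print(round(balanced_accuracy(tp, fp, fn, tn), 2))  # 0.89
```

Balanced accuracy matters here because suspected-BO cohorts are imbalanced: plain accuracy would reward a model that over-predicts the majority class.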
Affiliation(s)
- Quentin Vanderbecq: Department of Radiology, AP-HP.Sorbonne, Saint Antoine Hospital, 184 Rue du Faubourg Saint-Antoine, 75012 Paris, France; UMR 7371, Université Sorbonne, CNRS, Inserm U1146, 15 rue de l'École de Médecine, 75006 Paris, France
- Maxence Gelard, Emilie Chouzenoux: Université Paris-Saclay, CentraleSupélec, Inria, CVN, Gif-sur-Yvette, France
- Mathilde Wagner: UMR 7371, Université Sorbonne, CNRS, Inserm U1146, 15 rue de l'École de Médecine, 75006 Paris, France; Department of Radiology, Hospital Pitié Salpêtrière, 47-83 Bd de l'Hôpital, 75013 Paris, France
- Lionel Arrive: Department of Radiology, AP-HP.Sorbonne, Saint Antoine Hospital, 184 Rue du Faubourg Saint-Antoine, 75012 Paris, France
- Marc Zins: Department of Radiology, Hospital Paris Saint-Joseph, 185 Rue Raymond Losserand, 75014 Paris, France

15
Payne DL, Xu X, Faraji F, John K, Pradas KF, Bernard VV, Bangiyev L, Prasanna P. Automated Detection of Cervical Spinal Stenosis and Cord Compression via Vision Transformer and Rules-Based Classification. AJNR Am J Neuroradiol 2024 (epub ahead of print). [PMID: 38360785] [DOI: 10.3174/ajnr.a8141]
Abstract
BACKGROUND AND PURPOSE Cervical spinal cord compression, defined as spinal cord deformity and severe narrowing of the spinal canal in the cervical region, can lead to severe clinical consequences, including intractable pain, sensory disturbance, paralysis, and even death, and may require emergent intervention to prevent negative outcomes. Despite the critical nature of cord compression, no automated tool is available to alert clinical radiologists to the presence of such findings. This study aims to demonstrate the ability of a vision transformer (ViT) model for the accurate detection of cervical cord compression. MATERIALS AND METHODS A clinically diverse cohort of 142 cervical spine MRIs was identified, 34% of which were normal or had mild stenosis, 31% with moderate stenosis, and 35% with cord compression. Utilizing gradient-echo images, slices were labeled as no cord compression/mild stenosis, moderate stenosis, or severe stenosis/cord compression. Segmentation of the spinal canal was performed and confirmed by neuroradiology faculty. A pretrained ViT model was fine-tuned to predict section-level severity using a train:validation:test split of 60:20:20. Each examination was assigned an overall severity based on the highest level of section severity, with an examination labeled as positive for cord compression if ≥1 section was predicted in the severe category. Additionally, 2 convolutional neural network (CNN) models (ResNet50, DenseNet121) were tested in the same manner. RESULTS The ViT model outperformed both CNN models at the section level, achieving a section-level accuracy of 82%, compared with 72% and 78% for ResNet50 and DenseNet121, respectively. ViT patient-level classification achieved an accuracy of 93%, sensitivity of 0.90, positive predictive value of 0.90, specificity of 0.95, and negative predictive value of 0.95. The receiver operating characteristic area under the curve was greater for the ViT than for either CNN. CONCLUSIONS This classification approach using a ViT model and rules-based classification accurately detects the presence of cervical spinal cord compression at the patient level. In this study, the ViT model outperformed both conventional CNN approaches at the section and patient levels. If implemented in the clinical setting, such a tool may streamline neuroradiology workflow, improving efficiency and consistency.
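The rules-based aggregation this abstract describes (exam severity = worst section; positive if any section is predicted severe) is simple to express. The label names below are illustrative, not the authors' code:

```python
# Ordinal ranking of the three section-level classes described in the abstract.
SEVERITY = {"none_or_mild": 0, "moderate": 1, "severe": 2}

def exam_level_call(section_predictions):
    """Collapse per-section predictions into an examination-level result:
    overall severity is the worst section, and the exam is positive for
    cord compression if at least one section is predicted severe."""
    worst = max(section_predictions, key=SEVERITY.get)
    positive = any(s == "severe" for s in section_predictions)
    return worst, positive

print(exam_level_call(["none_or_mild", "moderate", "severe", "moderate"]))  # ('severe', True)
print(exam_level_call(["none_or_mild", "moderate"]))                        # ('moderate', False)
```

This max-over-sections rule makes the exam-level call deliberately sensitive: a single severe section flags the whole study for prioritized review.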
Affiliation(s)
- David L Payne, Farshid Faraji, Kevin John: Department of Radiology, Stony Brook University Hospital, Stony Brook, New York; Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York
- Xuan Xu, Prateek Prasanna: Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York
- Katherine Ferra Pradas, Vahni Vishala Bernard, Lev Bangiyev: Department of Radiology, Stony Brook University Hospital, Stony Brook, New York

16
Hanneman K, Playford D, Dey D, van Assen M, Mastrodicasa D, Cook TS, Gichoya JW, Williamson EE, Rubin GD. Value Creation Through Artificial Intelligence and Cardiovascular Imaging: A Scientific Statement From the American Heart Association. Circulation 2024; 149:e296-e311. [PMID: 38193315] [DOI: 10.1161/cir.0000000000001202]
Abstract
Multiple applications for machine learning and artificial intelligence (AI) in cardiovascular imaging are being proposed and developed. However, the processes involved in implementing AI in cardiovascular imaging are highly diverse, varying by imaging modality, patient subtype, features to be extracted and analyzed, and clinical application. This article establishes a framework that defines value from an organizational perspective, followed by value chain analysis to identify the activities in which AI might produce the greatest incremental value creation. The various perspectives that should be considered are highlighted, including clinicians, imagers, hospitals, patients, and payers. Integrating the perspectives of all health care stakeholders is critical for creating value and ensuring the successful deployment of AI tools in a real-world setting. Different AI tools are summarized, along with the unique aspects of AI applications to various cardiac imaging modalities, including cardiac computed tomography, magnetic resonance imaging, and positron emission tomography. AI is applicable and has the potential to add value to cardiovascular imaging at every step along the patient journey, from selecting the more appropriate test to optimizing image acquisition and analysis, interpreting the results for classification and diagnosis, and predicting the risk for major adverse cardiac events.
17
Tang SM, Durieux JC, Faraji N, Mohamed I, Wien M, Nayate AP. "Are They Listening, and Do They Find It Useful?" - Evaluation of Mid-Rotation Formative Subjective and Objective Feedback to Radiology Trainees. Curr Probl Diagn Radiol 2024; 53:114-120. [PMID: 37690968] [DOI: 10.1067/j.cpradiol.2023.08.006]
Abstract
BACKGROUND Residents commonly receive only end-of-rotation evaluations and thus are often unaware of their progress during a rotation. In 2021, our neuroradiology section instituted mid-rotation feedback in which rotating residents received formative subjective and objective feedback. The purpose of this study was to describe our feedback method and to evaluate whether residents found it helpful. METHODS Radiology residents rotate 3-4 times on the neuroradiology service in 1-month blocks. At the midpoint of the rotation (2 weeks), 7-10 neuroradiology attendings discussed the rotating residents' subjective performance. One attending was tasked with facilitating this discussion and taking notes. Objective metrics were obtained from our dictation software. Compiled feedback was relayed to residents via email. A 16-question anonymous survey was sent to 39 radiology residents (R1-R4) to evaluate the perceived value of mid-rotation feedback. Odds ratios and 95% confidence intervals were computed using logistic regression. RESULTS Sixty-nine percent (27/39) of residents responded to the survey; 92.6% (25/27) reported receiving mid-rotation feedback in ≥50% of neuroradiology rotations; 92.3% (24/26) found the subjective feedback helpful; 88.4% (23/26) reported modifying their performance as suggested (100% R1-R2 vs 70% R3-R4; OR: 15.4; 95% CI: 1.26 to >30.0); 59.1% (13/22) found the objective metrics helpful (75% R1-R2 vs 40% R3-R4; OR: 3.92; 95% CI: 0.74-24.39) and 68.2% (15/22) stated they modified their performance based on these metrics (83.3% R1-R2 vs 50.0% R3-R4; OR: 4.2; 95% CI: 0.73-30.55); and 84.6% (22/26) stated that mid-rotation subjective feedback, and 45.5% (10/22) that mid-rotation objective feedback, should be implemented in other sections. CONCLUSIONS The majority of residents found mid-rotation feedback helpful in informing them about their progress and areas for improvement in the neuroradiology rotation, more so for subjective than for objective feedback. The majority of residents stated that all rotations should provide mid-rotation subjective feedback.
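For intuition about the odds ratios reported in this abstract: a crude OR can be computed directly from a 2x2 table, although the study's ORs came from logistic regression, so the sketch below (with hypothetical counts, not the survey's raw data) only approximates that approach:

```python
def crude_odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = group 1 with outcome, b = group 1 without,
    c = group 2 with outcome, d = group 2 without."""
    return (a * d) / (b * c)

# Hypothetical counts: 9/12 junior vs 4/10 senior residents found a metric helpful.
print(crude_odds_ratio(9, 3, 4, 6))  # 4.5
```

When one cell is zero (e.g. the 100% vs 70% comparison), the crude OR is undefined; logistic regression with a continuity correction or exact methods handles that case, which is why the reported interval has an open upper bound.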
Affiliation(s)
- Stephen M Tang: Case Western Reserve University School of Medicine, Cleveland, OH
- Jared C Durieux, Navid Faraji, Inas Mohamed, Michael Wien, Ameya P Nayate: University Hospitals Cleveland Medical Center, Cleveland, OH

18
Hua D, Petrina N, Young N, Cho JG, Poon SK. Understanding the factors influencing acceptability of AI in medical imaging domains among healthcare professionals: A scoping review. Artif Intell Med 2024; 147:102698. [PMID: 38184343] [DOI: 10.1016/j.artmed.2023.102698]
Abstract
BACKGROUND Artificial intelligence (AI) technology has the potential to transform medical practice within the medical imaging industry and materially improve productivity and patient outcomes. However, low acceptability of AI as a digital healthcare intervention among medical professionals threatens to undermine user uptake levels, hinder meaningful and optimal value-added engagement, and ultimately prevent these promising benefits from being realised. Understanding the factors underpinning AI acceptability will be vital for medical institutions to pinpoint areas of deficiency and improvement within their AI implementation strategies. This scoping review aims to survey the literature to provide a comprehensive summary of the key factors influencing AI acceptability among healthcare professionals in medical imaging domains and the different approaches which have been taken to investigate them. METHODS A systematic literature search was performed across five academic databases including Medline, Cochrane Library, Web of Science, Compendex, and Scopus from January 2013 to September 2023. This was done in adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. Overall, 31 articles were deemed appropriate for inclusion in the scoping review. RESULTS The literature has converged towards three overarching categories of factors underpinning AI acceptability including: user factors involving trust, system understanding, AI literacy, and technology receptiveness; system usage factors entailing value proposition, self-efficacy, burden, and workflow integration; and socio-organisational-cultural factors encompassing social influence, organisational readiness, ethicality, and perceived threat to professional identity. 
Yet, numerous studies have overlooked a meaningful subset of these factors that are integral to the use of medical AI systems such as the impact on clinical workflow practices, trust based on perceived risk and safety, and compatibility with the norms of medical professions. This is attributable to reliance on theoretical frameworks or ad-hoc approaches which do not explicitly account for healthcare-specific factors, the novelties of AI as software as a medical device (SaMD), and the nuances of human-AI interaction from the perspective of medical professionals rather than lay consumer or business end users. CONCLUSION This is the first scoping review to survey the health informatics literature around the key factors influencing the acceptability of AI as a digital healthcare intervention in medical imaging contexts. The factors identified in this review suggest that existing theoretical frameworks used to study AI acceptability need to be modified to better capture the nuances of AI deployment in healthcare contexts where the user is a healthcare professional influenced by expert knowledge and disciplinary norms. Increasing AI acceptability among medical professionals will critically require designing human-centred AI systems which go beyond high algorithmic performance to consider accessibility to users with varying degrees of AI literacy, clinical workflow practices, the institutional and deployment context, and the cultural, ethical, and safety norms of healthcare professions. As investment into AI for healthcare increases, it would be valuable to conduct a systematic review and meta-analysis of the causal contribution of these factors to achieving high levels of AI acceptability among medical professionals.
Affiliation(s)
- David Hua: School of Computer Science, The University of Sydney, Australia; Sydney Law School, The University of Sydney, Australia
- Neysa Petrina: School of Computer Science, The University of Sydney, Australia
- Noel Young: Sydney Medical School, The University of Sydney, Australia; Lumus Imaging, Australia
- Jin-Gun Cho: Sydney Medical School, The University of Sydney, Australia; Western Sydney Local Health District, Australia; Lumus Imaging, Australia
- Simon K Poon: School of Computer Science, The University of Sydney, Australia; Western Sydney Local Health District, Australia

19
Burnazovic E, Yee A, Levy J, Gore G, Abbasgholizadeh Rahimi S. Application of artificial intelligence in COVID-19-related geriatric care: A scoping review. Arch Gerontol Geriatr 2024; 116:105129. [PMID: 37542917] [DOI: 10.1016/j.archger.2023.105129]
Abstract
BACKGROUND Older adults have been disproportionately affected by the COVID-19 pandemic. This scoping review aimed to summarize the current evidence on artificial intelligence (AI) use in the screening/monitoring, diagnosis, and/or treatment of COVID-19 among older adults. METHOD The review followed the Joanna Briggs Institute and Arksey and O'Malley frameworks. An information specialist performed a comprehensive search from the date of inception until May 2021 in six bibliographic databases. The selected studies considered all populations and all AI interventions that had been used in COVID-19-related geriatric care. We focused on patient, healthcare provider, and healthcare system-related outcomes. The studies were restricted to peer-reviewed English publications. Two authors independently screened the titles and abstracts of the identified records, read the selected full texts, and extracted data from the included studies using a validated data extraction form. Disagreements were resolved by consensus, and if this was not possible, the opinion of a third reviewer was sought. RESULTS Six databases were searched, yielding 3,228 articles, of which 10 were included. The majority of articles used a single AI model to assess the association between patients' comorbidities and COVID-19 outcomes. Articles were mainly conducted in high-income countries, with limited representation of females among study participants and insufficient reporting of participants' race and ethnicity. DISCUSSION This review highlighted how the COVID-19 pandemic has accelerated the application of AI to protect older populations, with most interventions in the pilot-testing stage. Further work is required to measure the effectiveness of these technologies at a larger scale, use more representative datasets for training AI models, and expand AI applications to low-income countries.
Affiliation(s)
- Emina Burnazovic
- Integrated Biomedical Engineering and Health Sciences, Department of Computing and Software, Faculty of Engineering, McMaster University, Hamilton, ON, Canada
- Amanda Yee
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Joshua Levy
- Department of Pharmacology and Therapeutics, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Genevieve Gore
- Schulich Library of Physical Sciences, Life Sciences and Engineering, McGill University, Montreal, QC, Canada
- Samira Abbasgholizadeh Rahimi
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, QC, Canada; Mila-Quebec Artificial Intelligence Institute, Montreal, QC, Canada; Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, QC, Canada
20
Clancy PW, Tulenko K, Rizvi T. An Innovative Medical Student Neuroradiology Elective Course: Active Learning Through a Case-Based Approach. Acad Radiol 2024; 31:322-328. [PMID: 37973514] [DOI: 10.1016/j.acra.2023.09.039]
Abstract
RATIONALE AND OBJECTIVES Traditional radiology education in clerkships is focused on observational and passive learning from radiology faculty. The aim of this study was to validate a new case-based radiology course that challenges medical students to independently scroll through picture archival and communication system cases, thereby actively learning and improving their understanding of radiology. MATERIALS AND METHODS This study used PowerPoint files to present and review various brain, spine, and head and neck clinical cases, simulating the real-time case review process performed by radiologists. Students were tested with an online quiz based on the cases both before and after the review. Quizzes were distributed and responses collected at both time points via a Google Form. Students had access to correct answers and feedback after the post-case quiz. A radiologist was available for an hour of individualized, committed teaching time to answer student questions after the post-case quiz. After the elective, there was an option to provide both quantitative and qualitative feedback. RESULTS All 54 students who took part in this independent case-based program indicated satisfaction and improvement in their understanding of neuroradiology. Post-case quiz scores demonstrated objective improvement in understanding. CONCLUSION This program represents a viable, supplementary approach to traditional radiology education that should be considered for future use and duplication at other institutions.
Affiliation(s)
- Paul W Clancy
- School of Medicine, University of Virginia Health System, Charlottesville, Virginia, USA
- Kassandra Tulenko
- School of Medicine, University of Virginia Health System, Charlottesville, Virginia, USA
- Tanvir Rizvi
- Department of Radiology and Medical Imaging, University of Virginia Health System, Charlottesville, Virginia, USA
21
Chen Z, Yu Y, Liu S, Du W, Hu L, Wang C, Li J, Liu J, Zhang W, Peng X. A deep learning and radiomics fusion model based on contrast-enhanced computer tomography improves preoperative identification of cervical lymph node metastasis of oral squamous cell carcinoma. Clin Oral Investig 2023; 28:39. [PMID: 38151672] [DOI: 10.1007/s00784-023-05423-2]
Abstract
OBJECTIVES In this study, we constructed and validated models based on deep learning and radiomics to facilitate preoperative diagnosis of cervical lymph node metastasis (LNM) in oral squamous cell carcinoma (OSCC) using contrast-enhanced computed tomography (CECT). MATERIALS AND METHODS CECT scans of 100 patients with OSCC (217 metastatic and 1973 non-metastatic cervical lymph nodes; development set, 76 patients; internally independent test set, 24 patients) who received treatment at the Peking University School and Hospital of Stomatology between 2012 and 2016 were retrospectively collected. Clinical diagnoses and pathological findings were used to establish the gold standard for metastatic cervical LNs. A reader study with two clinicians was also performed to evaluate lymph node status in the test set. The performance of the proposed models and the clinicians was evaluated and compared using the area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE). RESULTS A fusion model combining deep learning with radiomics showed the best performance (ACC, 89.2%; SEN, 92.0%; SPE, 88.9%; and AUC, 0.950 [95% confidence interval: 0.908-0.993, P < 0.001]) in the test set. In comparison with the clinicians, the fusion model showed higher sensitivity (92.0 vs. 72.0% and 60.0%) but lower specificity (88.9 vs. 97.5% and 98.8%). CONCLUSION A fusion model combining radiomics and deep learning approaches outperformed single-technique models and showed great potential to accurately predict cervical LNM in patients with OSCC. CLINICAL RELEVANCE The fusion model can complement clinicians' preoperative identification of LNM in OSCC.
Affiliation(s)
- Zhen Chen, Yao Yu, Shuo Liu, Wen Du, Leihao Hu, Congwei Wang, Jiaqi Li, Wenbo Zhang, Xin Peng
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Jianbo Liu
- Huafang Hanying Medical Technology Co., Ltd, No.19, West Bridge Road, Miyun District, Beijing, 101520, People's Republic of China
22
Rašić M, Tropčić M, Karlović P, Gabrić D, Subašić M, Knežević P. Detection and Segmentation of Radiolucent Lesions in the Lower Jaw on Panoramic Radiographs Using Deep Neural Networks. Medicina (Kaunas) 2023; 59:2138. [PMID: 38138241] [PMCID: PMC10744511] [DOI: 10.3390/medicina59122138]
Abstract
Background and Objectives: The purpose of this study was to develop and evaluate a deep learning model capable of autonomously detecting and segmenting radiolucent lesions in the lower jaw using You Only Look Once (YOLO) v8. Materials and Methods: This study involved the analysis of 226 lesions present in panoramic radiographs captured between 2013 and 2023 at the Clinical Hospital Dubrava and the School of Dental Medicine, University of Zagreb. Panoramic radiographs included radiolucent lesions such as radicular cysts, ameloblastomas, odontogenic keratocysts (OKC), dentigerous cysts and residual cysts. To enhance the database, we applied augmentation techniques such as translation, scaling, rotation, horizontal flipping and mosaic effects. We employed a deep neural network for the detection and segmentation objectives, and conducted five-fold cross-validation to improve the model's generalization capability. The model's performance was assessed through metrics including Intersection over Union (IoU), precision, recall and mean average precision (mAP)@50 and mAP@50-95. Results: In the detection task, the precision, recall, mAP@50 and mAP@50-95 scores without augmentation were 91.8%, 57.1%, 75.8% and 47.3%, respectively, while with augmentation they were 95.2%, 94.4%, 97.5% and 68.7%. Similarly, in the segmentation task, the precision, recall, mAP@50 and mAP@50-95 values without augmentation were 76%, 75.5%, 75.1% and 48.3%, respectively; augmentation improved these scores to 100%, 94.5%, 96.6% and 72.2%. Conclusions: Our study confirmed that the model developed using the advanced YOLOv8 has the remarkable capability to automatically detect and segment radiolucent lesions in the mandible. With its continual evolution and integration into various medical fields, deep learning holds the potential to revolutionize patient care.
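The Intersection over Union (IoU) metric evaluated above measures overlap between a predicted and a ground-truth region. A minimal sketch for axis-aligned bounding boxes (illustrative only, not code from the study; the `iou` function name and coordinates are assumptions):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

For example, two 2x2 boxes offset by one unit overlap in a 1x1 square, giving IoU = 1/7; detection metrics such as mAP@50 count a prediction as correct when its IoU with a ground-truth box exceeds 0.5.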
Affiliation(s)
- Mario Rašić
- Clinic for Tumors, Clinical Hospital Center “Sisters of Mercy”, Ilica 197, 10000 Zagreb, Croatia
- Mario Tropčić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Unska Ulica 3, 10000 Zagreb, Croatia
- Pjetra Karlović
- Department of Maxillofacial and Oral Surgery, Dubrava University Hospital, Avenija Gojka Šuška 6, 10000 Zagreb, Croatia
- Dragana Gabrić
- Department of Oral Surgery, School of Dental Medicine, University of Zagreb, Gundulićeva 5, 10000 Zagreb, Croatia
- Marko Subašić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Unska Ulica 3, 10000 Zagreb, Croatia
- Predrag Knežević
- Department of Maxillofacial and Oral Surgery, Dubrava University Hospital, Avenija Gojka Šuška 6, 10000 Zagreb, Croatia
23
Nicolaes J, Skjødt MK, Raeymaeckers S, Smith CD, Abrahamsen B, Fuerst T, Debois M, Vandermeulen D, Libanati C. Towards Improved Identification of Vertebral Fractures in Routine Computed Tomography (CT) Scans: Development and External Validation of a Machine Learning Algorithm. J Bone Miner Res 2023; 38:1856-1866. [PMID: 37747147] [DOI: 10.1002/jbmr.4916]
Abstract
Vertebral fractures (VFs) are the hallmark of osteoporosis, being one of the most frequent types of fragility fracture and an early sign of the disease. They are associated with significant morbidity and mortality. VFs are incidentally found in one out of five imaging studies; however, more than half of VFs are not identified or reported in patient computed tomography (CT) scans. Our study aimed to develop a machine learning algorithm to identify VFs in abdominal/chest CT scans and evaluate its performance. We acquired two independent data sets of routine abdominal/chest CT scans of patients aged 50 years or older: a training set of 1011 scans from a non-interventional, prospective proof-of-concept study at the Universitair Ziekenhuis (UZ) Brussel and a validation set of 2000 subjects from an observational cohort study at the Hospital of Holbaek. Both data sets were externally reevaluated to establish reference standard VF readings using the Genant semiquantitative (SQ) grading. Four independent models were trained in a cross-validation experiment using the training set, and an ensemble of the four models was applied to the external validation set. The validation set contained 15.3% scans with one or more VF (SQ2-3), whereas 663 of 24,930 evaluable vertebrae (2.7%) were fractured (SQ2-3) per the reference standard readings. Comparison of the ensemble model with the reference standard readings in identifying subjects with one or more moderate or severe VF resulted in an area under the receiver operating characteristic curve (AUROC) of 0.88 (95% confidence interval [CI], 0.85-0.90), accuracy of 0.92 (95% CI, 0.91-0.93), kappa of 0.72 (95% CI, 0.67-0.76), sensitivity of 0.81 (95% CI, 0.76-0.85), and specificity of 0.95 (95% CI, 0.93-0.96). We demonstrated that a machine learning algorithm trained for VF detection achieved strong performance on an external validation set.
It has the potential to support healthcare professionals with the early identification of VFs and prevention of future fragility fractures. © 2023 UCB S.A. and The Authors. Journal of Bone and Mineral Research published by Wiley Periodicals LLC on behalf of American Society for Bone and Mineral Research (ASBMR).
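The kappa statistic reported in the abstract above corrects observed agreement between the algorithm and the reference standard for agreement expected by chance. A minimal generic sketch of Cohen's kappa from a 2x2 table (the counts in the example are illustrative assumptions, not the study's data):

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for two binary raters summarized in a 2x2 table."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement
    # Chance agreement: probability both raters say "yes" plus both say "no".
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)
```

With illustrative counts of 40/10/10/40, observed agreement is 0.8 and chance agreement 0.5, giving kappa = 0.6; a value of 0.72, as reported above, indicates substantial agreement beyond chance.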
Affiliation(s)
- Joeri Nicolaes
- Department of Electrical Engineering (ESAT), Center for Processing Speech and Images, KU Leuven, Leuven, Belgium
- UCB Pharma, Brussels, Belgium
- Michael Kriegbaum Skjødt
- Department of Medicine, Hospital of Holbaek, Holbaek, Denmark
- OPEN-Open Patient Data Explorative Network, Department of Clinical Research, University of Southern Denmark and Odense University Hospital, Odense, Denmark
- Christopher Dyer Smith
- OPEN-Open Patient Data Explorative Network, Department of Clinical Research, University of Southern Denmark and Odense University Hospital, Odense, Denmark
- Bo Abrahamsen
- Department of Medicine, Hospital of Holbaek, Holbaek, Denmark
- OPEN-Open Patient Data Explorative Network, Department of Clinical Research, University of Southern Denmark and Odense University Hospital, Odense, Denmark
- NDORMS, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Oxford University Hospitals, Oxford, UK
- Dirk Vandermeulen
- Department of Electrical Engineering (ESAT), Center for Processing Speech and Images, KU Leuven, Leuven, Belgium
24
Kwee TC, Yakar D, Sluijter TE, Pennings JP, Roest C. Can we revolutionize diagnostic imaging by keeping Pandora's box closed? Br J Radiol 2023; 96:20230505. [PMID: 37906185] [PMCID: PMC10646642] [DOI: 10.1259/bjr.20230505]
Abstract
Incidental imaging findings are a considerable health problem because they generally result in low-value and potentially harmful care. Healthcare professionals struggle with how to deal with them because, once detected, they usually cannot be ignored. In this opinion article, we first reflect on current practice, and then propose and discuss a new potential strategy to pre-emptively tackle incidental findings. The core principle of this concept is to keep the proverbial Pandora's box closed, i.e., to not visualize incidental findings, which can be achieved using deep learning algorithms. This concept may have profound implications for diagnostic imaging.
Affiliation(s)
- Thomas C Kwee, Derya Yakar, Tim E Sluijter, Jan P Pennings, Christian Roest
- Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
25
Kiefer J, Kopp M, Ruettinger T, Heiss R, Wuest W, Amarteifio P, Stroebel A, Uder M, May MS. Diagnostic Accuracy and Performance Analysis of a Scanner-Integrated Artificial Intelligence Model for the Detection of Intracranial Hemorrhages in a Traumatology Emergency Department. Bioengineering (Basel) 2023; 10:1362. [PMID: 38135956] [PMCID: PMC10740704] [DOI: 10.3390/bioengineering10121362]
Abstract
Intracranial hemorrhages require an immediate diagnosis to optimize patient management and outcomes, and CT is the modality of choice in the emergency setting. We aimed to evaluate the performance of the first scanner-integrated artificial intelligence algorithm for detecting brain hemorrhages in a routine clinical setting. This retrospective study includes 435 consecutive non-contrast head CT scans. Automatic brain hemorrhage detection was calculated as a separate reconstruction job in all cases. The radiological report was always written by a radiology resident and finalized by a senior radiologist. Additionally, a team of two radiologists reviewed the datasets retrospectively, taking additional information such as the clinical record, course, and final diagnosis into account; this consensus reading served as the reference standard. Diagnostic accuracy statistics were then calculated. Brain hemorrhage detection was executed successfully in 432/435 (99%) of cases. The AI algorithm and the reference standard were consistent in 392 (90.7%) cases. One false negative was identified among the 52 positive cases; however, 39 positive detections turned out to be false positives. The diagnostic performance was calculated as a sensitivity of 98.1%, specificity of 89.7%, positive predictive value of 56.7%, and negative predictive value (NPV) of 99.7%. Scanner-integrated AI detection of brain hemorrhages is feasible and robust, with high specificity and very high sensitivity and negative predictive value. However, the many false-positive findings resulted in a relatively moderate positive predictive value.
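The accuracy figures in the abstract above follow from a standard 2x2 confusion matrix. A minimal sketch that reproduces them (the `diagnostic_metrics` helper is illustrative; the counts are taken from the abstract: 432 analyzed scans, 52 reference-positive cases, 1 false negative, 39 false positives):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# 52 positives with 1 false negative -> 51 true positives;
# 432 scans - 52 positives - 39 false positives -> 341 true negatives.
m = diagnostic_metrics(tp=51, fp=39, fn=1, tn=432 - 52 - 39)
```

Evaluating `m` yields sensitivity 98.1%, specificity 89.7%, PPV 56.7% and NPV 99.7%, matching the reported values; the low PPV despite high sensitivity reflects the low prevalence of hemorrhage in the cohort.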
Affiliation(s)
- Jonas Kiefer
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany
- Markus Kopp
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany
- Theresa Ruettinger
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany
- Rafael Heiss
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany
- Wolfgang Wuest
- Martha-Maria Hospital Nuernberg, Stadenstraße 58, 90491 Nuernberg, Germany
- Patrick Amarteifio
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany
- Siemens Healthcare GmbH, Allee am Röthelheimpark 3, 91052 Erlangen, Germany
- Armin Stroebel
- Center for Clinical Studies CCS, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Krankenhausstraße 12, 91054 Erlangen, Germany
- Michael Uder
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany
- Matthias Stefan May
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany
26
Joseph PJS, Khattak M, Masudi ST, Minta L, Perry DC. Radiological assessment of hip disease in children with cerebral palsy: development of a core measurement set. Bone Jt Open 2023; 4:825-831. [PMID: 37909150] [PMCID: PMC10618048] [DOI: 10.1302/2633-1462.411.bjo-2023-0060.r1]
Abstract
Aims Hip disease is common in children with cerebral palsy (CP) and can decrease quality of life and function. Surveillance programmes exist to improve outcomes by treating hip disease at an early stage using radiological surveillance. However, studies and surveillance programmes report different radiological outcomes, making comparisons difficult. We aimed to identify the most important radiological measurements and develop a core measurement set (CMS) for clinical practice, research, and surveillance programmes. Methods A systematic review identified a list of measurements previously used in studies reporting radiological hip outcomes in children with CP. These measurements informed a two-round Delphi study conducted among orthopaedic surgeons and specialist physiotherapists. Participants rated each measurement on a nine-point Likert scale ('not important' to 'critically important'). A consensus meeting was held to finalize the CMS. Results Overall, 14 distinct measurements were identified in the systematic review, with Reimer's migration percentage being the most frequently reported. These measurements were presented over the two rounds of the Delphi process, along with two additional measurements suggested by participants. Ultimately, two measurements, Reimer's migration percentage and femoral head-shaft angle, were included in the CMS. Conclusion The use of a minimum standardized set of measurements has the potential to encourage uniformity across hip surveillance programmes, and may streamline the development of tools, such as artificial intelligence systems, to automate the analysis in surveillance programmes. This core set should be the minimum requirement in clinical studies; clinicians can add to it as needed, which will facilitate comparisons between studies and future meta-analyses.
Affiliation(s)
- Mohammed Khattak
- University of Liverpool, Liverpool, UK
- Alder Hey Children’s Hospital, Liverpool, UK
- Daniel C. Perry
- University of Liverpool, Liverpool, UK
- Alder Hey Children’s Hospital, Liverpool, UK
27
Compte R, Granville Smith I, Isaac A, Danckert N, McSweeney T, Liantis P, Williams FMK. Are current machine learning applications comparable to radiologist classification of degenerate and herniated discs and Modic change? A systematic review and meta-analysis. Eur Spine J 2023; 32:3764-3787. [PMID: 37150769] [PMCID: PMC10164619] [DOI: 10.1007/s00586-023-07718-0]
Abstract
INTRODUCTION Low back pain is the leading contributor to disability burden globally. It is commonly due to degeneration of the lumbar intervertebral discs (LDD). Magnetic resonance imaging (MRI) is the current best tool to visualize and diagnose LDD, but it places high time demands on clinical radiologists. Automated reading of spine MRIs could improve speed, accuracy, reliability and cost-effectiveness in radiology departments. The aim of this review and meta-analysis was to determine whether current machine learning algorithms perform well in identifying disc degeneration, herniation, bulge and Modic change compared with radiologists. METHODS A PRISMA systematic review protocol was developed, and four electronic databases and reference lists were searched. Strict inclusion and exclusion criteria were defined. A PROBAST risk of bias and applicability analysis was performed. RESULTS 1350 articles were extracted. After duplicates were removed, title and abstract screening identified original research articles that used machine learning (ML) algorithms to identify disc degeneration, herniation, bulge and Modic change from MRIs. 27 studies were included in the review; 25 and 14 studies were included in the multivariate and bivariate meta-analyses, respectively. Models using deep learning, support vector machine, k-nearest neighbours, random forest and naïve Bayes algorithms were included. Meta-analyses found no differences in algorithm or classification performance. When algorithms were tested in replication or external validation studies, they did not perform as well as in developmental studies. Data augmentation improved algorithm performance compared with models trained on smaller datasets; there were no performance differences between augmented data and large datasets.
DISCUSSION This review highlights several shortcomings of current approaches, including few validation attempts and limited use of large sample sizes. To the best of the authors' knowledge, this is the first systematic review to explore this topic. We suggest coupling deep learning with semi- or unsupervised learning approaches. Using all the information contained in MRI data will improve accuracy. Clear and complete reporting of study design, statistics and results will improve the reliability and quality of the published literature.
Affiliation(s)
- Roger Compte
- Department of Twin Research, King's College London, St Thomas' Hospital Campus, 4th Floor South Wing, Block D, Westminster Bridge Road, London, SE1 7EH, UK
- Isabelle Granville Smith
- Department of Twin Research, King's College London, St Thomas' Hospital Campus, 4th Floor South Wing, Block D, Westminster Bridge Road, London, SE1 7EH, UK
- Amanda Isaac
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Nathan Danckert
- Department of Twin Research, King's College London, St Thomas' Hospital Campus, 4th Floor South Wing, Block D, Westminster Bridge Road, London, SE1 7EH, UK
- Terence McSweeney
- Research Unit of Health Sciences and Technology, University of Oulu, Oulu, Finland
- Panagiotis Liantis
- Guy's and St Thomas' National Health Services Foundation Trust, London, UK
- Frances M K Williams
- Department of Twin Research, King's College London, St Thomas' Hospital Campus, 4th Floor South Wing, Block D, Westminster Bridge Road, London, SE1 7EH, UK
28
Bizjak Ž, Špiclin Ž. A Systematic Review of Deep-Learning Methods for Intracranial Aneurysm Detection in CT Angiography. Biomedicines 2023; 11:2921. [PMID: 38001922] [PMCID: PMC10669551] [DOI: 10.3390/biomedicines11112921]
Abstract
Background: Subarachnoid hemorrhage resulting from cerebral aneurysm rupture is a significant cause of morbidity and mortality. Early identification of aneurysms on Computed Tomography Angiography (CTA), a frequently used modality for this purpose, is crucial, and artificial intelligence (AI)-based algorithms can improve the detection rate and minimize the intra- and inter-rater variability. Thus, a systematic review and meta-analysis were conducted to assess the diagnostic accuracy of deep-learning-based AI algorithms in detecting cerebral aneurysms using CTA. Methods: PubMed (MEDLINE), Embase, and the Cochrane Library were searched from January 2015 to July 2023. Eligibility criteria involved studies using fully automated and semi-automatic deep-learning algorithms for detecting cerebral aneurysms on the CTA modality. Eligible studies were assessed using the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines and the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. A diagnostic accuracy meta-analysis was conducted to estimate pooled lesion-level sensitivity, size-dependent lesion-level sensitivity, patient-level specificity, and the number of false positives per image. An enhanced FROC curve was utilized to facilitate comparisons between the studies. Results: Fifteen eligible studies were assessed. The findings indicated that the methods exhibited high pooled sensitivity (0.87, 95% confidence interval: 0.835 to 0.91) in detecting intracranial aneurysms at the lesion level. Patient-level sensitivity was not reported due to the lack of a unified patient-level sensitivity definition. Only five studies involved a control group (healthy subjects), whereas two provided information on detection specificity. Moreover, the analysis of size-dependent sensitivity reported in eight studies revealed that the average sensitivity for small aneurysms (<3 mm) was rather low (0.56). 
Conclusions: The studies included in the analysis exhibited a high level of accuracy in detecting intracranial aneurysms larger than 3 mm in size. Nonetheless, there is a notable gap that necessitates increased attention and research focus on the detection of smaller aneurysms, the use of a common test dataset, and an evaluation of a consistent set of performance metrics.
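A pooled lesion-level sensitivity of this kind is obtained by combining per-study estimates. Below is a minimal sketch of one common approach, fixed-effect inverse-variance pooling on the logit scale with a continuity correction, using hypothetical study counts; the review's exact meta-analytic model may differ.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pooled_sensitivity(studies):
    """Fixed-effect inverse-variance pooling of per-study sensitivities on the
    logit scale. `studies` is a list of (true_positives, total_lesions) pairs."""
    num = den = 0.0
    for tp, n in studies:
        p = (tp + 0.5) / (n + 1.0)                       # continuity-corrected sensitivity
        var = 1.0 / (tp + 0.5) + 1.0 / (n - tp + 0.5)    # delta-method variance of logit(p)
        w = 1.0 / var                                    # inverse-variance weight
        num += w * logit(p)
        den += w
    est = num / den
    se = math.sqrt(1.0 / den)
    ci = (inv_logit(est - 1.96 * se), inv_logit(est + 1.96 * se))
    return inv_logit(est), ci
```

Because the pooled estimate is a weighted average on the logit scale, it always falls between the smallest and largest per-study sensitivities.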
Affiliation(s)
- Žiga Bizjak
- Laboratory of Imaging Technologies, Faculty of Electrical Engineering, University of Ljubljana, 1000 Ljubljana, Slovenia
29
Hussain S, Lafarga-Osuna Y, Ali M, Naseem U, Ahmed M, Tamez-Peña JG. Deep learning, radiomics and radiogenomics applications in the digital breast tomosynthesis: a systematic review. BMC Bioinformatics 2023; 24:401. [PMID: 37884877 PMCID: PMC10605943 DOI: 10.1186/s12859-023-05515-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Accepted: 10/02/2023] [Indexed: 10/28/2023] Open
Abstract
BACKGROUND Recent advancements in computing power and state-of-the-art algorithms have enabled more accessible and accurate diagnosis of numerous diseases. In addition, the development of de novo areas of imaging science, such as radiomics and radiogenomics, is helping to personalize healthcare and better stratify patients. These techniques associate imaging phenotypes with related disease genes. Various imaging modalities have been used for years to diagnose breast cancer. Nonetheless, digital breast tomosynthesis (DBT), a state-of-the-art technique, has produced comparatively promising results. DBT, a form of 3D mammography, is rapidly replacing conventional 2D mammography. This technological advancement is key for AI algorithms to interpret medical images accurately. OBJECTIVE AND METHODS This paper presents a comprehensive review of deep learning (DL), radiomics and radiogenomics in breast image analysis. The review focuses on DBT, its extracted synthetic mammography (SM), and full-field digital mammography (FFDM). Furthermore, the survey provides systematic knowledge of DL, radiomics and radiogenomics for beginner and advanced-level researchers. RESULTS A total of 500 articles were identified, of which 30 studies met the inclusion criteria. Parallel benchmarking of radiomics, radiogenomics and DL models applied to DBT images could give clinicians and researchers alike greater awareness as they consider clinical deployment or the development of new models. This review provides a comprehensive guide to the current state of early breast cancer detection using DBT images. CONCLUSION Using this survey, investigators from various backgrounds can easily pursue interdisciplinary science and new DL, radiomics and radiogenomics directions for DBT.
Affiliation(s)
- Sadam Hussain
- School of Engineering and Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
- Yareth Lafarga-Osuna
- School of Engineering and Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
- Mansoor Ali
- School of Engineering and Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
- Usman Naseem
- College of Science and Engineering, James Cook University, Cairns, Australia
- Masroor Ahmed
- School of Engineering and Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
- Jose Gerardo Tamez-Peña
- School of Medicine and Health Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
30
Gupta D, Loane R, Gayen S, Demner-Fushman D. Medical Image Retrieval via Nearest Neighbor Search on Pre-trained Image Features. Knowl Based Syst 2023; 278:110907. [PMID: 37780058 PMCID: PMC10540469 DOI: 10.1016/j.knosys.2023.110907] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/03/2023]
Abstract
Nearest neighbor search, also known as NNS, is a technique used to locate the points in a high-dimensional space closest to a given query point. This technique has multiple applications in medicine, such as searching large medical imaging databases, disease classification, and diagnosis. However, when the number of points is significantly large, the brute-force approach for finding the nearest neighbor becomes computationally infeasible. Therefore, various approaches have been developed to make the search faster and more efficient to support the applications. With a focus on medical imaging, this paper proposes DenseLinkSearch (DLS), an effective and efficient algorithm that searches and retrieves the relevant images from heterogeneous sources of medical images. Towards this, given a medical database, the proposed algorithm builds an index that consists of pre-computed links of each point in the database. The search algorithm utilizes the index to efficiently traverse the database in search of the nearest neighbor. We also explore the role of medical image feature representation in content-based medical image retrieval tasks. We propose a Transformer-based feature representation technique that outperformed the existing pre-trained Transformer-based approaches on benchmark medical image retrieval datasets. We extensively tested the proposed NNS approach and compared the performance with state-of-the-art NNS approaches on benchmark datasets and our created medical image datasets. The proposed approach outperformed the existing approaches in terms of retrieving accurate neighbors and retrieval speed. In comparison to the existing approximate NNS approaches, our proposed DLS approach outperformed them in terms of lower average time per query and ≥ 99% R@10 on 11 out of 13 benchmark datasets. We also found that the proposed medical feature representation approach is better for representing medical images compared to the existing pre-trained image models. 
The proposed feature extraction strategy obtained an improvement of 9.37%, 7.0%, and 13.33% in terms of P@5, P@10, and P@20, respectively, in comparison to the best-performing pre-trained image model. The source code and datasets of our experiments are available at https://github.com/deepaknlp/DLS.
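The R@10 figure above measures how many of the true nearest neighbours an approximate index recovers. A minimal sketch of the exact brute-force baseline and the recall-at-k metric follows; DenseLinkSearch's precomputed-link index itself is described in the paper and its repository, and the data here are hypothetical.

```python
import heapq
import math
import random

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def brute_force_knn(db, query, k=10):
    """Exact k-nearest-neighbour search: score every database point."""
    scored = ((euclidean(point, query), idx) for idx, point in enumerate(db))
    return [idx for _, idx in heapq.nsmallest(k, scored)]

def recall_at_k(candidate_ids, exact_ids, k=10):
    """R@k: fraction of the true k nearest neighbours the candidate list recovers."""
    return len(set(candidate_ids[:k]) & set(exact_ids[:k])) / k
```

Any approximate index (tree, graph, or link-based) is then judged by its recall_at_k against this brute-force ground truth and by its average time per query.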
Affiliation(s)
- Deepak Gupta
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Russell Loane
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Soumya Gayen
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Dina Demner-Fushman
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
31
Souid A, Alsubaie N, Soufiene BO, Alqahtani MS, Abbas M, Jambi LK, Sakli H. Improving diagnosis accuracy with an intelligent image retrieval system for lung pathologies detection: a features extractor approach. Sci Rep 2023; 13:16619. [PMID: 37789095 PMCID: PMC10547797 DOI: 10.1038/s41598-023-42366-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Accepted: 09/09/2023] [Indexed: 10/05/2023] Open
Abstract
Detecting lung pathologies is critical for precise medical diagnosis. In the realm of diagnostic methods, various approaches, including imaging tests, physical examinations, and laboratory tests, contribute to this process. Of particular note, imaging techniques like X-rays, CT scans, and MRI scans play a pivotal role in identifying lung pathologies with their non-invasive insights. Deep learning, a subset of artificial intelligence, holds significant promise in revolutionizing the detection and diagnosis of lung pathologies. By leveraging expansive datasets, deep learning algorithms autonomously discern intricate patterns and features within medical images, such as chest X-rays and CT scans. These algorithms exhibit an exceptional capacity to recognize subtle markers indicative of lung diseases. Yet, while their potential is evident, inherent limitations persist. The demand for abundant labeled data during training and the susceptibility to data biases challenge their accuracy. To address these formidable challenges, this research introduces a tailored computer-assisted system designed for the automatic retrieval of annotated medical images that share similar content. At its core lies an intelligent deep learning-based features extractor, adept at simplifying the retrieval of analogous images from an extensive chest radiograph database. The crux of our innovation rests upon the fusion of YOLOv5 and EfficientNet within the features extractor module. This strategic fusion synergizes YOLOv5's rapid and efficient object detection capabilities with EfficientNet's proficiency in combating noisy predictions. The result is a distinctive amalgamation that redefines the efficiency and accuracy of features extraction. Through rigorous experimentation conducted on an extensive and diverse dataset, our proposed solution decisively surpasses conventional methodologies. 
The model's achievement of a mean average precision of 0.488 with a threshold of 0.9 stands as a testament to its effectiveness, overshadowing the results of YOLOv5 + ResNet and EfficientDet, which achieved 0.234 and 0.257 respectively. Furthermore, our model demonstrates a marked precision improvement, attaining a value of 0.864 across all pathologies-a noteworthy leap of approximately 0.352 compared to YOLOv5 + ResNet and EfficientDet. This research presents a significant stride toward enhancing radiologists' workflow efficiency, offering a refined and proficient tool for retrieving analogous annotated medical images.
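The precision figures quoted at a confidence threshold can be computed as below; this is a generic sketch with hypothetical detections, not the paper's evaluation code, and each detection is assumed to be pre-matched to ground truth as a true or false positive.

```python
def precision_at_threshold(detections, threshold):
    """`detections` is a list of (confidence, is_true_positive) pairs, one per
    predicted box. Precision is computed only over detections whose confidence
    clears the threshold."""
    kept = [is_tp for conf, is_tp in detections if conf >= threshold]
    return sum(kept) / len(kept) if kept else 0.0
```

Raising the threshold trades recall for precision, which is why results are often reported at a fixed operating point such as 0.9.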
Affiliation(s)
- Abdelbaki Souid
- MACS Research Laboratory RL16ES22, National Engineering School of Gabes, Gabes, Tunisia
- Najah Alsubaie
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Ben Othman Soufiene
- PRINCE Laboratory Research, ISITcom, University of Sousse, Hammam Sousse, Tunisia
- Mohammed S Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, 61421, Abha, Saudi Arabia
- BioImaging Unit, Space Research Centre, Michael Atiyah Building, University of Leicester, Leicester, LE1 7RH, UK
- Mohamed Abbas
- Electrical Engineering Department, College of Engineering, King Khalid University, 61421, Abha, Saudi Arabia
- Layal K Jambi
- Radiological Sciences Department, College of Applied Medical Sciences, King Saud University, P.O. Box 10219, 11433, Riyadh, Saudi Arabia
- Hedi Sakli
- MACS Research Laboratory RL16ES22, National Engineering School of Gabes, Gabes, Tunisia
- EITA Consulting, 5 Rue Du Chant Des Oiseaux, 78360, Montesson, France
32
Nicolson A, Dowling J, Koopman B. Improving chest X-ray report generation by leveraging warm starting. Artif Intell Med 2023; 144:102633. [PMID: 37783533 DOI: 10.1016/j.artmed.2023.102633] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 07/11/2023] [Accepted: 08/11/2023] [Indexed: 10/04/2023]
Abstract
Automatically generating a report from a patient's Chest X-rays (CXRs) is a promising solution to reducing clinical workload and improving patient care. However, current CXR report generators-which are predominantly encoder-to-decoder models-lack the diagnostic accuracy to be deployed in a clinical setting. To improve CXR report generation, we investigate warm starting the encoder and decoder with recent open-source computer vision and natural language processing checkpoints, such as the Vision Transformer (ViT) and PubMedBERT. To this end, each checkpoint is evaluated on the MIMIC-CXR and IU X-ray datasets. Our experimental investigation demonstrates that the Convolutional vision Transformer (CvT) ImageNet-21K and the Distilled Generative Pre-trained Transformer 2 (DistilGPT2) checkpoints are best for warm starting the encoder and decoder, respectively. Compared to the state-of-the-art (M2 Transformer Progressive), CvT2DistilGPT2 attained an improvement of 8.3% for CE F-1, 1.8% for BLEU-4, 1.6% for ROUGE-L, and 1.0% for METEOR. The reports generated by CvT2DistilGPT2 have a higher similarity to radiologist reports than previous approaches. This indicates that leveraging warm starting improves CXR report generation. Code and checkpoints for CvT2DistilGPT2 are available at https://github.com/aehrc/cvt2distilgpt2.
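At its core, warm starting means initialising a new model's parameters from a pretrained checkpoint wherever name and shape match, before fine-tuning. The sketch below illustrates only that transfer idea with parameters modelled as flat lists; it is not the authors' CvT2DistilGPT2 pipeline, and all names are hypothetical.

```python
def warm_start(target, checkpoint):
    """Initialise `target` parameters from `checkpoint` wherever the parameter
    name exists in both and the shapes agree; everything else keeps its fresh
    (e.g. random) initialisation. Parameters are modelled as flat lists here."""
    loaded, skipped = [], []
    for name, fresh in target.items():
        pretrained = checkpoint.get(name)
        if pretrained is not None and len(pretrained) == len(fresh):
            target[name] = list(pretrained)   # copy the pretrained values in
            loaded.append(name)
        else:
            skipped.append(name)              # shape mismatch or missing: keep fresh init
    return loaded, skipped
```

Frameworks such as PyTorch do the real version of this with `load_state_dict(..., strict=False)`; the point is that mismatched layers (for example a new decoder head) simply stay at their fresh initialisation.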
Affiliation(s)
- Aaron Nicolson
- The Australian e-Health Research Centre, CSIRO Health and Biosecurity, Brisbane, Australia
- Jason Dowling
- The Australian e-Health Research Centre, CSIRO Health and Biosecurity, Brisbane, Australia
- Bevan Koopman
- The Australian e-Health Research Centre, CSIRO Health and Biosecurity, Brisbane, Australia
33
Czum J. Change or No Change: Using AI to Compare Follow-up Chest Radiographs. Radiology 2023; 309:e232481. [PMID: 37874238 DOI: 10.1148/radiol.232481] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2023]
Affiliation(s)
- Julianna Czum
- Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, 601 N Caroline St, Baltimore, MD 21287
34
Saliba T, Simoni P, Boitsios G. Commentary: How much further can radiologists be pushed? Pediatr Radiol 2023; 53:2309-2310. [PMID: 37561164 DOI: 10.1007/s00247-023-05741-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/30/2023] [Revised: 07/31/2023] [Accepted: 08/01/2023] [Indexed: 08/11/2023]
Affiliation(s)
- Thomas Saliba
- Hôpital Universitaire Des Enfants Reine Fabiola, Avenue Jean-Joseph Crocq 15, 1020, Brussels, Belgium
- Paolo Simoni
- Hôpital Universitaire Des Enfants Reine Fabiola, Avenue Jean-Joseph Crocq 15, 1020, Brussels, Belgium
- Grammatina Boitsios
- Hôpital Universitaire Des Enfants Reine Fabiola, Avenue Jean-Joseph Crocq 15, 1020, Brussels, Belgium
35
Nafees Ahmed S, Prakasam P. A systematic review on intracranial aneurysm and hemorrhage detection using machine learning and deep learning techniques. Prog Biophys Mol Biol 2023; 183:1-16. [PMID: 37499766 DOI: 10.1016/j.pbiomolbio.2023.07.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Revised: 07/05/2023] [Accepted: 07/15/2023] [Indexed: 07/29/2023]
Abstract
The risk of discovering an intracranial aneurysm at initial screening and at follow-up screening is reported as around 11% and 7%, respectively (Zuurbie et al., 2023). Owing to their mass effects, unruptured aneurysms frequently generate symptoms; however, the real hazard occurs when an aneurysm ruptures and results in a cerebral hemorrhage known as a subarachnoid hemorrhage. The objective is to study the multiple kinds of hemorrhage and aneurysm detection problems and to develop machine learning and deep learning models to recognise them. Subarachnoid hemorrhage, the most typical presentation after aneurysm rupture, is a critical medical condition: it frequently results in severe neurological emergencies or even death. Although most aneurysms are asymptomatic and will not burst, even small aneurysms are at risk because of their unpredictable growth. A timely diagnosis is essential to prevent early mortality, because a large proportion of hemorrhage cases are fatal. Physiological and imaging markers, together with the degree of subarachnoid hemorrhage, can serve as indicators for potential early treatment. The hemodynamic pathomechanisms and the microcellular environment should remain a priority for academics and medical professionals. Despite studies reporting rupture risk and outcomes, there is still disagreement about how and when to manage unruptured aneurysms. We are optimistic that progress in our understanding of the pathophysiology of hemorrhages and aneurysms, together with advances in artificial intelligence, has made it feasible to conduct analyses with a high degree of precision, effectiveness and reliability.
Affiliation(s)
- S Nafees Ahmed
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- P Prakasam
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
36
Tuna IS. Editorial Comment: Analyzing Causes and Generating Strategies to Mitigate Diagnostic Errors in Radiology Practice. AJR Am J Roentgenol 2023; 221:362. [PMID: 37073904 DOI: 10.2214/ajr.23.29460] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/20/2023]
Affiliation(s)
- Ibrahim S Tuna
- University of Florida College of Medicine, Gainesville, FL
37
Lin Z, Zhang D, Tao Q, Shi D, Haffari G, Wu Q, He M, Ge Z. Medical visual question answering: A survey. Artif Intell Med 2023; 143:102611. [PMID: 37673579 DOI: 10.1016/j.artmed.2023.102611] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Revised: 05/25/2023] [Accepted: 06/06/2023] [Indexed: 09/08/2023]
Abstract
Medical Visual Question Answering (VQA) combines medical artificial intelligence with the popular VQA challenge. Given a medical image and a clinically relevant question in natural language, a medical VQA system is expected to predict a plausible and convincing answer. Although general-domain VQA has been extensively studied, medical VQA still requires dedicated investigation and exploration because of its task characteristics. In the first part of this survey, we collect and discuss the publicly available medical VQA datasets to date, covering data source, data quantity, and task features. In the second part, we review the approaches used in medical VQA tasks, summarizing and discussing their techniques, innovations, and potential improvements. In the last part, we analyze some medical-specific challenges for the field and discuss future research directions. Our goal is to provide comprehensive and helpful information for researchers interested in medical visual question answering and to encourage further research in this field.
Affiliation(s)
- Zhihong Lin
- Faculty of Engineering, Monash University, Clayton, VIC, 3800, Australia
- Donghao Zhang
- eResearch Center, Monash University, Clayton, VIC, 3800, Australia
- Qingyi Tao
- NVIDIA AI Technology Center, 038988, Singapore
- Danli Shi
- Centre for Eye and Vision Research, The Hong Kong Polytechnic University, Kowloon, TU428, Hong Kong SAR
- Gholamreza Haffari
- Faculty of Information Technology, Monash University, Clayton, 3800, VIC, Australia
- Qi Wu
- Australian Centre for Robotic Vision, The University of Adelaide, Adelaide, SA 5005, Australia
- Mingguang He
- Centre for Eye and Vision Research, The Hong Kong Polytechnic University, Kowloon, TU428, Hong Kong SAR
- Zongyuan Ge
- Faculty of Information Technology, Monash University, Clayton, 3800, VIC, Australia; Airdoc Research, Melbourne, VIC, 3000, Australia; Monash-NVIDIA AI Tech Centre, Melbourne, VIC, 3000, Australia
38
Arslan M, Haider A, Khurshid M, Abu Bakar SSU, Jani R, Masood F, Tahir T, Mitchell K, Panchagnula S, Mandair S. From Pixels to Pathology: Employing Computer Vision to Decode Chest Diseases in Medical Images. Cureus 2023; 15:e45587. [PMID: 37868395 PMCID: PMC10587792 DOI: 10.7759/cureus.45587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/19/2023] [Indexed: 10/24/2023] Open
Abstract
Radiology has been a pioneer in the healthcare industry's digital transformation, incorporating digital imaging systems such as the picture archiving and communication system (PACS) and teleradiology over the past thirty years. This shift has reshaped radiology services, positioning the field at a crucial juncture for potential evolution into an integrated diagnostic service through artificial intelligence and machine learning, which offer advanced tools for radiology's transformation. The radiology community has advanced computer-aided diagnosis (CAD) tools using machine learning techniques, notably deep learning convolutional neural networks (CNNs), for medical image pattern recognition. However, despite development dating back to the 1990s, the integration of CAD tools into clinical practice has been hindered by challenges in workflow integration, unclear business models, and limited clinical benefit. This comprehensive review focuses on detecting chest-related diseases through techniques such as chest X-rays (CXRs), magnetic resonance imaging (MRI), nuclear medicine, and computed tomography (CT) scans. It examines researchers' use of computer-aided programs for disease detection, addressing key areas: the role of computer-aided programs in advancing disease detection; recent developments in MRI, CXR, radioactive tracers, and CT scans for chest disease identification; research gaps that must be addressed for more effective development; and the incorporation of machine learning programs into diagnostic tools.
Affiliation(s)
- Muhammad Arslan
- Department of Emergency Medicine, Royal Infirmary of Edinburgh, National Health Service (NHS) Lothian, Edinburgh, GBR
- Ali Haider
- Department of Allied Health Sciences, The University of Lahore, Gujrat Campus, Gujrat, PAK
- Mohsin Khurshid
- Department of Microbiology, Government College University Faisalabad, Faisalabad, PAK
- Rutva Jani
- Department of Internal Medicine, C. U. Shah Medical College and Hospital, Gujarat, IND
- Fatima Masood
- Department of Internal Medicine, Gulf Medical University, Ajman, ARE
- Tuba Tahir
- Department of Business Administration, Iqra University, Karachi, PAK
- Kyle Mitchell
- Department of Internal Medicine, University of Science, Arts and Technology, Olveston, MSR
- Smruthi Panchagnula
- Department of Internal Medicine, Ganni Subbalakshmi Lakshmi (GSL) Medical College, Hyderabad, IND
- Satpreet Mandair
- Department of Internal Medicine, Medical University of the Americas, Charlestown, KNA
39
Zhang W, Chen Z, Su Z, Wang Z, Hai J, Huang C, Wang Y, Yan B, Lu H. Deep learning-based detection and classification of lumbar disc herniation on magnetic resonance images. JOR Spine 2023; 6:e1276. [PMID: 37780833 PMCID: PMC10540823 DOI: 10.1002/jsp2.1276] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Revised: 07/03/2023] [Accepted: 08/03/2023] [Indexed: 10/03/2023] Open
Abstract
Background The severity assessment of lumbar disc herniation (LDH) on MR images is crucial for selecting suitable surgical candidates. However, the interpretation of MR images is time-consuming and requires repetitive work. This study aims to develop and evaluate a deep learning-based diagnostic model for automated LDH detection and classification on lumbar axial T2-weighted MR images. Methods A total of 1115 patients were analyzed in this retrospective study; both a development dataset (1015 patients, 15 249 images) and an external test dataset (100 patients, 1273 images) were utilized. According to the Michigan State University (MSU) classification criterion, experts labeled all images with consensus, and the final labeled results were regarded as the reference standard. The automated diagnostic model comprised Faster R-CNN and ResNeXt101 as the detection and classification network, respectively. The deep learning-based diagnostic performance was evaluated by calculating mean intersection over union (IoU), accuracy, precision, sensitivity, specificity, F1 score, the area under the receiver operating characteristics curve (AUC), and intraclass correlation coefficient (ICC) with 95% confidence intervals (CIs). Results High detection consistency was obtained in the internal test dataset (mean IoU = 0.82, precision = 98.4%, sensitivity = 99.4%) and external test dataset (mean IoU = 0.70, precision = 96.3%, sensitivity = 97.8%). Overall accuracy for LDH classification was 87.70% (95% CI: 86.59%-88.86%) and 74.23% (95% CI: 71.83%-76.75%) in the internal and external test datasets, respectively. For internal testing, the proposed model achieved a high agreement in classification (ICC = 0.87, 95% CI: 0.86-0.88, P < 0.001), which was higher than that of external testing (ICC = 0.79, 95% CI: 0.76-0.81, P < 0.001). The AUC for model classification was 0.965 (95% CI: 0.962-0.968) and 0.916 (95% CI: 0.908-0.925) in the internal and external test datasets, respectively. 
Conclusions The automated diagnostic model achieved high performance in detecting and classifying LDH and exhibited considerable consistency with experts' classification.
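The mean IoU reported above measures how tightly predicted boxes match expert-labelled ones. A minimal sketch of box IoU follows, assuming axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates; this is the standard detection metric, not the study's own code.

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0
```

A detection is typically counted as correct when its IoU with a reference box exceeds a fixed cutoff (often 0.5), and the mean IoU over matched boxes summarises localisation quality.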
Affiliation(s)
- Weicong Zhang
- Department of Spinal Surgery, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China
- Ziyang Chen
- Department of Spinal Surgery, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China
- Zhihai Su
- Department of Spinal Surgery, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China
- Zhengyan Wang
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Jinjin Hai
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Chengjie Huang
- Department of Spinal Surgery, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China
- Yuhan Wang
- Department of Spinal Surgery, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China
- Bin Yan
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Hai Lu
- Department of Spinal Surgery, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong, China
40
Ivanovic V, Broadhead K, Beck R, Chang YM, Paydar A, Biddle G, Hacein-Bey L, Qi L. Factors Associated With Neuroradiologic Diagnostic Errors at a Large Tertiary-Care Academic Medical Center: A Case-Control Study. AJR Am J Roentgenol 2023; 221:355-362. [PMID: 36988269 DOI: 10.2214/ajr.22.28925] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/30/2023]
Abstract
BACKGROUND. Numerous studies have explored factors associated with diagnostic errors in neuroradiology; however, large-scale multivariable analyses are lacking. OBJECTIVE. The purpose of this study was to evaluate associations of interpretation time, shift volume, care setting, day of week, and trainee participation with diagnostic errors by neuroradiologists at a large academic medical center. METHODS. This retrospective case-control study using a large tertiary-care academic medical center's neuroradiology quality assurance database evaluated CT and MRI examinations for which neuroradiologists had assigned RADPEER scores. The database was searched from January 2014 through March 2020 for examinations without (RADPEER score of 1) or with (RADPEER scores of 2a, 2b, 3a, 3b, or 4) diagnostic error. For each examination with error, two examinations without error were randomly selected (unless only one examination could be identified) and matched by interpreting radiologist and examination type to form case and control groups. Marginal mixed-effects logistic regression models were used to assess associations of diagnostic error with interpretation time (number of minutes since the immediately preceding report's completion), shift volume (number of examinations interpreted during the shift), emergency/inpatient setting, weekend interpretation, and trainee participation in interpretation. RESULTS. The case group included 564 examinations in 564 patients (mean age, 50.0 ± 25.0 [SD] years; 309 men, 255 women); the control group included 1019 examinations in 1019 patients (mean age, 52.5 ± 23.2 years; 540 men, 479 women). In the case versus control group, mean interpretation time was 16.3 ± 17.2 [SD] minutes versus 14.8 ± 16.7 minutes; mean shift volume was 50.0 ± 22.1 [SD] examinations versus 45.4 ± 22.9 examinations. 
In univariable models, diagnostic error was associated with shift volume (OR = 1.22, p < .001) and weekend interpretation (OR = 1.60, p < .001) but not interpretation time, emergency/inpatient setting, or trainee participation (p > .05). However, in multivariable models, diagnostic error was independently associated with interpretation time (OR = 1.18, p = .003), shift volume (OR = 1.27, p < .001), and weekend interpretation (OR = 1.69, p = .02). In a subanalysis, diagnostic error showed independent associations on weekdays with interpretation time (OR = 1.18, p = .003) and shift volume (OR = 1.27, p < .001); such associations were not observed on weekends (interpretation time: p = .62; shift volume: p = .58). CONCLUSION. Diagnostic errors in neuroradiology were associated with longer interpretation times, higher shift volumes, and weekend interpretation. CLINICAL IMPACT. These findings should be considered when designing workflow-related interventions seeking to reduce neuroradiology interpretation errors.
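The associations above are reported as odds ratios with 95% confidence intervals. As a minimal illustration of how such figures are derived (this is not code or data from the study; the 2×2 counts below are made up), an unadjusted OR and a Wald-type CI can be computed directly from an exposure-by-outcome table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: weekend interpretation among error vs. no-error exams
or_, lo, hi = odds_ratio_ci(80, 484, 90, 929)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The multivariable estimates in the study additionally adjust for covariates via mixed-effects logistic regression, which this unadjusted sketch does not attempt to reproduce.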
Affiliation(s)
- Vladimir Ivanovic
- Department of Radiology, Section of Neuroradiology, Medical College of Wisconsin, 8701 Watertown Plank Rd, Milwaukee, WI 53226
- Kenneth Broadhead
- Department of Statistics, Colorado State University, Fort Collins, CO
- Ryan Beck
- Department of Radiology, Section of Neuroradiology, Medical College of Wisconsin, 8701 Watertown Plank Rd, Milwaukee, WI 53226
- Yu-Ming Chang
- Department of Radiology, Section of Neuroradiology, Beth Israel Deaconess Medical Center, Boston, MA
- Alireza Paydar
- Department of Radiology, Section of Neuroradiology, University of California, Davis Medical Center, Sacramento, CA
- Garrick Biddle
- Department of Radiology, Section of Neuroradiology, University of California, Davis Medical Center, Sacramento, CA
- Lotfi Hacein-Bey
- Department of Radiology, Section of Neuroradiology, University of California, Davis Medical Center, Sacramento, CA
- Lihong Qi
- Department of Public Health Sciences, School of Medicine, University of California, Davis, Davis, CA

41
Wang D, Jin R, Shieh CC, Ng AY, Pham H, Dugal T, Barnett M, Winoto L, Wang C, Barnett Y. Real world validation of an AI-based CT hemorrhage detection tool. Front Neurol 2023; 14:1177723. [PMID: 37602253] [PMCID: PMC10435741] [DOI: 10.3389/fneur.2023.1177723]
Abstract
Introduction Intracranial hemorrhage (ICH) is a potentially life-threatening medical event that requires expedited diagnosis with computed tomography (CT). Automated medical imaging triaging tools can rapidly bring scans containing critical abnormalities, such as ICH, to the attention of radiologists and clinicians. Here, we retrospectively investigated the real-world performance of VeriScout™, an artificial intelligence-based CT hemorrhage detection and triage tool. Methods Ground truth for the presence or absence of ICH was iteratively determined by expert consensus in an unselected dataset of 527 consecutively acquired non-contrast head CT scans, which were sub-grouped according to the presence of artefact, post-operative features and referral source. The performance of VeriScout™ was compared with the ground truths for all groups. Results VeriScout™ detected hemorrhage with a sensitivity of 0.92 (CI 0.84-0.96) and a specificity of 0.96 (CI 0.94-0.98) in the global dataset, exceeding the sensitivity of general radiologists (0.88) with only a minor relative decrement in specificity (0.98). Crucially, the AI tool detected 13/14 cases of subarachnoid hemorrhage, a potentially fatal condition that is often missed in emergency department settings. There was no decrement in the performance of VeriScout™ in scans containing artefact or post-operative change. Using an integrated informatics platform, VeriScout™ was deployed into the existing radiology workflow. Detected hemorrhage cases were flagged in the hospital radiology information system (RIS), and relevant annotated preview images were made available in the picture archiving and communication system (PACS) within 10 min. Conclusion AI-based radiology worklist prioritization for critical abnormalities, such as ICH, may enhance patient care without adding to radiologist or clinician burden.
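The sensitivity and specificity quoted above are reported with confidence intervals. As a small sketch of how such binomial CIs can be computed (using the Wilson score interval; the confusion-matrix counts below are hypothetical, chosen only to sum to the 527-scan dataset size, not taken from the paper):

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical confusion-matrix counts: true/false positives and negatives
tp, fn, tn, fp = 78, 7, 424, 18          # 78 + 7 + 424 + 18 = 527 scans
sens = tp / (tp + fn)                     # proportion of bleeds detected
spec = tn / (tn + fp)                     # proportion of clean scans cleared
print(f"sensitivity {sens:.2f}, CI {wilson_ci(tp, tp + fn)}")
print(f"specificity {spec:.2f}, CI {wilson_ci(tn, tn + fp)}")
```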
Affiliation(s)
- Dongang Wang
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
- Brain and Mind Centre, University of Sydney, Sydney, NSW, Australia
- Ruilin Jin
- Department of Medical Imaging, St. Vincent’s Hospital, Sydney, NSW, Australia
- Adrian Y. Ng
- Emergency Department, St. Vincent’s Hospital, Sydney, NSW, Australia
- Hiep Pham
- Department of Medical Imaging, St. Vincent’s Hospital, Sydney, NSW, Australia
- Tej Dugal
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
- Michael Barnett
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
- Brain and Mind Centre, University of Sydney, Sydney, NSW, Australia
- Luis Winoto
- Emergency Department, St. Vincent’s Hospital, Sydney, NSW, Australia
- Chenyu Wang
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
- Brain and Mind Centre, University of Sydney, Sydney, NSW, Australia
- Yael Barnett
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
- Department of Medical Imaging, St. Vincent’s Hospital, Sydney, NSW, Australia

42
Morris JM, Wentworth A, Houdek MT, Karim SM, Clarke MJ, Daniels DJ, Rose PS. The Role of 3D Printing in Treatment Planning of Spine and Sacral Tumors. Neuroimaging Clin N Am 2023; 33:507-529. [PMID: 37356866] [DOI: 10.1016/j.nic.2023.05.001]
Abstract
Three-dimensional (3D) printing technology has proven to have many advantages in spine and sacrum surgery. 3D printing allows the manufacturing of life-size patient-specific anatomic and pathologic models to improve preoperative understanding of patient anatomy and pathology. Additionally, virtual surgical planning using medical computer-aided design software has enabled surgeons to create patient-specific surgical plans and simulate procedures in a virtual environment. This has resulted in reduced operative times, decreased complications, and improved patient outcomes. Combined with new surgical techniques, 3D-printed custom medical devices and instruments using titanium and biocompatible resins and polyamides have allowed innovative reconstructions.
Affiliation(s)
- Jonathan M Morris
- Division of Neuroradiology, Department of Radiology, Anatomic Modeling Unit, Biomedical and Scientific Visualization, Mayo Clinic, 200 1st Street, Southwest, Rochester, MN, 55905, USA
- Adam Wentworth
- Department of Radiology, Anatomic Modeling Unit, Mayo Clinic, Rochester, MN, USA
- Matthew T Houdek
- Division of Orthopedic Oncology, Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- S Mohammed Karim
- Division of Orthopedic Oncology, Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Peter S Rose
- Division of Orthopedic Oncology, Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA

43
Ivanovic V, Paydar A, Chang YM, Broadhead K, Smullen D, Klein A, Hacein-Bey L. Impact of Shift Volume on Neuroradiology Diagnostic Errors at a Large Tertiary Academic Center. Acad Radiol 2023; 30:1584-1588. [PMID: 36180325] [DOI: 10.1016/j.acra.2022.08.035]
Abstract
BACKGROUND AND PURPOSE Medical errors can result in significant morbidity and mortality. The goal of our study was to evaluate the correlation between shift volume and errors made by attending neuroradiologists at an academic medical center, using a large data set. MATERIALS AND METHODS CT and MRI reports from our neuroradiology quality assurance database (years 2014-2020) were searched for attending physician errors. Data were collected on shift volume, category of missed findings, error type, interpretation setting, exam type, and clinical significance. RESULTS A total of 654 reports contained diagnostic errors. There was a significant difference between the mean volume of interpreted studies on shifts in which an error was made and shifts in which no error was documented (46.58 (SD = 22.37) vs 34.09 (SD = 18.60), p < 0.00001), and between shifts in which a perceptual error was made and shifts in which an interpretive error was made (49.50 (SD = 21.9) vs 43.26 (SD = 21.75), p = 0.0094). Of the errors, 59.6% occurred in the emergency/inpatient setting, 84% were perceptual, and 91.1% were clinically significant. The categorical distribution of errors was: vascular 25.8%, brain 23.4%, skull base 13.8%, spine 12.4%, head/neck 11.3%, fractures 10.2%, other 3.1%. Errors were detected most often on brain MRI (25.4%), head CT (18.7%), head/neck CTA (13.8%), and spine MRI (13.7%). CONCLUSION Errors were associated with higher-volume shifts and were primarily perceptual and clinically significant. National guidelines establishing a safe range for the number of cross-sectional studies interpreted per day are needed.
Affiliation(s)
- Vladimir Ivanovic
- Department of Radiology, Section of Neuroradiology, Medical College of Wisconsin, Milwaukee, WI
- Alireza Paydar
- Department of Radiology, Section of Neuroradiology, University of California Davis Medical Center, Sacramento, CA
- Yu-Ming Chang
- Department of Radiology, Section of Neuroradiology, Beth Israel Deaconess Medical Center, Harvard School of Medicine, Boston, Massachusetts
- Kenneth Broadhead
- Department of Statistics, School of Medicine, University of California Davis, Davis, CA
- David Smullen
- Department of Radiology, Section of Neuroradiology, Medical College of Wisconsin, Milwaukee, WI
- Andrew Klein
- Department of Radiology, Section of Neuroradiology, Medical College of Wisconsin, Milwaukee, WI
- Lotfi Hacein-Bey
- Department of Radiology, Section of Neuroradiology, University of California Davis Medical Center, Sacramento, CA

44
Eltawil FA, Atalla M, Boulos E, Amirabadi A, Tyrrell PN. Analyzing Barriers and Enablers for the Acceptance of Artificial Intelligence Innovations into Radiology Practice: A Scoping Review. Tomography 2023; 9:1443-1455. [PMID: 37624108] [PMCID: PMC10459931] [DOI: 10.3390/tomography9040115]
Abstract
OBJECTIVES This scoping review was conducted to determine the barriers and enablers associated with the acceptance of artificial intelligence/machine learning (AI/ML)-enabled innovations into radiology practice from a physician's perspective. METHODS A systematic search was performed using Ovid Medline and Embase. Keywords were used to generate refined queries with the inclusion of computer-aided diagnosis, artificial intelligence, and barriers and enablers. Three reviewers assessed the articles, with a fourth reviewer used for disagreements. The risk of bias was mitigated by including both quantitative and qualitative studies. RESULTS An electronic search from January 2000 to 2023 identified 513 studies. Twelve articles fulfilled the inclusion criteria: qualitative studies (n = 4), survey studies (n = 7), and a randomized controlled trial (n = 1). Among the most common barriers to AI implementation into radiology practice were radiologists' lack of acceptance and trust in AI innovations; a lack of awareness, knowledge, and familiarity with the technology; and the perceived threat to the professional autonomy of radiologists. The most important identified AI implementation enablers were high expectations of AI's potential added value; the potential to decrease errors in diagnosis; the potential to increase efficiency when reaching a diagnosis; and the potential to improve the quality of patient care. CONCLUSIONS This scoping review found that few studies have been designed specifically to identify barriers and enablers to the acceptance of AI in radiology practice. The majority of studies have assessed the perception of AI replacing radiologists, rather than other barriers or enablers in the adoption of AI. To comprehensively evaluate the potential advantages and disadvantages of integrating AI innovations into radiology practice, gathering more robust research evidence on stakeholder perspectives and attitudes is essential.
Affiliation(s)
- Fatma A. Eltawil
- Department of Medical Imaging, University of Toronto, Toronto, ON M5S 1A1, Canada
- Michael Atalla
- Department of Medical Imaging, University of Toronto, Toronto, ON M5S 1A1, Canada
- Emily Boulos
- Department of Medical Imaging, University of Toronto, Toronto, ON M5S 1A1, Canada
- Afsaneh Amirabadi
- Diagnostic Imaging Department, The Hospital for Sick Children, Toronto, ON M5G 1E8, Canada
- Pascal N. Tyrrell
- Department of Medical Imaging, University of Toronto, Toronto, ON M5S 1A1, Canada
- Department of Statistical Sciences, University of Toronto, Toronto, ON M5G 1Z5, Canada
- Institute of Medical Science, University of Toronto, Toronto, ON M5S 1A8, Canada

45
Alelyani M, Gameraddin M, Khushayl AMA, Altowaijri AM, Qashqari MI, Alzahrani FAA, Gareeballah A. Work-related musculoskeletal symptoms among Saudi radiologists: a cross-sectional multi-centre study. BMC Musculoskelet Disord 2023; 24:468. [PMID: 37286979] [DOI: 10.1186/s12891-023-06596-3]
Abstract
BACKGROUND Musculoskeletal disorders are common health problems worldwide. Several factors cause these symptoms, including ergonomics and other individual considerations. Computer users are prone to repetitive strain injuries that increase the risk of developing musculoskeletal symptoms (MSS). Radiologists are susceptible to developing MSS because they work long hours analysing medical images on computers in an increasingly digitalised field. This study aimed to identify the prevalence of MSS among Saudi radiologists and the associated risk factors. METHODS This study was a cross-sectional, non-interventional, self-administered online survey. The study was conducted on 814 Saudi radiologists from various regions in Saudi Arabia. The study's outcome was the presence of MSS in any body region that limited participation in routine activities over the previous 12 months. The results were examined descriptively, and binary logistic regression analysis was used to estimate the odds ratio (OR) of participants who had disabling MSS in the previous 12 months. All university, public, and private radiologists received an online survey containing questions about work surroundings, workload (e.g., time spent at a computer workstation), and demographic characteristics. RESULTS The prevalence of MSS among the radiologists was 87.7%. Most of the participants (82%) were younger than 40 years of age. Radiography and computed tomography were the most common imaging modalities that caused MSS (53.4% and 26.8%, respectively). The most common symptoms were neck pain (59.3%) and lower back pain (57.1%). After adjustment, age, years of experience, and part-time employment were significantly associated with MSS (OR = .219, 95% CI = .057-.836; OR = .235, 95% CI = .087-.634; and OR = 2.673, 95% CI = 1.434-4.981, respectively). Women were more likely to report MSS than men (OR = 2.12, 95% CI = 1.327-3.377).
CONCLUSIONS MSS are common among Saudi radiologists, with neck pain and lower back pain being the most frequently reported symptoms. Gender, age, years of experience, type of imaging modality, and employment status were the most common associated risk factors for developing MSS. These findings are vital for the development of interventional plans to reduce the prevalence of musculoskeletal complaints in clinical radiologists.
Affiliation(s)
- Magbool Alelyani
- Department of Radiological Sciences, College of Applied Medical Sciences, King Khalid University, Abha, 62529, Saudi Arabia
- Moawia Gameraddin
- Department of Diagnostic Radiology Technology, College of Applied Medical Sciences, Taibah University, Al-Madinah, Saudi Arabia
- Department of Diagnostic Radiology, Faculty of Radiological Sciences and Medical Imaging, Alzaiem Alazhari University, Khartoum, Sudan
- Awadia Gareeballah
- Department of Diagnostic Radiology Technology, College of Applied Medical Sciences, Taibah University, Al-Madinah, Saudi Arabia

46
Huang SC, Pareek A, Jensen M, Lungren MP, Yeung S, Chaudhari AS. Self-supervised learning for medical image classification: a systematic review and implementation guidelines. NPJ Digit Med 2023; 6:74. [PMID: 37100953] [PMCID: PMC10131505] [DOI: 10.1038/s41746-023-00811-0]
Abstract
Advancements in deep learning and computer vision provide promising solutions for medical image analysis, potentially improving healthcare and patient outcomes. However, the prevailing paradigm of training deep learning models requires large quantities of labeled training data, which is both time-consuming and cost-prohibitive to curate for medical images. Self-supervised learning has the potential to make significant contributions to the development of robust medical imaging models through its ability to learn useful insights from copious medical datasets without labels. In this review, we provide consistent descriptions of different self-supervised learning strategies and compose a systematic review of papers published between 2012 and 2022 on PubMed, Scopus, and ArXiv that applied self-supervised learning to medical imaging classification. We screened a total of 412 relevant studies and included 79 papers for data extraction and analysis. With this comprehensive effort, we synthesize the collective knowledge of prior work and provide implementation guidelines for future researchers interested in applying self-supervised learning to their development of medical imaging classification models.
Affiliation(s)
- Shih-Cheng Huang
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA
- Anuj Pareek
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA
- Malte Jensen
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Matthew P Lungren
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA
- Department of Radiology, Stanford University, Stanford, CA, USA
- Serena Yeung
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA
- Department of Computer Science, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Clinical Excellence Research Center, Stanford University School of Medicine, Stanford, CA, USA
- Akshay S Chaudhari
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA
- Department of Radiology, Stanford University, Stanford, CA, USA
- Stanford Cardiovascular Institute, Stanford University, Stanford, CA, USA

47
Shetty S, Ananthanarayana VS, Mahale A. Multimodal medical tensor fusion network-based DL framework for abnormality prediction from the radiology CXRs and clinical text reports. Multimedia Tools and Applications 2023:1-48. [PMID: 37362656] [PMCID: PMC10119019] [DOI: 10.1007/s11042-023-14940-x]
Abstract
Pulmonary disease is a commonly occurring abnormality throughout the world. Pulmonary diseases include tuberculosis, pneumothorax, cardiomegaly, pulmonary atelectasis, pneumonia, etc. A timely prognosis of pulmonary disease is essential. Increasing progress in Deep Learning (DL) techniques has significantly impacted and contributed to the medical domain, specifically in leveraging medical imaging for analysis, prognosis, and therapeutic decisions for clinicians. Many contemporary DL strategies for radiology focus on a single modality of data, utilizing imaging features without considering the clinical context that provides more valuable complementary information for clinically consistent prognostic decisions. Also, the selection of the best data fusion strategy is crucial when performing Machine Learning (ML) or DL operations on multimodal heterogeneous data. We investigated multimodal medical fusion strategies leveraging DL techniques to predict pulmonary abnormality from heterogeneous radiology Chest X-Rays (CXRs) and clinical text reports. In this research, we have proposed two effective unimodal and multimodal subnetworks to predict pulmonary abnormality from the CXR and clinical reports. We have conducted a comprehensive analysis and compared the performance of the unimodal and multimodal models. The proposed models were applied to standard augmented data and to synthetic data generated to test the models' ability to predict from new and unseen data. The proposed models were thoroughly assessed against the publicly available Indiana University dataset and data collected from a private medical hospital. The proposed multimodal models gave superior results compared to the unimodal models.
Affiliation(s)
- Shashank Shetty
- Department of Information Technology, National Institute of Technology Karnataka, Mangalore, 575025, Karnataka, India
- Department of Computer Science and Engineering, Nitte (Deemed to be University), NMAM Institute of Technology (NMAMIT), Udupi, 574110, Karnataka, India
- Ananthanarayana V. S.
- Department of Information Technology, National Institute of Technology Karnataka, Mangalore, 575025, Karnataka, India
- Ajit Mahale
- Department of Radiology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Mangalore, 575001, Karnataka, India

48
Pfeuffer N, Baum L, Stammer W, Abdel-Karim BM, Schramowski P, Bucher AM, Hügel C, Rohde G, Kersting K, Hinz O. Explanatory Interactive Machine Learning. Business & Information Systems Engineering 2023. [PMCID: PMC10119840] [DOI: 10.1007/s12599-023-00806-x]
Abstract
The most promising standard machine learning methods can deliver highly accurate classification results, often outperforming standard white-box methods. However, it is hardly possible for humans to fully understand the rationale behind the black-box results, and thus, these powerful methods hamper the creation of new knowledge on the part of humans and the broader acceptance of this technology. Explainable Artificial Intelligence attempts to overcome this problem by making the results more interpretable, while Interactive Machine Learning integrates humans into the process of insight discovery. The paper builds on recent successes in combining these two cutting-edge technologies and proposes how Explanatory Interactive Machine Learning (XIL) is embedded in a generalizable Action Design Research (ADR) process – called XIL-ADR. This approach can be used to analyze data, inspect models, and iteratively improve them. The paper shows the application of this process using the diagnosis of viral pneumonia, e.g., Covid-19, as an illustrative example. By these means, the paper also illustrates how XIL-ADR can help identify shortcomings of standard machine learning projects, gain new insights on the part of the human user, and thereby can help to unlock the full potential of AI-based systems for organizations and research.
Affiliation(s)
- Nicolas Pfeuffer
- Information Systems and Information Management, Goethe University Frankfurt, Frankfurt am Main, Germany
- Lorenz Baum
- Information Systems and Information Management, Goethe University Frankfurt, Frankfurt am Main, Germany
- Wolfgang Stammer
- Machine Learning Group, Department of Computer Science, Technical University of Darmstadt, Darmstadt, Germany
- Benjamin M. Abdel-Karim
- Information Systems and Information Management, Goethe University Frankfurt, Frankfurt am Main, Germany
- Patrick Schramowski
- Machine Learning Group, Department of Computer Science, Technical University of Darmstadt, Darmstadt, Germany
- Andreas M. Bucher
- Diagnostic and Interventional Radiology, Center of Radiology, Hospital of the Goethe University Frankfurt, Frankfurt am Main, Germany
- Christian Hügel
- Pneumology and Allergology, Center of Internal Medicine, Hospital of the Goethe University Frankfurt, Frankfurt am Main, Germany
- Gernot Rohde
- Pneumology and Allergology, Center of Internal Medicine, Hospital of the Goethe University Frankfurt, Frankfurt am Main, Germany
- Kristian Kersting
- Machine Learning Group, Department of Computer Science, Technical University of Darmstadt, Darmstadt, Germany
- Oliver Hinz
- Information Systems and Information Management, Goethe University Frankfurt, Frankfurt am Main, Germany

49
Pang J, Xiu W, Ma X. Application of Artificial Intelligence in the Diagnosis, Treatment, and Prognostic Evaluation of Mediastinal Malignant Tumors. J Clin Med 2023; 12:2818. [PMID: 37109155] [PMCID: PMC10144939] [DOI: 10.3390/jcm12082818]
Abstract
Artificial intelligence (AI), also known as machine intelligence, is widely utilized in the medical field, promoting medical advances. Malignant tumors are a critical focus of medical research and of efforts to improve clinical diagnosis and treatment. Mediastinal malignancies are attracting increasing attention due to the difficulty of their treatment. Combined with artificial intelligence, challenges from drug discovery to survival improvement are steadily being overcome. This article reviews the progress of the use of AI in the diagnosis, treatment, and prognostic evaluation of mediastinal malignant tumors based on current literature findings.
Affiliation(s)
- Jiyun Pang
- Division of Thoracic Tumor Multimodality Treatment, Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China
- State Key Laboratory of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China
- West China School of Medicine, Sichuan University, Chengdu 610041, China
- Weigang Xiu
- Division of Thoracic Tumor Multimodality Treatment, Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China
- State Key Laboratory of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China
- West China School of Medicine, Sichuan University, Chengdu 610041, China
- Xuelei Ma
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China

50
Cui C, Yang H, Wang Y, Zhao S, Asad Z, Coburn LA, Wilson KT, Landman BA, Huo Y. Deep multimodal fusion of image and non-image data in disease diagnosis and prognosis: a review. Progress in Biomedical Engineering (Bristol, England) 2023; 5. [PMID: 37360402] [PMCID: PMC10288577] [DOI: 10.1088/2516-1091/acc2fe]
Abstract
The rapid development of diagnostic technologies in healthcare is leading to higher requirements for physicians to handle and integrate the heterogeneous, yet complementary data that are produced during routine practice. For instance, the personalized diagnosis and treatment planning for a single cancer patient relies on various images (e.g. radiology, pathology and camera images) and non-image data (e.g. clinical data and genomic data). However, such decision-making procedures can be subjective, qualitative, and have large inter-subject variabilities. With the recent advances in multimodal deep learning technologies, an increasingly large number of efforts have been devoted to a key question: how do we extract and aggregate multimodal information to ultimately provide more objective, quantitative computer-aided clinical decision making? This paper reviews the recent studies on dealing with such a question. Briefly, this review will include the (a) overview of current multimodal learning workflows, (b) summarization of multimodal fusion methods, (c) discussion of the performance, (d) applications in disease diagnosis and prognosis, and (e) challenges and future directions.
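As a toy illustration of the multimodal fusion strategies this review surveys (this sketch is not from the review itself; the feature names and dimensions below are hypothetical), feature-level fusion can be as simple as concatenating an image-derived embedding with a clinical feature vector before a shared classification head:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features for one patient
image_feats = rng.normal(size=128)     # e.g. embedding from an imaging backbone
clinical_feats = rng.normal(size=10)   # e.g. normalized age, labs, vitals

# Feature-level (early) fusion: concatenate, then apply one linear head
fused = np.concatenate([image_feats, clinical_feats])   # shape (138,)
W = rng.normal(size=(2, fused.size)) * 0.01             # untrained 2-class head
logits = W @ fused
probs = np.exp(logits) / np.exp(logits).sum()           # softmax over 2 classes
```

Later-stage (decision-level) fusion would instead train separate heads per modality and combine their output probabilities; the review compares the trade-offs between these placements of the fusion step.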
Affiliation(s)
- Can Cui
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Haichun Yang
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Yaohong Wang
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Shilin Zhao
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Zuhayr Asad
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Lori A Coburn
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States of America
- Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN 37212, United States of America
- Keith T Wilson
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States of America
- Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN 37212, United States of America
- Bennett A Landman
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, United States of America
- Yuankai Huo
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, United States of America