101
Manimegalai P, Suresh Kumar R, Valsalan P, Dhanagopal R, Vasanth Raj PT, Christhudass J. 3D Convolutional Neural Network Framework with Deep Learning for Nuclear Medicine. Scanning 2022; 2022:9640177. [PMID: 35924105] [PMCID: PMC9308558] [DOI: 10.1155/2022/9640177]
Abstract
Though artificial intelligence (AI) has been used in nuclear medicine for more than 50 years, recent progress in deep learning (DL) and machine learning (ML) has driven the development of new AI capabilities in the field. Artificial neural networks (ANNs) underpin both ML and DL applications in nuclear medicine. When a 3D convolutional neural network (CNN) is used, the inputs can be the images under analysis themselves rather than a predefined set of input features. In nuclear medicine, artificial intelligence reimagines and reengineers the field's therapeutic and scientific capabilities. Understanding the concepts of 3D CNNs and U-Net in the context of nuclear medicine allows deeper engagement with clinical and research applications, as well as the ability to troubleshoot problems when they emerge. Business analytics, risk assessment, quality assurance, and basic classification are examples of simple ML applications. General nuclear medicine, SPECT, PET, MRI, and CT may benefit from more advanced DL applications for classification, detection, localization, segmentation, quantification, and radiomic feature extraction using 3D CNNs. An ANN can be applied to small datasets, alongside traditional statistical methods, as well as to much larger ones. The introduction of AI has not left nuclear medicine's clinical and research practices unaffected: the advent of 3D CNN and U-Net applications has fundamentally altered the clinical and research landscape. Nuclear medicine professionals must now have at least an elementary understanding of AI principles such as artificial neural networks (ANNs) and convolutional neural networks (CNNs).
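The abstract's central technical point, that a 3D CNN takes the image volume itself as input rather than a predefined set of features, can be illustrated with a minimal sketch. The architecture below and the choice of PyTorch are assumptions made for illustration; the framework described in the cited paper is not reproduced here.

```python
# Minimal sketch of a 3D CNN that consumes an image volume directly.
# PyTorch and the toy architecture are assumptions for illustration only;
# they do not reproduce the framework of the cited paper.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # one channel: the volume itself
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),                      # global pooling over depth/height/width
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, depth, height, width), e.g. a PET or SPECT volume
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = Tiny3DCNN()
    volume = torch.randn(2, 1, 32, 64, 64)   # synthetic stand-in for two image volumes
    print(model(volume).shape)                # torch.Size([2, 2])
```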
Affiliation(s)
- P. Manimegalai, Department of Biomedical Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
- R. Suresh Kumar, Center for System Design, Chennai Institute of Technology, Chennai, India
- Prajoona Valsalan, Department of Electrical and Computer Engineering, Dhofar University, Salalah, Oman
- R. Dhanagopal, Center for System Design, Chennai Institute of Technology, Chennai, India
- P. T. Vasanth Raj, Center for System Design, Chennai Institute of Technology, Chennai, India
- Jerome Christhudass, Department of Biomedical Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
102
Goisauf M, Cano Abadía M. Ethics of AI in Radiology: A Review of Ethical and Societal Implications. Front Big Data 2022; 5:850383. [PMID: 35910490] [PMCID: PMC9329694] [DOI: 10.3389/fdata.2022.850383]
Abstract
Artificial intelligence (AI) is being applied in medicine to improve healthcare and advance health equity. The application of AI-based technologies in radiology is expected to improve diagnostic performance by increasing accuracy and simplifying personalized decision-making. While this technology has the potential to improve health services, many ethical and societal implications need to be carefully considered to avoid harmful consequences for individuals and groups, especially for the most vulnerable populations. Therefore, several questions are raised, including (1) what types of ethical issues are raised by the use of AI in medicine and biomedical research, and (2) how are these issues being tackled in radiology, especially in the case of breast cancer? To answer these questions, a systematic review of the academic literature was conducted. Searches were performed in five electronic databases to identify peer-reviewed articles published since 2017 on the topic of the ethics of AI in radiology. The review results show that the discourse has mainly addressed expectations and challenges associated with medical AI, and in particular bias and black box issues, and that various guiding principles have been suggested to ensure ethical AI. We found that several ethical and societal implications of AI use remain underexplored, and more attention needs to be paid to addressing potential discriminatory effects and injustices. We conclude with a critical reflection on these issues and the identified gaps in the discourse from a philosophical and STS perspective, underlining the need to integrate a social science perspective in AI developments in radiology in the future.
103
Padash S, Mohebbian MR, Adams SJ, Henderson RDE, Babyn P. Pediatric chest radiograph interpretation: how far has artificial intelligence come? A systematic literature review. Pediatr Radiol 2022; 52:1568-1580. [PMID: 35460035] [PMCID: PMC9033522] [DOI: 10.1007/s00247-022-05368-w]
Abstract
Most artificial intelligence (AI) studies have focused primarily on adult imaging, with less attention to the unique aspects of pediatric imaging. The objectives of this study were to (1) identify all publicly available pediatric datasets and determine their potential utility and limitations for pediatric AI studies and (2) systematically review the literature to assess the current state of AI in pediatric chest radiograph interpretation. We searched PubMed, Web of Science and Embase to retrieve all studies from 1990 to 2021 that assessed AI for pediatric chest radiograph interpretation and abstracted the datasets used to train and test AI algorithms, approaches and performance metrics. Of 29 publicly available chest radiograph datasets, 2 datasets included solely pediatric chest radiographs, and 7 datasets included pediatric and adult patients. We identified 55 articles that implemented an AI model to interpret pediatric chest radiographs or pediatric and adult chest radiographs. Classification of chest radiographs as pneumonia was the most common application of AI, evaluated in 65% of the studies. Although many studies report high diagnostic accuracy, most algorithms were not validated on external datasets. Most AI studies for pediatric chest radiograph interpretation have focused on a limited number of diseases, and progress is hindered by a lack of large-scale pediatric chest radiograph datasets.
Affiliation(s)
- Sirwa Padash, Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Saskatoon, Saskatchewan, S7N 0W8, Canada; Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Mohammad Reza Mohebbian, Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Scott J Adams, Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Saskatoon, Saskatchewan, S7N 0W8, Canada
- Robert D E Henderson, Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Saskatoon, Saskatchewan, S7N 0W8, Canada
- Paul Babyn, Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Saskatoon, Saskatchewan, S7N 0W8, Canada
104
Canoni-Meynet L, Verdot P, Danner A, Calame P, Aubry S. Added value of an artificial intelligence solution for fracture detection in the radiologist's daily trauma emergencies workflow. Diagn Interv Imaging 2022; 103:594-600. [PMID: 35780054] [DOI: 10.1016/j.diii.2022.06.004]
Abstract
PURPOSE: The main objective of this study was to compare radiologists' performance without and with artificial intelligence (AI) assistance for the detection of bone fractures from trauma emergencies.
MATERIALS AND METHODS: Five hundred consecutive patients (232 women, 268 men) with a mean age of 37 ± 28 (SD) years (age range: 0.25-99 years) were retrospectively included. Three radiologists independently interpreted radiographs without and then with AI assistance after a minimum 1-month washout period. The ground truth was determined by consensus reading between musculoskeletal radiologists and AI results. Patient-wise sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for fracture detection, as well as reading time, were compared between unassisted and AI-assisted readings. Performance was also assessed with receiver operating characteristic (ROC) curves.
RESULTS: AI improved the patient-wise sensitivity of radiologists for fracture detection by 20% (95% confidence interval [CI]: 14-26; P < 0.001) and their specificity by 0.6% (95% CI: -0.9 to 1.5; P = 0.47). It increased the PPV by 2.9% (95% CI: 0.4-5.4; P = 0.08) and the NPV by 10% (95% CI: 6.8-13.3; P < 0.001). With AI, the area under the ROC curve for fracture detection increased by 10.6%, 10.2% and 9.9% for the three readers, and their mean reading time per patient decreased by 10, 16 and 12 s, respectively (P < 0.001).
CONCLUSIONS: AI-assisted radiologists work better and faster than unassisted radiologists. AI is of great aid to radiologists in daily trauma emergencies and could reduce the cost of missed fractures.
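As a worked illustration of the patient-wise metrics compared above, the following sketch computes sensitivity, specificity, PPV and NPV from confusion-matrix counts for an unassisted and an AI-assisted reading session. The counts are invented placeholders, not the study's data.

```python
# Patient-wise sensitivity, specificity, PPV and NPV for one reader,
# unassisted vs. AI-assisted. The confusion-matrix counts below are
# invented placeholders, not the figures reported in the cited study.
from typing import Dict

def reader_metrics(tp: int, fp: int, tn: int, fn: int) -> Dict[str, float]:
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

unassisted = reader_metrics(tp=180, fp=20, tn=250, fn=50)    # placeholder counts
ai_assisted = reader_metrics(tp=215, fp=18, tn=252, fn=15)   # placeholder counts

for name in ("sensitivity", "specificity", "ppv", "npv"):
    delta = ai_assisted[name] - unassisted[name]
    print(f"{name:>11}: {unassisted[name]:.3f} -> {ai_assisted[name]:.3f} ({delta:+.3f})")
```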
Affiliation(s)
- Pierre Verdot, Department of Radiology, CHU de Besançon, Besançon 25030, France
- Alexis Danner, Department of Radiology, CHU de Besançon, Besançon 25030, France
- Paul Calame, Department of Radiology, CHU de Besançon, Besançon 25030, France
- Sébastien Aubry, Department of Radiology, CHU de Besançon, Besançon 25030, France; Nanomedicine Laboratory EA4662, Université de Franche-Comté, Besançon 25030, France
105
Rao BH, Trieu JA, Nair P, Gressel G, Venu M, Venu RP. Artificial intelligence in endoscopy: More than what meets the eye in screening colonoscopy and endosonographic evaluation of pancreatic lesions. Artif Intell Gastrointest Endosc 2022; 3:16-30. [DOI: 10.37126/aige.v3.i3.16]
106
Wu W, Zhang B, Li S, Liu H. Exploring Factors of the Willingness to Accept AI-Assisted Learning Environments: An Empirical Investigation Based on the UTAUT Model and Perceived Risk Theory. Front Psychol 2022; 13:870777. [PMID: 35814061] [PMCID: PMC9270016] [DOI: 10.3389/fpsyg.2022.870777]
Abstract
Artificial intelligence (AI) technology has been widely applied in many fields, and AI-assisted learning environments have been implemented in classrooms to facilitate the innovation of pedagogical models. However, college students' willingness to accept (WTA) AI-assisted learning environments has been largely ignored. Exploring the factors that influence college students' willingness to use AI can promote the application of AI technology in higher education. Based on the Unified Theory of Acceptance and Use of Technology (UTAUT) and the theory of perceived risk, this study identified six factors that may influence students' willingness to use AI and analyzed their relationships with WTA AI-assisted learning environments. A model including six hypotheses was constructed to test the factors affecting students' WTA. The results indicated that college students showed "weak rejection" of the construction of AI-assisted learning environments. Effort expectancy (EE), performance expectancy (PE), and social influence (SI) were all positively related to college students' WTA AI-assisted learning environments, while psychological risk (PR) had a significant negative influence on students' WTA. The findings of this study will be helpful for carrying out risk communication, which can promote the construction of AI-assisted learning environments.
Affiliation(s)
- Hehai Liu, College of Education Science, Anhui Normal University, Wuhu, China
107
Wang Y, Li Y, Lin G, Zhang Q, Zhong J, Zhang Y, Ma K, Zheng Y, Lu G, Zhang Z. Lower-extremity fatigue fracture detection and grading based on deep learning models of radiographs. Eur Radiol 2022; 33:555-565. [PMID: 35748901] [DOI: 10.1007/s00330-022-08950-w]
Abstract
OBJECTIVES: To identify the feasibility of deep learning-based diagnostic models for detecting and assessing lower-extremity fatigue fracture severity on plain radiographs.
METHODS: This retrospective study enrolled 1151 X-ray images (tibiofibula/foot: 682/469) of fatigue fractures and 2842 X-ray images (tibiofibula/foot: 2000/842) without abnormal presentations from two clinical centers. After labeling the lesions, images from one center (tibiofibula/foot: 2539/1180) were allocated at 7:1:2 for model construction, and the remaining images from the other center (tibiofibula/foot: 143/131) were used for external validation. A ResNet-50 and a triplet branch network were adopted to construct diagnostic models for detection and grading. Detection models were evaluated with sensitivity, specificity, and area under the receiver operating characteristic curve (AUC), while grading models were evaluated with accuracy derived from the confusion matrix. Visual estimations by radiologists were performed for comparison with the models.
RESULTS: For the detection model on tibiofibula, a sensitivity of 95.4%/85.5%, a specificity of 80.1%/77.0%, and an AUC of 0.965/0.877 were achieved in the internal testing/external validation set. The detection model on foot reached a sensitivity of 96.4%/90.8%, a specificity of 76.0%/66.7%, and an AUC of 0.947/0.911. The detection models showed performance superior to the junior radiologist and comparable to the intermediate or senior radiologist. The overall accuracy of the diagnostic model was 78.5%/62.9% for tibiofibula and 74.7%/61.1% for foot in the internal testing/external validation set.
CONCLUSIONS: The deep learning-based models could be applied to plain radiographs to assist in the detection and grading of fatigue fractures of the tibiofibula and foot.
KEY POINTS:
- Fatigue fractures on radiographs are relatively difficult to detect and apt to be misdiagnosed.
- Detection and grading models based on deep learning were constructed on a large cohort of radiographs with lower-extremity fatigue fractures.
- The detection model with high sensitivity would help to reduce the misdiagnosis of lower-extremity fatigue fractures.
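The detection arm described in the methods (a ResNet-50 producing a fracture/normal call) corresponds to a standard transfer-learning setup, sketched below in PyTorch/torchvision as an assumption for illustration. The triplet branch grading network, pretraining choices, and all hyperparameters of the cited study are not reproduced.

```python
# Sketch of the detection side only: a ResNet-50 backbone with a single-logit
# head for fracture vs. normal. Weights and hyperparameters are illustrative
# assumptions; the triplet branch grading network and the training protocol
# of the cited study are not reproduced.
import torch
import torch.nn as nn
from torchvision import models

def build_fracture_detector() -> nn.Module:
    backbone = models.resnet50()                          # no pretrained weights here
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # single logit output
    return backbone

model = build_fracture_detector()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on synthetic tensors standing in for radiographs.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([[1.0], [0.0], [1.0], [0.0]])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```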
Affiliation(s)
- Yanping Wang, Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, 305 East Zhongshan Rd, Nanjing, 210002, China
- Guang Lin, Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, 305 East Zhongshan Rd, Nanjing, 210002, China
- Qirui Zhang, Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, 305 East Zhongshan Rd, Nanjing, 210002, China
- Jing Zhong, Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, 305 East Zhongshan Rd, Nanjing, 210002, China
- Yan Zhang, Department of Radiology, Nanjing Qinhuai Medical Area, Jinling Hospital, Nanjing, 210002, China
- Kai Ma, Tencent Jarvis Lab, Shenzhen, 518000, China
- Guangming Lu, Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, 305 East Zhongshan Rd, Nanjing, 210002, China; State Key Laboratory of Analytical Chemistry for Life Science, Nanjing University, Nanjing, 210093, China
- Zhiqiang Zhang, Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, 305 East Zhongshan Rd, Nanjing, 210002, China; State Key Laboratory of Analytical Chemistry for Life Science, Nanjing University, Nanjing, 210093, China
108
Altini N, Prencipe B, Cascarano GD, Brunetti A, Brunetti G, Triggiani V, Carnimeo L, Marino F, Guerriero A, Villani L, Scardapane A, Bevilacqua V. Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.157]
109
Saba L, Antignani PL, Gupta A, Cau R, Paraskevas KI, Poredos P, Wasserman B, Kamel H, Avgerinos ED, Salgado R, Caobelli F, Aluigi L, Savastano L, Brown M, Hatsukami T, Hussein E, Suri JS, Mansilha A, Wintermark M, Staub D, Montequin JF, Rodriguez RTT, Balu N, Pitha J, Kooi ME, Lal BK, Spence JD, Lanzino G, Marcus HS, Mancini M, Chaturvedi S, Blinc A. International Union of Angiology (IUA) consensus paper on imaging strategies in atherosclerotic carotid artery imaging: From basic strategies to advanced approaches. Atherosclerosis 2022; 354:23-40. [DOI: 10.1016/j.atherosclerosis.2022.06.1014]
110
AlNuaimi D, AlKetbi R. The role of artificial intelligence in plain chest radiographs interpretation during the Covid-19 pandemic. BJR Open 2022; 4:20210075. [PMID: 36105414] [PMCID: PMC9459850] [DOI: 10.1259/bjro.20210075]
Abstract
Artificial intelligence (AI) plays a crucial role in the future development of all healthcare sectors, ranging from clinical assistance of physicians with accurate diagnosis, prognosis and treatment to the development of vaccinations and the fight against the Covid-19 global pandemic. AI has an important role in diagnostic radiology, where algorithms can be trained on large datasets to provide timely and accurate reports of the radiological images presented. This has led to the development of several AI algorithms that can be used in regions with a scarcity of radiologists during the current pandemic, simply denoting the presence or absence of Covid-19 pneumonia on plain chest radiographs of PCR-positive patients, as well as helping to alleviate over-burdened radiology departments by accelerating report delivery. Plain chest radiography is the most common radiological study in the emergency department setting; it is readily available, fast and cheap, can be used to triage patients, is portable on the medical wards, and can serve as the initial radiological examination in Covid-19-positive patients to detect pneumonic changes. Numerous studies have compared several AI algorithms with experienced thoracic radiologists on plain chest radiograph reports, measuring the accuracy of each in Covid-19 patients. The majority of studies have reported performance equal or superior to that of well-experienced thoracic radiologists in predicting the presence or absence of Covid-19 pneumonic changes on the provided chest radiographs.
Affiliation(s)
- Dana AlNuaimi, Westford University-UCAM, Sharjah, United Arab Emirates
- Reem AlKetbi, Dubai Health Authority, Dubai, United Arab Emirates
111
Gong B, Soyer P, McInnes MDF, Patlas MN. Elements of a Good Radiology Artificial Intelligence Paper. Can Assoc Radiol J 2022; 74:231-233. [PMID: 35535439] [DOI: 10.1177/08465371221101195]
Affiliation(s)
- Bo Gong, Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Philippe Soyer, Department of Body and Interventional Imaging, Hôpital Cochin & Université Paris Centre, Paris, France
- Matthew D. F. McInnes, University of Ottawa Department of Radiology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
112
Attracting the next generation of radiologists: a statement by the European Society of Radiology (ESR). Insights Imaging 2022; 13:84. [PMID: 35507198] [PMCID: PMC9066129] [DOI: 10.1186/s13244-022-01221-8]
Abstract
With demand for diagnostic imaging and image-guided interventions increasing each year, it is important for the radiology workforce to expand in line with need. National and international societies such as the European Society of Radiology have an important role to play in showcasing the diversity of radiology and highlighting the key role radiologists have in patient care and clinical decision-making, in order to attract the next generation of radiologists. Medical students are an important group to engage with early. Meaningful exposure of undergraduates to radiology, with an integrated programme and clinical placements in radiology, is essential. Elective courses and dedicated 1-year Bachelor or Masters imaging programmes provide medical students with an opportunity for more in-depth study of radiology practice. Undergraduate radiology societies improve opportunities for engagement and mentorship. Innovations such as augmented-reality simulation, artificial intelligence, and image-guided intervention also offer exciting training opportunities. Through these opportunities, students can gain insight into the wide variety of career opportunities in radiology.
113
Singh G, Anand D, Cho W, Joshi GP, Son KC. Hybrid Deep Learning Approach for Automatic Detection in Musculoskeletal Radiographs. Biology (Basel) 2022; 11:665. [PMID: 35625393] [PMCID: PMC9138246] [DOI: 10.3390/biology11050665]
Abstract
Simple Summary: Musculoskeletal disorders affect a large population globally and are becoming one of the foremost health concerns. Treatment is often expensive, and misdiagnosis can cause severe problems. A reliable, fast, and inexpensive automatic recognition system is therefore required that can detect and diagnose abnormalities from radiographs to support effective and efficient decision making for further treatment. In this work, the finger study type from the MURA dataset is considered, because existing models have not achieved the desired performance and accuracy in detecting abnormalities in finger radiographs. A novel deep learning model is proposed in which, after preprocessing and augmentation, finger images are fed into a model that learns discriminative features through multiple hidden layers of dense neural networks and classifies them as normal or abnormal radiographs. The achieved result outperforms existing state-of-the-art models, making the approach suitable for clinical settings. This will help society through early detection of the disorder, reducing the burden on radiologists and the long-term impact on a large population.
Abstract: The practice of deep convolutional neural networks in the field of medicine has gathered immense success and significance. Previously, researchers developed numerous models for detecting abnormalities in musculoskeletal radiographs of the upper extremities, but did not achieve respectable accuracy on finger radiographs. A novel deep neural network-based hybrid architecture named ComDNet-512 is proposed in this paper to efficiently detect bone abnormalities in a patient's musculoskeletal radiographs. ComDNet-512 comprises a three-phase pipeline: compression, training of the dense neural network, and progressive resizing. The hybrid model is trained on finger radiograph samples to make a binary prediction, i.e., normal or abnormal bones. The proposed model showed strong outcomes when cross-validated on testing samples from arthritis patients and compares favourably with state-of-the-art practices, achieving an area under the ROC curve (AUC) of 0.894 (sensitivity = 0.941, specificity = 0.847). The Precision, Recall, F1 Score, and Kappa values, recorded as 0.86, 0.94, 0.89, and 0.78, respectively, are better than those of any previous model. With the growing number of musculoskeletal conditions, deep learning-based computational solutions can play a big role in automated detection in the future.
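To make the reported metric set concrete, the sketch below shows how AUC, sensitivity (recall), specificity, precision, F1 score and Cohen's kappa can be computed for a binary normal/abnormal classifier with scikit-learn. The labels and scores are synthetic placeholders, not ComDNet-512 outputs.

```python
# How the reported metric set (AUC, sensitivity/recall, specificity, precision,
# F1, Cohen's kappa) can be computed for a binary normal/abnormal classifier.
# The labels and scores below are synthetic placeholders, not ComDNet-512 results.
import numpy as np
from sklearn.metrics import (roc_auc_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score, confusion_matrix)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])      # 1 = abnormal radiograph
y_score = np.array([0.91, 0.20, 0.75, 0.62, 0.33, 0.52, 0.88, 0.15, 0.45, 0.40])
y_pred = (y_score >= 0.5).astype(int)                    # illustrative 0.5 threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC:        ", roc_auc_score(y_true, y_score))
print("Sensitivity:", recall_score(y_true, y_pred))
print("Specificity:", tn / (tn + fp))
print("Precision:  ", precision_score(y_true, y_pred))
print("F1 score:   ", f1_score(y_true, y_pred))
print("Kappa:      ", cohen_kappa_score(y_true, y_pred))
```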
Affiliation(s)
- Gurpreet Singh, Department of Computer Science and Engineering, Chandigarh University, Mohali 140413, India
- Darpan Anand, Department of Computer Science and Engineering, Chandigarh University, Mohali 140413, India
- Woong Cho, Department of Software Convergence, Daegu Catholic University, Gyeongsan 38430, Korea
- Gyanendra Prasad Joshi (correspondence), Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
- Kwang Chul Son (correspondence), Department of Information Contents, Kwangwoon University, Seoul 01897, Korea
114
Jussupow E, Spohrer K, Heinzl A. Radiologists' Usage of Diagnostic AI Systems. Business & Information Systems Engineering 2022. [DOI: 10.1007/s12599-022-00750-2]
Abstract
While diagnostic AI systems are implemented in medical practice, it is still unclear how physicians embed them in diagnostic decision making. This study examines how radiologists come to use diagnostic AI systems in different ways and what role AI assessments play in this process if they confirm or disconfirm radiologists' own judgment. The study draws on rich qualitative data from a revelatory case study of an AI system for stroke diagnosis at a University Hospital to elaborate how three sensemaking processes revolve around confirming and disconfirming AI assessments. Through context-specific sensedemanding, sensegiving, and sensebreaking, radiologists develop distinct usage patterns of AI systems. The study reveals that diagnostic self-efficacy influences which of the three sensemaking processes radiologists engage in. In deriving six propositions, the account of sensemaking and usage of diagnostic AI systems in medical practice paves the way for future research.
115
Pham N, Ju C, Kong T, Mukherji SK. Artificial Intelligence in Head and Neck Imaging. Semin Ultrasound CT MR 2022; 43:170-175. [PMID: 35339257] [DOI: 10.1053/j.sult.2022.02.006]
Abstract
Artificial intelligence (AI) can be applied to head and neck imaging to augment image quality and various clinical tasks including segmentation of tumor volumes, tumor characterization, tumor prognostication and treatment response, and prediction of metastatic lymph node disease. Head and neck oncology care is well positioned for the application of AI since treatment is guided by a wealth of information derived from CT, MRI, and PET imaging data. AI-based methods can integrate complex imaging, histologic, molecular, and clinical data to model tumor biology and behavior, and potentially identify associations, far beyond what conventional qualitative imaging can provide alone.
Affiliation(s)
- Nancy Pham, Neuroradiology, Radiology Department, University of California Los Angeles David Geffen School of Medicine, Los Angeles, CA; Neuroradiology, Radiology Department, University of Illinois
- Connie Ju, Neuroradiology, Radiology Department, University of California Los Angeles David Geffen School of Medicine, Los Angeles, CA
- Tracie Kong, Neuroradiology, Radiology Department, University of California Los Angeles David Geffen School of Medicine, Los Angeles, CA
- Suresh K Mukherji, Neuroradiology, Radiology Department, University of California Los Angeles David Geffen School of Medicine, Los Angeles, CA
116
Abstract
Artificial intelligence (AI) is a branch of computer science in which computer systems are designed to perform tasks that mimic human intelligence. Today, AI is reshaping day-to-day life and has numerous emerging medical applications poised to profoundly reshape the practice of veterinary medicine. In this Currents in One Health, we discuss the essential elements of AI for veterinary practitioners with the aim to help them make informed decisions in applying AI technologies into their practices. Veterinarians will play an integral role in ensuring the appropriate uses and good curation of data. The expertise of veterinary professionals will be vital to ensuring good data and, subsequently, AI that meets the needs of the profession. Readers interested in an in-depth description of AI and veterinary medicine are invited to explore a complementary manuscript of this Currents in One Health available in the May 2022 issue of the American Journal of Veterinary Research.
Affiliation(s)
- Ryan B Appleby, Department of Clinical Studies, Ontario Veterinary College, University of Guelph, Guelph, ON, Canada
117
Jussupow E, Spohrer K, Heinzl A. Identity Threats as a Reason for Resistance to Artificial Intelligence: Survey Study With Medical Students and Professionals. JMIR Form Res 2022; 6:e28750. [PMID: 35319465] [PMCID: PMC8987955] [DOI: 10.2196/28750]
Abstract
Background: Information systems based on artificial intelligence (AI) have increasingly spurred controversies among medical professionals as they start to outperform medical experts in tasks that previously required complex human reasoning. Prior research in other contexts has shown that such a technological disruption can result in professional identity threats and provoke negative attitudes and resistance to using technology. However, little is known about how AI systems evoke professional identity threats in medical professionals and under which conditions they actually provoke negative attitudes and resistance.
Objective: The aim of this study is to investigate how medical professionals' resistance to AI can be understood as a result of professional identity threats and temporal perceptions of AI systems. It examines two dimensions of medical professional identity threat: threats to physicians' expert status (professional recognition) and threats to physicians' role as an autonomous care provider (professional capabilities). The paper assesses whether these professional identity threats predict resistance to AI systems and whether their importance changes with professional experience and with the perceived temporal relevance of AI systems.
Methods: We conducted 2 web-based surveys with 164 medical students and 42 experienced physicians across different specialties. Participants were provided with a vignette of a general medical AI system. We measured the experienced identity threats, resistance attitudes, and perceived temporal distance of AI. In a subsample, we collected additional data on perceived identity enhancement to better understand how participants perceived the upcoming technological change as more than a mere threat. Qualitative data were coded in a content analysis. Quantitative data were analyzed in regression analyses.
Results: Both threats to professional recognition and threats to professional capabilities contributed to perceived self-threat and resistance to AI. Self-threat was negatively associated with resistance. Threats to professional capabilities directly affected resistance to AI, whereas the effect of threats to professional recognition was fully mediated through self-threat. Medical students experienced stronger identity threats and resistance to AI than medical professionals. The temporal distance of AI changed the importance of professional identity threats: if AI systems were perceived as relevant only in the distant future, the effect of threats to professional capabilities was weaker, whereas the effect of threats to professional recognition was stronger. The effect of threats remained robust after including perceived identity enhancement. The results show that the distinct dimensions of medical professional identity are affected by the upcoming technological change through AI.
Conclusions: Our findings demonstrate that AI systems can be perceived as a threat to medical professional identity. Both threats to professional recognition and threats to professional capabilities contribute to resistance attitudes toward AI and need to be considered in the implementation of AI systems in clinical practice.
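The quantitative analysis described (regressing resistance on the two threat dimensions, with self-threat as a mediator) can be sketched as follows. The variable names, synthetic data and coefficients are assumptions for demonstration only and do not reproduce the study's measures or findings.

```python
# Illustrative regression of resistance on the two identity-threat dimensions
# plus self-threat, in the spirit of the analyses described in the abstract.
# All variables and data are synthetic assumptions; no study results are reproduced.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "threat_recognition": rng.normal(size=n),    # threat to professional recognition
    "threat_capabilities": rng.normal(size=n),   # threat to professional capabilities
})
# Arbitrary synthetic relationships, chosen only so the regression has signal.
df["self_threat"] = 0.5 * df["threat_recognition"] + rng.normal(scale=0.8, size=n)
df["resistance"] = (0.4 * df["threat_capabilities"] + 0.3 * df["self_threat"]
                    + rng.normal(scale=0.8, size=n))

model = smf.ols(
    "resistance ~ threat_recognition + threat_capabilities + self_threat", data=df
).fit()
print(model.summary().tables[1])   # coefficient table
```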
Affiliation(s)
- Kai Spohrer, Frankfurt School of Finance & Management, Frankfurt, Germany
118
Alsharif W, Qurashi A, Toonsi F, Alanazi A, Alhazmi F, Abdulaal O, Aldahery S, Alshamrani K. A qualitative study to explore opinions of Saudi Arabian radiologists concerning AI-based applications and their impact on the future of the radiology. BJR Open 2022; 4:20210029. [PMID: 36105424] [PMCID: PMC9459863] [DOI: 10.1259/bjro.20210029]
Abstract
Objective: The aim of this study was to explore opinions and views towards radiology AI among Saudi Arabian radiologists, including both consultants and trainees.
Methods: A qualitative approach was adopted, with radiologists working in radiology departments in the Western region of Saudi Arabia invited to participate in this interview-based study. Semi-structured interviews (n = 30) were conducted with consultant radiologists and trainees. A qualitative data analysis framework was used based on Miles and Huberman's philosophical underpinnings.
Results: The non-use of AI-based applications in clinical practice and the absence of radiologists' involvement in AI development were attributed to several factors, such as lack of training and support. Despite the expected benefits and positive impacts of AI on radiology, a reluctance to use AI-based applications might exist due to a lack of knowledge, fear of error and concerns about losing jobs and/or power. Medical students' radiology education and training appeared to be influenced by the absence of a governing body and training programmes.
Conclusion: The results of this study support the establishment of a governing body or national association to work in parallel with universities in monitoring training and integrating AI into the medical education curriculum and residency programmes.
Advances in knowledge: An extensive debate about AI-based applications and their potential effects was noted, and considerable expectations of transformative impact may be realised when AI is fully integrated into clinical practice. Therefore, future education and training programmes on how to work with AI-based applications in clinical practice are recommended.
Affiliation(s)
- Abdulaziz Qurashi, Department of Diagnostic Radiology Technology, College of Applied Medical Sciences, Taibah University, Madinah, Saudi Arabia
- Fadi Toonsi, Department of Radiology, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Fahad Alhazmi, Department of Diagnostic Radiology Technology, College of Applied Medical Sciences, Taibah University, Madinah, Saudi Arabia
- Osamah Abdulaal, Department of Diagnostic Radiology Technology, College of Applied Medical Sciences, Taibah University, Madinah, Saudi Arabia
- Shrooq Aldahery, Applied Radiologic Technology, College of Applied Medical Science, University of Jeddah, Jeddah, Saudi Arabia
119
Ruan G, Qi J, Cheng Y, Liu R, Zhang B, Zhi M, Chen J, Xiao F, Shen X, Fan L, Li Q, Li N, Qiu Z, Xiao Z, Xu F, Lv L, Chen M, Ying S, Chen L, Tian Y, Li G, Zhang Z, He M, Qiao L, Zhang Z, Chen D, Cao Q, Nian Y, Wei Y. Development and Validation of a Deep Neural Network for Accurate Identification of Endoscopic Images From Patients With Ulcerative Colitis and Crohn's Disease. Front Med (Lausanne) 2022; 9:854677. [PMID: 35372443] [PMCID: PMC8974241] [DOI: 10.3389/fmed.2022.854677]
Abstract
Background and Aim: The identification of ulcerative colitis (UC) and Crohn's disease (CD) is a key element affecting therapeutic response, but it is often difficult for less experienced endoscopists to distinguish UC from CD. We therefore aimed to develop and validate a deep learning diagnostic system trained on a large number of colonoscopy images to distinguish UC and CD.
Methods: This multicenter diagnostic study was performed in 5 hospitals in China. Normal individuals and patients with active inflammatory bowel disease (IBD) were enrolled. A dataset of 1,772 participants with 49,154 colonoscopy images was obtained between January 2018 and November 2020. We developed a deep learning model based on a deep convolutional neural network (CNN). To assess the applicability of the deep learning model in clinical practice, we compared it with 10 endoscopists and applied it in 3 hospitals across China.
Results: The identification accuracy obtained by the deep model was superior to that of experienced endoscopists per patient (deep model vs. trainee endoscopist, 99.1% vs. 78.0%; deep model vs. competent endoscopist, 99.1% vs. 92.2%; P < 0.001) and per lesion (deep model vs. trainee endoscopist, 90.4% vs. 59.7%; deep model vs. competent endoscopist, 90.4% vs. 69.9%; P < 0.001). In addition, the mean reading time was greatly reduced by the deep model (deep model vs. endoscopists, 6.20 s vs. 2,425.00 s; P < 0.001).
Conclusion: We developed a deep model to assist with the clinical diagnosis of IBD. This provides a diagnostic tool for medical education and for clinicians to improve the efficiency of diagnosis and treatment.
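Because the abstract reports accuracy both per patient and per lesion, the sketch below shows one simple way per-lesion (per-image) predictions could be rolled up to a per-patient call. Majority voting and the toy data are assumptions for illustration; the cited study's actual aggregation rule is not stated here.

```python
# Rolling per-lesion (per-image) predictions up to a per-patient call.
# Majority voting and the toy data are assumptions for illustration only.
from collections import Counter, defaultdict

# (patient_id, predicted_label, true_label) for individual images
lesion_predictions = [
    ("p1", "UC", "UC"), ("p1", "UC", "UC"), ("p1", "CD", "UC"),
    ("p2", "CD", "CD"), ("p2", "CD", "CD"),
    ("p3", "normal", "normal"), ("p3", "UC", "normal"),
]

per_lesion_correct = sum(pred == true for _, pred, true in lesion_predictions)
print("per-lesion accuracy:", per_lesion_correct / len(lesion_predictions))

by_patient = defaultdict(list)
truth = {}
for pid, pred, true in lesion_predictions:
    by_patient[pid].append(pred)
    truth[pid] = true

patient_correct = sum(
    Counter(preds).most_common(1)[0][0] == truth[pid]    # majority vote per patient
    for pid, preds in by_patient.items()
)
print("per-patient accuracy:", patient_correct / len(by_patient))
```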
Affiliation(s)
- Guangcong Ruan, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Jing Qi, College of Biomedical Engineering and Imaging Medicine, Army Medical University (Third Military Medical University), Chongqing, China
- Yi Cheng, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Rongbei Liu, Department of Gastroenterology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Bingqiang Zhang, Department of Gastroenterology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Min Zhi, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Department of Gastroenterology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Junrong Chen, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Department of Gastroenterology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Fang Xiao, Department of Gastroenterology, Tongji Hospital of Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xiaochun Shen, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Ling Fan, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Qin Li, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Ning Li, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Zhujing Qiu, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Zhifeng Xiao, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Fenghua Xu, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Linling Lv, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Minjia Chen, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Senhong Ying, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Lu Chen, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Yuting Tian, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Guanhu Li, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Zhou Zhang, Department of Gastroenterology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Mi He, College of Biomedical Engineering and Imaging Medicine, Army Medical University (Third Military Medical University), Chongqing, China
- Liang Qiao, College of Biomedical Engineering and Imaging Medicine, Army Medical University (Third Military Medical University), Chongqing, China
- Zhu Zhang, College of Biomedical Engineering and Imaging Medicine, Army Medical University (Third Military Medical University), Chongqing, China
- Dongfeng Chen, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Qian Cao (correspondence), Department of Gastroenterology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yongjian Nian, College of Biomedical Engineering and Imaging Medicine, Army Medical University (Third Military Medical University), Chongqing, China
- Yanling Wei, Department of Gastroenterology, Daping Hospital, Army Medical University (Third Military Medical University), Chongqing, China
120
Lima AA, Mridha MF, Das SC, Kabir MM, Islam MR, Watanobe Y. A Comprehensive Survey on the Detection, Classification, and Challenges of Neurological Disorders. Biology (Basel) 2022; 11:469. [PMID: 35336842] [PMCID: PMC8945195] [DOI: 10.3390/biology11030469]
Abstract
Neurological disorders (NDs) are becoming more common, posing a concern for pregnant women, parents, healthy infants, and children. Neurological disorders arise in a wide variety of forms, each with its own origins, complications, and outcomes. In recent years, the intricacy of brain function has become better understood thanks to neuroimaging modalities such as magnetic resonance imaging (MRI), magnetoencephalography (MEG), and positron emission tomography (PET). With high-performance computational tools and various machine learning (ML) and deep learning (DL) methods, these modalities have opened exciting possibilities for identifying and diagnosing neurological disorders. This study follows a computer-aided diagnosis methodology, providing an overview of pre-processing and feature extraction techniques. The performance of existing ML and DL approaches for detecting NDs is critically reviewed and compared in this article. A comprehensive portion of the study also covers the modalities and disease-specific datasets of images, signals, and speech used for detection. Related work on NDs is summarized, although this domain has comparatively few works focused on disease and detection criteria. Standard evaluation metrics are also presented for better result analysis and comparison, and the research is outlined in a consistent workflow. The article concludes with a discussion of open research challenges and directions for future work in this emerging field.
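As a generic illustration of the computer-aided diagnosis workflow the survey describes (pre-processing, feature extraction, classification, evaluation), the sketch below chains these stages with scikit-learn on synthetic features. The modalities, datasets and specific models compared in the survey are not reproduced.

```python
# Generic computer-aided diagnosis workflow: pre-processing -> feature
# extraction -> classifier -> evaluation, on synthetic features. Modalities,
# datasets and models discussed in the survey are not reproduced here.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 50))        # stand-in for features derived from MRI/EEG/etc.
y = rng.integers(0, 2, size=200)      # 0 = control, 1 = disorder (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

cad_pipeline = Pipeline([
    ("scale", StandardScaler()),        # pre-processing
    ("features", PCA(n_components=10)), # simple feature extraction
    ("classifier", SVC(kernel="rbf")),  # classical ML classifier
])
cad_pipeline.fit(X_train, y_train)
print(classification_report(y_test, cad_pipeline.predict(X_test)))
```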
Affiliation(s)
- Aklima Akter Lima, Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- M. Firoz Mridha, Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Sujoy Chandra Das, Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Muhammad Mohsin Kabir, Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam, Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Yutaka Watanobe, Department of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu 965-8580, Japan
121
Valen J, Balki I, Mendez M, Qu W, Levman J, Bilbily A, Tyrrell PN. Quantifying uncertainty in machine learning classifiers for medical imaging. Int J Comput Assist Radiol Surg 2022; 17:711-718. [DOI: 10.1007/s11548-022-02578-3]
122
Gong B, Salehi F, Hurrell C, Patlas MN. 2021 Year in Review. Can Assoc Radiol J 2022; 73:443-445. [PMID: 35272532] [DOI: 10.1177/08465371221083860]
Affiliation(s)
- Bo Gong, Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Fateme Salehi, Department of Radiology, McMaster University, Juravinski Hospital, Hamilton, ON, Canada
- Casey Hurrell, Canadian Association of Radiologists, Ottawa, ON, Canada
- Michael N Patlas, Department of Radiology, McMaster University, Hamilton, ON, Canada
123
Bonekamp D, Schlemmer HP. [Artificial intelligence (AI) in radiology: Do we need as many radiologists in the future?]. Urologe A 2022; 61:392-399. [PMID: 35277758] [DOI: 10.1007/s00120-022-01768-w]
Abstract
We are in the middle of a digital revolution in medicine. This raises the question of whether specialties such as radiology, which is ostensibly concerned with the interpretation of images, will be particularly changed by this revolution. In particular, it should be discussed whether the completion of initially simple, then increasingly complex, image analysis tasks by computer systems may lead to a reduced need for radiologists in the future. What distinguishes radiology in particular is its key position between advanced technology and medical care. This article argues that not only radiology but every medical discipline will be affected by innovations arising from the digital revolution, that a redefinition of medical specialties focusing on imaging and visual interpretation makes sense, and that the arrival of artificial intelligence (AI) in radiology is to be welcomed in the context of ever larger amounts of image data, if the growing volume of images is to be handled at all with the current number of radiologists. In the academic environment, the balance between research and teaching on the one hand and patient care on the other is becoming more difficult to maintain; AI can help improve efficiency and balance in these areas. With regard to specialist training, information technology topics are expected to be integrated into the radiological curriculum. Radiology acts as a pioneer shaping the entry of AI into medicine. It is to be expected that by the time radiologists could be substantially replaced by AI, the replacement of human contributions in other medical and non-medical fields will also be well advanced.
Affiliation(s)
- David Bonekamp, Abteilung für Radiologie (E010), Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- H-P Schlemmer, Abteilung für Radiologie (E010), Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
124
Vobugari N, Raja V, Sethi U, Gandhi K, Raja K, Surani SR. Advancements in Oncology with Artificial Intelligence-A Review Article. Cancers (Basel) 2022; 14:1349. [PMID: 35267657] [PMCID: PMC8909088] [DOI: 10.3390/cancers14051349]
Abstract
Well-trained machine learning (ML) and artificial intelligence (AI) systems can provide clinicians with therapeutic assistance, potentially increasing efficiency and improving efficacy. ML has demonstrated high accuracy in oncology-related diagnostic imaging, including screening mammography interpretation, colon polyp detection, and glioma classification and grading. By utilizing ML techniques, the manual steps of detecting and segmenting lesions are greatly reduced. ML-based tumor imaging analysis is independent of the experience level of the evaluating physician, and the results are expected to be more standardized and accurate. One of the biggest challenges is generalizability worldwide. The current detection and screening methods for colon polyps and breast cancer generate vast amounts of data, making them ideal areas for studying the global standardization of artificial intelligence. Central nervous system cancers are rare and have poor prognoses under current management standards. ML offers the prospect of unraveling undiscovered features from routinely acquired neuroimaging to improve treatment planning, prognostication, monitoring, and response assessment of CNS tumors such as gliomas. Studying AI in such rare cancer types may improve standard management by augmenting personalized/precision medicine. This review aims to provide clinicians and medical researchers with a basic understanding of how ML works and of its role in oncology, especially in breast cancer, colorectal cancer, and primary and metastatic brain cancer. Understanding AI basics, current achievements, and future challenges is crucial for advancing the use of AI in oncology.
Affiliation(s)
- Nikitha Vobugari, Department of Internal Medicine, Medstar Washington Hospital Center, Washington, DC 20010, USA
- Vikranth Raja, Department of Medicine, P.S.G Institute of Medical Sciences and Research, Coimbatore 641004, Tamil Nadu, India
- Udhav Sethi, School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Kejal Gandhi, Department of Internal Medicine, Medstar Washington Hospital Center, Washington, DC 20010, USA
- Kishore Raja, Department of Pediatric Cardiology, University of Minnesota, Minneapolis, MN 55454, USA
- Salim R. Surani, Department of Pulmonary and Critical Care, Texas A&M University, College Station, TX 77843, USA
125
The current state of knowledge on imaging informatics: a survey among Spanish radiologists. Insights Imaging 2022; 13:34. [PMID: 35235068] [PMCID: PMC8891400] [DOI: 10.1186/s13244-022-01164-0]
Abstract
Background: There is growing concern about the impact of artificial intelligence (AI) on radiology and the future of the profession. The aim of this study was to evaluate general knowledge of, and concerns about, trends in imaging informatics among radiologists working in Spain (residents and attending physicians). For this purpose, an online survey was conducted among radiologists working in Spain, with questions on knowledge of terminology and technologies, the need for regulated academic training on AI, and concerns about the implications of using these technologies.
Results: A total of 223 radiologists answered the survey, of whom 76.7% were attending physicians and 23.3% residents. General terms such as AI and algorithm had been heard of or read by at least 75.8% and 57.4% of respondents, respectively, while more specific terms were scarcely known. All respondents consider that they should pursue academic training in medical informatics and new technologies, and 92.9% believe this preparation should be incorporated into the training programme of the specialty. Patient safety was the main concern for 54.2% of respondents. Job loss was not seen as a peril by 45.7% of participants.
Conclusions: Although there is a lack of knowledge about AI among Spanish radiologists, there is a will to explore such topics and a general belief that radiologists should be trained in these matters. Based on the results, a consensus is needed to change the current training curriculum to better prepare future radiologists.
Collapse
|
126
|
Yang L, Ene IC, Arabi Belaghi R, Koff D, Stein N, Santaguida PL. Stakeholders' perspectives on the future of artificial intelligence in radiology: a scoping review. Eur Radiol 2022; 32:1477-1495. [PMID: 34545445 DOI: 10.1007/s00330-021-08214-z] [Citation(s) in RCA: 55] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Revised: 06/11/2021] [Accepted: 07/12/2021] [Indexed: 12/31/2022]
Abstract
OBJECTIVES Artificial intelligence (AI) has the potential to impact clinical practice and healthcare delivery. AI is of particular significance in radiology due to its use in automatic analysis of image characteristics. This scoping review examines stakeholder perspectives on AI use in radiology and the benefits, risks, and challenges to its integration. METHODS A search was conducted from 1960 to November 2019 in EMBASE, PubMed/MEDLINE, Web of Science, Cochrane Library, CINAHL, and grey literature. Publications reflecting stakeholder attitudes toward AI were included with no restrictions. RESULTS Commentaries (n = 32), surveys (n = 13), presentation abstracts (n = 8), narrative reviews (n = 8), and a social media study (n = 1) were included from 62 eligible publications. These represent the views of radiologists, surgeons, medical students, patients, computer scientists, and the general public. Seven themes were identified (predicted impact, potential replacement, trust in AI, knowledge of AI, education, economic considerations, and medicolegal implications). Stakeholders anticipate a significant impact on radiology, though replacement of radiologists is unlikely in the near future. Knowledge of AI is limited for non-computer scientists and further education is desired. Many expressed the need for collaboration between radiologists and AI specialists to successfully improve patient care. CONCLUSIONS Stakeholder views generally suggest that AI can improve the practice of radiology and consider the replacement of radiologists unlikely. Most stakeholders identified the need for education and training on AI, as well as collaborative efforts to improve AI implementation. Further research is needed to gain perspectives from non-Western countries and non-radiologist stakeholders, and on economic considerations and medicolegal implications. KEY POINTS Stakeholders generally expressed that AI alone cannot be used to replace radiologists. The scope of practice is expected to shift, with AI use affecting areas from image interpretation to patient care. Patients and the general public do not know how to address potential errors made by AI systems, while radiologists believe that they should be "in-the-loop" in terms of responsibility. Ethical accountability strategies must be developed across governance levels. Students, residents, and radiologists believe that there is a lack of AI education during medical school and residency. The radiology community should work with IT specialists to ensure that AI technology benefits their work and keeps patients at the centre of care.
Collapse
Affiliation(s)
- Ling Yang
- McMaster University, 1280 Main St W, Hamilton, ON, L8S 4L8, Canada
| | - Ioana Cezara Ene
- McMaster University, 1280 Main St W, Hamilton, ON, L8S 4L8, Canada
| | - Reza Arabi Belaghi
- University of Tabriz, 29 Bahman Boulevard, Tabriz, East Azerbaijan Province, Iran
| | - David Koff
- Department of Radiology, McMaster University, 1280 Main St W, Hamilton, ON, L8S 4L8, Canada
| | - Nina Stein
- McMaster Children's Hospital, McMaster University, 1280 Main St W, Hamilton, ON, L8N 3Z5, Canada
| | | |
Collapse
|
127
|
McGenity C, Bossuyt P, Treanor D. Reporting of Artificial Intelligence Diagnostic Accuracy Studies in Pathology Abstracts: Compliance with STARD for Abstracts Guidelines. J Pathol Inform 2022; 13:100091. [PMID: 36268103 PMCID: PMC9576989 DOI: 10.1016/j.jpi.2022.100091] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Revised: 01/21/2022] [Accepted: 01/27/2022] [Indexed: 11/24/2022] Open
Abstract
Artificial intelligence (AI) research is transforming the range of tools and technologies available to pathologists, leading to potentially faster, personalized and more accurate diagnoses for patients. However, for these tools to be used safely and for patient benefit, the implementation of any algorithm must be underpinned by high quality evidence from research that is understandable, replicable, usable and inclusive of the details needed for critical appraisal of potential bias. Evidence suggests that reporting guidelines can improve the completeness of reporting of research, especially with good awareness of the guidelines. The quality of evidence provided by abstracts alone is profoundly important, as they influence the decision of a researcher to read a paper, attend a conference presentation or include a study in a systematic review. AI abstracts at two international pathology conferences were assessed to establish completeness of reporting against the STARD for Abstracts criteria. This reporting guideline is for abstracts of diagnostic accuracy studies and includes a checklist of 11 essential items required to accomplish satisfactory reporting of such an investigation. A total of 3488 abstracts were screened from the United States & Canadian Academy of Pathology annual meeting 2019 and the 31st European Congress of Pathology (ESP Congress). Of these, 51 AI diagnostic accuracy abstracts were identified and assessed against the STARD for Abstracts criteria for completeness of reporting. Completeness of reporting was suboptimal for the 11 essential criteria: a mean of 5.8 (SD 1.5) items was detailed per abstract. Inclusion was variable across the different checklist items, with all abstracts including study objectives and no abstracts including a registration number or registry. Greater use and awareness of the STARD for Abstracts criteria could improve completeness of reporting, and further consideration is needed for areas where AI studies are vulnerable to bias.
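For readers unfamiliar with this kind of reporting audit, the completeness calculation reduces to tallying checklist items per abstract. The Python sketch below is only an illustration of that arithmetic on synthetic placeholder data, not the study's records; the checklist indicators and item positions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_abstracts, n_items = 51, 11

# 1 = item reported, 0 = item missing (random placeholders); per the abstract above,
# study objectives (item 0) were always reported and registration (item 10) never was
reported = rng.integers(0, 2, size=(n_abstracts, n_items))
reported[:, 0] = 1
reported[:, 10] = 0

items_per_abstract = reported.sum(axis=1)
print(f"Mean items reported: {items_per_abstract.mean():.1f} "
      f"(SD {items_per_abstract.std(ddof=1):.1f}) out of {n_items}")
print("Per-item reporting rate:", np.round(reported.mean(axis=0), 2))
```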
Collapse
Affiliation(s)
- Clare McGenity
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- University of Leeds, Leeds, UK
| | - Patrick Bossuyt
- Department of Epidemiology & Data Science, Amsterdam University Medical Centres, University of Amsterdam, Amsterdam, The Netherlands
| | - Darren Treanor
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- University of Leeds, Leeds, UK
- Department of Clinical Pathology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Centre for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
| |
Collapse
|
128
|
Adrien-Maxence H, Emilie B, Alois DLC, Michelle A, Kate A, Mylene A, David B, Marie DS, Jason F, Eric G, Séamus H, Kevin K, Alison L, Megan M, Hester M, Jaime RJ, Zhu X, Micaela Z, Federica M. Comparison of error rates between four pretrained DenseNet convolutional neural network models and 13 board-certified veterinary radiologists when evaluating 15 labels of canine thoracic radiographs. Vet Radiol Ultrasound 2022; 63:456-468. [PMID: 35137490 DOI: 10.1111/vru.13069] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Revised: 12/15/2021] [Accepted: 12/21/2021] [Indexed: 11/29/2022] Open
Abstract
Convolutional neural networks (CNNs) are commonly used as artificial intelligence (AI) tools for evaluating radiographs, but published studies testing their performance in veterinary patients are currently lacking. The purpose of this retrospective, secondary analysis, diagnostic accuracy study was to compare the error rates of four CNNs to the error rates of 13 veterinary radiologists for evaluating canine thoracic radiographs using an independent gold standard. Radiographs acquired at a referral institution were used to evaluate the four CNNs sharing a common architecture. Fifty radiographic studies were selected at random. The studies were evaluated independently by three board-certified veterinary radiologists for the presence or absence of 15 thoracic labels, thus creating the gold standard through the majority rule. The labels included "cardiovascular," "pulmonary," "pleural," "airway," and "other" categories. The error rates for each of the CNNs and for 13 additional board-certified veterinary radiologists were calculated on those same studies. There was no statistically significant difference in the error rates among the four CNNs for the majority of the labels. However, the CNNs' training method impacted the overall error rate for three of the 15 labels. The veterinary radiologists had a statistically lower error rate than all four CNNs overall and for five labels (33%). There was only one label ("esophageal dilation") for which two CNNs were superior to the veterinary radiologists. Findings from the current study raise numerous questions that need to be addressed to further develop and standardize AI in the veterinary radiology environment and to optimize patient care.
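To make the evaluation design concrete, the following Python sketch shows how a majority-rule gold standard can be built from three readers and how an error rate against that standard is computed. All data are synthetic and the reader calls are random placeholders, not the study's radiographs or radiologists.

```python
import numpy as np

rng = np.random.default_rng(0)
n_studies, n_labels = 50, 15            # 50 studies, 15 thoracic labels

# Hypothetical binary calls (1 = label present) from three gold-standard readers
gold_readers = rng.integers(0, 2, size=(3, n_studies, n_labels))
# Majority rule: a label counts as present when at least 2 of the 3 readers mark it
gold_standard = (gold_readers.sum(axis=0) >= 2).astype(int)

# Hypothetical calls from one CNN and one test radiologist on the same studies
cnn_calls = rng.integers(0, 2, size=(n_studies, n_labels))
rad_calls = rng.integers(0, 2, size=(n_studies, n_labels))

def error_rate(calls, gold):
    """Fraction of study-label decisions that disagree with the gold standard."""
    return float((calls != gold).mean())

print(f"CNN error rate:         {error_rate(cnn_calls, gold_standard):.3f}")
print(f"Radiologist error rate: {error_rate(rad_calls, gold_standard):.3f}")
```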
Collapse
Affiliation(s)
- Hespel Adrien-Maxence
- Department of Small Animal Clinical Sciences, University of Tennessee, Knoxville, Tennessee, USA
| | | | | | - Acierno Michelle
- Michelle Acierno Veterinary Radiology Consulting, Kirkland, WA and Summit Veterinary Referral Center, Tacoma, Washington, USA
| | - Alexander Kate
- DMV Veterinary Center, Diagnostic Imaging, Montreal, Quebec, Canada
| | | | - Biller David
- Kansas State University College of Veterinary Medicine, Clinical Sciences, Manhattan, Kansas, USA
| | | | | | - Green Eric
- The Ohio State University, Veterinary Clinical Sciences, Columbus, Ohio, USA
| | - Hoey Séamus
- University College Dublin, Veterinary Diagnostic Imaging, Dublin, Ireland
| | | | - Lee Alison
- Mississippi State University College of Veterinary Medicine, Department of Clinical Sciences, Starkville, Mississippi, USA
| | - MacLellan Megan
- BluePearl Veterinary Partners, Eden Prairie, Minnesota, USA
| | - McAllister Hester
- University College Dublin, Veterinary Diagnostic Imaging, Dublin, Ireland
| | | | - Xiaojuan Zhu
- Office of Information Technology, The University of Tennessee, Knoxville, Tennessee, USA
| | | | - Morandi Federica
- Department of Small Animal Clinical Sciences, University of Tennessee, Knoxville, Tennessee, USA
| |
Collapse
|
129
|
Drewe-Boss P, Enders D, Walker J, Ohler U. Deep learning for prediction of population health costs. BMC Med Inform Decis Mak 2022; 22:32. [PMID: 35114978 PMCID: PMC8812208 DOI: 10.1186/s12911-021-01743-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2020] [Accepted: 12/30/2021] [Indexed: 11/13/2022] Open
Abstract
Background Accurate prediction of healthcare costs is important for optimally managing health costs. However, methods leveraging the medical richness from data such as health insurance claims or electronic health records are missing. Methods Here, we developed a deep neural network to predict future cost from health insurance claims records. We applied the deep network and a ridge regression model to a sample of 1.4 million German insurants to predict total one-year health care costs. Both methods were compared to existing models with various performance measures and were also used to predict patients with a change in costs and to identify relevant codes for this prediction. Results We showed that the neural network outperformed the ridge regression as well as all considered models for cost prediction. Further, the neural network was superior to ridge regression in predicting patients with cost change and identified more specific codes. Conclusion In summary, we showed that our deep neural network can leverage the full complexity of the patient records and outperforms standard approaches. We suggest that the better performance is due to the ability to incorporate complex interactions in the model and that the model might also be used for predicting other health phenotypes. Supplementary Information The online version contains supplementary material available at 10.1186/s12911-021-01743-z.
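The modelling comparison can be illustrated generically. The scikit-learn sketch below is a minimal stand-in, not the authors' pipeline: the claims matrix, code weights, and interaction term are all synthetic assumptions, chosen only to show how a ridge baseline and a small feed-forward network are fit and compared on the same split.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n_patients, n_codes = 2000, 100

# Hypothetical binary indicators for diagnosis/procedure codes per patient
X = rng.integers(0, 2, size=(n_patients, n_codes)).astype(float)
# Synthetic "next-year cost": a linear part plus one interaction a linear model misses
cost = X @ rng.exponential(100.0, size=n_codes) + 500.0 * X[:, 0] * X[:, 1]
cost += rng.normal(0.0, 50.0, size=n_patients)

X_tr, X_te, y_tr, y_te = train_test_split(X, cost, test_size=0.2, random_state=0)

ridge = Ridge(alpha=1.0).fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0).fit(X_tr, y_tr)

print("Ridge MAE:", round(mean_absolute_error(y_te, ridge.predict(X_te)), 1))
print("MLP   MAE:", round(mean_absolute_error(y_te, mlp.predict(X_te)), 1))
```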
Collapse
Affiliation(s)
- Philipp Drewe-Boss
- Berlin Institute for Medical Systems Biology, Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Robert-Rössle-Strasse 10, 13125, Berlin, Germany.
| | - Dirk Enders
- Institute for Applied Health Research (InGef), Spittelmarkt 12, 10117, Berlin, Germany
| | - Jochen Walker
- Institute for Applied Health Research (InGef), Spittelmarkt 12, 10117, Berlin, Germany
| | - Uwe Ohler
- Berlin Institute for Medical Systems Biology, Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Robert-Rössle-Strasse 10, 13125, Berlin, Germany
| |
Collapse
|
130
|
Immonen E, Wong J, Nieminen M, Kekkonen L, Roine S, Törnroos S, Lanca L, Guan F, Metsälä E. The use of deep learning towards dose optimization in low-dose computed tomography: A scoping review. Radiography (Lond) 2022; 28:208-214. [PMID: 34325998 DOI: 10.1016/j.radi.2021.07.010] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Revised: 06/10/2021] [Accepted: 07/09/2021] [Indexed: 11/21/2022]
Abstract
INTRODUCTION Low-dose computed tomography tends to produce lower image quality than normal-dose computed tomography (CT), although it can help to reduce the radiation hazards of CT scanning. Research has shown that Artificial Intelligence (AI) technologies, especially deep learning, can help enhance the image quality of low-dose CT by denoising images. This scoping review aims to create an overview of how AI technologies, especially deep learning, can be used in dose optimisation for low-dose CT. METHODS Literature searches of ProQuest, PubMed, Cinahl, ScienceDirect, EbscoHost Ebook Collection and Ovid were carried out to find research articles published between the years 2015 and 2020. In addition, a manual search was conducted in SweMed+, SwePub, NORA, Taylor & Francis Online and Medic. RESULTS Following a systematic search process, the review comprised 16 articles. Articles were organised according to the effects of the deep learning networks, e.g. image noise reduction and image restoration. Deep learning can be used in multiple ways to facilitate dose optimisation in low-dose CT. Most articles discuss image noise reduction in low-dose CT. CONCLUSION Deep learning can be used in the optimisation of patients' radiation dose. Nevertheless, image quality is normally lower in low-dose CT (LDCT) than in regular-dose CT scans because of the smaller radiation doses. With the help of deep learning, the image quality can be improved to match that of regular-dose CT. IMPLICATIONS TO PRACTICE A lower dose may decrease patients' radiation risk but may affect the image quality of CT scans. Artificial intelligence technologies can be used to improve image quality in low-dose CT scans. Radiologists and radiographers should have proper education and knowledge about the techniques used.
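As a deliberately minimal illustration of the denoising networks this review discusses, the PyTorch sketch below defines a small residual CNN that predicts the noise in a slice and subtracts it. The architecture, layer sizes, and the random input tensor are illustrative assumptions, not any of the reviewed models.

```python
import torch
import torch.nn as nn

class SimpleDenoiser(nn.Module):
    """Tiny residual CNN: predicts the noise in a slice and subtracts it."""
    def __init__(self, channels: int = 32, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: output = input - predicted noise
        return x - self.net(x)

model = SimpleDenoiser()
low_dose_slice = torch.randn(1, 1, 128, 128)   # stand-in for a noisy low-dose CT slice
denoised = model(low_dose_slice)
print(denoised.shape)                           # torch.Size([1, 1, 128, 128])
```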
Collapse
Affiliation(s)
- E Immonen
- Metropolia University of Applied Sciences, Finland.
| | - J Wong
- Singapore Institute of Technology (SIT), Singapore.
| | - M Nieminen
- Metropolia University of Applied Sciences, Finland.
| | - L Kekkonen
- Metropolia University of Applied Sciences, Finland.
| | - S Roine
- Metropolia University of Applied Sciences, Finland.
| | - S Törnroos
- Metropolia University of Applied Sciences, Finland.
| | - L Lanca
- Singapore Institute of Technology (SIT), Singapore.
| | - F Guan
- Singapore Institute of Technology (SIT), Singapore.
| | - E Metsälä
- Metropolia University of Applied Sciences, Finland.
| |
Collapse
|
131
|
Mutasa S, Yi PH. Deciphering musculoskeletal artificial intelligence for clinical applications: how do I get started? Skeletal Radiol 2022; 51:271-278. [PMID: 34191083 DOI: 10.1007/s00256-021-03850-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Revised: 06/09/2021] [Accepted: 06/21/2021] [Indexed: 02/02/2023]
Abstract
Artificial intelligence (AI) represents a broad category of algorithms, of which deep learning is currently the most impactful. Clinical musculoskeletal radiologists who set out to build the fundamental knowledge base needed to decipher machine learning research and algorithms currently have few resources to turn to. In this article, we provide an introduction to the essential terminology, guidance on how to make sense of data splits and regularization, an introduction to the statistical analyses used in AI research, a primer on what deep learning can and cannot do, and a brief overview of clinical integration methods. Our goal is to improve the readers' understanding of this field.
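For readers who want a hands-on anchor for two of the concepts the primer covers, the Python sketch below shows a train/validation/test split and tuning of an L2 regularization strength on the validation set only. The features, split proportions, and model are illustrative assumptions, not the article's recommendations.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))            # hypothetical imaging-derived features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=600) > 0).astype(int)

# 60/20/20 split: the validation set tunes hyperparameters, the test set stays untouched
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

best_C, best_acc = None, -1.0
for C in [0.01, 0.1, 1.0, 10.0]:          # smaller C = stronger L2 regularization
    clf = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    acc = clf.score(X_val, y_val)
    if acc > best_acc:
        best_C, best_acc = C, acc

final = LogisticRegression(C=best_C, max_iter=1000).fit(X_train, y_train)
print(f"Chosen C = {best_C}, held-out test accuracy = {final.score(X_test, y_test):.3f}")
```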
Collapse
Affiliation(s)
- Simukayi Mutasa
- The Center of Artificial Intelligence in Medical Imaging, Division of Musculoskeletal Imaging, The University of California At Irvine, 101 The City Dr S, Orange, CA, 92868, USA.
| | - Paul H Yi
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland Intelligent Imaging Center, University of Maryland School of Medicine, 22 South Greene Street, First Floor, Baltimore, MD, 21201, USA
| |
Collapse
|
132
|
Zopfs D, Laukamp K, Reimer R, Grosse Hokamp N, Kabbasch C, Borggrefe J, Pennig L, Bunck AC, Schlamann M, Lennartz S. Automated Color-Coding of Lesion Changes in Contrast-Enhanced 3D T1-Weighted Sequences for MRI Follow-up of Brain Metastases. AJNR Am J Neuroradiol 2022; 43:188-194. [PMID: 34992128 PMCID: PMC8985679 DOI: 10.3174/ajnr.a7380] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Accepted: 10/06/2021] [Indexed: 02/03/2023]
Abstract
BACKGROUND AND PURPOSE MR imaging is the technique of choice for follow-up of patients with brain metastases, yet the radiologic assessment is often tedious and error-prone, especially in examinations with multiple metastases or subtle changes. This study aimed to determine whether using automated color-coding improves the radiologic assessment of brain metastases compared with conventional reading. MATERIALS AND METHODS One hundred twenty-one pairs of follow-up examinations of patients with brain metastases were assessed. Two radiologists determined the presence of progression, regression, mixed changes, or stable disease between the follow-up examinations and indicated subjective diagnostic certainty regarding their decisions in a conventional reading and a second reading using automated color-coding after an interval of 8 weeks. RESULTS The rate of correctly classified diagnoses was higher (91.3%, 221/242, versus 74.0%, 179/242, P < .01) when using automated color-coding, and the median Likert score for diagnostic certainty improved from 2 (interquartile range, 2-3) to 4 (interquartile range, 3-5) (P < .05) compared with the conventional reading. Interrater agreement was excellent (κ = 0.80; 95% CI, 0.71-0.89) with automated color-coding compared with a moderate agreement (κ = 0.46; 95% CI, 0.34-0.58) with the conventional reading approach. When considering the time required for image preprocessing, the overall average time for reading an examination was longer in the automated color-coding approach (91.5 [SD, 23.1] seconds versus 79.4 [SD, 34.7] seconds, P < .001). CONCLUSIONS Compared with the conventional reading, automated color-coding of lesion changes in follow-up examinations of patients with brain metastases significantly increased the rate of correct diagnoses and resulted in higher diagnostic certainty.
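The interrater agreement reported above is a Cohen's kappa; a minimal Python illustration of that statistic is shown below. The ratings are fabricated (not the study data) and the agreement rate is an arbitrary assumption, purely to show the computation.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
categories = np.array(["progression", "regression", "mixed", "stable"])

reader_1 = rng.choice(categories, size=121)
# Reader 2 agrees with reader 1 most of the time, otherwise picks a category at random
agree = rng.random(121) < 0.8
reader_2 = np.where(agree, reader_1, rng.choice(categories, size=121))

print("Cohen's kappa:", round(cohen_kappa_score(reader_1, reader_2), 2))
```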
Collapse
Affiliation(s)
- D Zopfs
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - K Laukamp
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - R Reimer
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - N Grosse Hokamp
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - C Kabbasch
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - J Borggrefe
- Department of Radiology (J.B.), Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
| | - L Pennig
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - A C Bunck
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - M Schlamann
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - S Lennartz
- From the Institute for Diagnostic and Interventional Radiology (D.Z., K.L., R.R., N.G.H., C.K., L.P., A.C.B., M.S., S.L.), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| |
Collapse
|
133
|
Kizildag Yirgin I, Koyluoglu YO, Seker ME, Ozkan Gurdal S, Ozaydin AN, Ozcinar B, Cabioğlu N, Ozmen V, Aribal E. Diagnostic Performance of AI for Cancers Registered in A Mammography Screening Program: A Retrospective Analysis. Technol Cancer Res Treat 2022; 21:15330338221075172. [PMID: 35060413 PMCID: PMC8796113 DOI: 10.1177/15330338221075172] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Purpose: To evaluate the performance of an artificial intelligence (AI) algorithm in a simulated screening setting and its effectiveness in detecting missed and interval cancers. Methods: Digital mammograms were collected from the Bahcesehir Mammographic Screening Program, which is the first organized, population-based, 10-year (2009-2019) screening program in Turkey. In total, 211 mammograms were extracted from the archive of the screening program in this retrospective study. One hundred ten of them were diagnosed as breast cancer (74 screen-detected, 27 interval, 9 missed), and 101 were negative mammograms with follow-up of at least 24 months. Cancer detection rates of radiologists in the screening program were compared with those of an AI system. Three different mammography assessment methods were used: (1) assessment by 2 radiologists at the screening center, (2) AI assessment based on the established risk score threshold, and (3) a hypothetical radiologist-AI team-up in which AI was considered to be the third reader. Results: The area under the curve was 0.853 (95% CI = 0.801-0.905), and the cut-off value for the risk score was 34.5%, with a sensitivity of 72.8% and a specificity of 88.3% for AI cancer detection in ROC analysis. Cancer detection rates were 67.3% for radiologists, 72.7% for AI, and 83.6% for the radiologist and AI team-up. AI detected 72.7% of all cancers on its own, of which 77.5% were screen-detected, 15% were interval cancers, and 7.5% were missed cancers. Conclusion: AI may potentially enhance the capacity of breast cancer screening programs by increasing cancer detection rates and decreasing false-negative evaluations.
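The ROC-based choice of a risk-score cut-off can be reproduced generically. The sketch below uses scikit-learn's roc_curve and Youden's J to pick a threshold and read off sensitivity and specificity; the score distributions are invented for illustration, not the Bahcesehir data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)
# Hypothetical AI risk scores (%): cancers tend to score higher than negatives
scores_cancer = np.clip(rng.normal(70, 20, size=110), 0, 100)
scores_negative = np.clip(rng.normal(20, 15, size=101), 0, 100)

y_true = np.r_[np.ones(110), np.zeros(101)]
y_score = np.r_[scores_cancer, scores_negative]

fpr, tpr, thresholds = roc_curve(y_true, y_score)
best = (tpr - fpr).argmax()               # Youden's J selects the working cut-off

print(f"AUC: {roc_auc_score(y_true, y_score):.3f}")
print(f"Cut-off score: {thresholds[best]:.1f}%")
print(f"Sensitivity: {tpr[best]:.3f}, Specificity: {1 - fpr[best]:.3f}")
```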
Collapse
Affiliation(s)
| | | | | | | | | | - Beyza Ozcinar
- Istanbul University, School of Medicine, Istanbul, Turkey
| | | | - Vahit Ozmen
- Istanbul University, School of Medicine, Istanbul, Turkey
| | - Erkin Aribal
- Acibadem M.A.A University School of Medicine, Istanbul, Turkey
| |
Collapse
|
134
|
Karantanas AH, Efremidis S. The concept of the invisible radiologist in the era of artificial intelligence. Eur J Radiol 2022; 155:110147. [PMID: 35000823 DOI: 10.1016/j.ejrad.2021.110147] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Revised: 12/03/2021] [Accepted: 12/30/2021] [Indexed: 12/12/2022]
Abstract
Radiologists traditionally worked in the background. What upgraded them as physicians during the second half of the past century was their clinical training and function, precipitated by the evolution of Interventional Radiology and Medical Imaging, especially ultrasonography. These allowed them to participate in patients' diagnosis and treatment through direct contact as well as via multidisciplinary medical consultations. The wide application of teleradiology and PACS pushed radiologists into the background again, which is no longer acceptable, especially in view of the impressive applications of artificial intelligence (AI) in Radiology. It is our belief that clinical radiologists have to be able to control the penetration of AI into Radiology, securing their work for the benefit of both clinicians and patients.
Collapse
Affiliation(s)
- Apostolos H Karantanas
- Department of Radiology, Medical School, University of Crete, 71110 Heraklion, Greece; Department of Medical Imaging, University Hospital, 71110 Heraklion, Greece; Foundation for Research and Technology Hellas (FORTH), Computational Biomedicine Laboratory (CBML) - Hybrid Imaging, 70013 Heraklion, Greece.
| | - Stavros Efremidis
- Prof. Emeritus, Department of Radiology, University of Ioannina, 45110 Ioannina, Greece
| |
Collapse
|
135
|
Abuzaid MM, Elshami W, Tekin H, Issa B. Assessment of the Willingness of Radiologists and Radiographers to Accept the Integration of Artificial Intelligence Into Radiology Practice. Acad Radiol 2022; 29:87-94. [PMID: 33129659 DOI: 10.1016/j.acra.2020.09.014] [Citation(s) in RCA: 61] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 09/13/2020] [Accepted: 09/16/2020] [Indexed: 12/11/2022]
Abstract
RATIONALE AND OBJECTIVES This study aimed to investigate radiologists' and radiographers' knowledge, perception, readiness, and challenges regarding Artificial Intelligence (AI) integration into radiology practice. MATERIALS AND METHODS An electronically distributed cross-sectional study was conducted among radiologists and radiographers in the United Arab Emirates. The questionnaire captured the participants' demographics, qualifications, professional experience, and postgraduate training. Their knowledge, perception, organisational readiness, and challenges regarding AI integration into radiology were examined. RESULTS There was a significant lack of knowledge and appreciation of the integration of AI into radiology practice. Organisations are taking steps toward building AI implementation strategies. The availability of appropriate training courses is the main challenge for both radiographers and radiologists. CONCLUSION The excitement about AI implementation in radiology practice was accompanied by a lack of the knowledge and effort required to improve users' appreciation of AI. The knowledge gap requires collaboration between educational institutes and professional bodies to develop structured training programs for radiologists and radiographers.
Collapse
Affiliation(s)
- Mohamed M Abuzaid
- Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, UAE.
| | - Wiam Elshami
- Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, UAE
| | - Huseyin Tekin
- Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, UAE
| | - Bashar Issa
- Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, UAE
| |
Collapse
|
136
|
Challenges of Radiology education in the era of artificial intelligence. RADIOLOGIA 2022; 64:54-59. [DOI: 10.1016/j.rxeng.2020.10.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2020] [Accepted: 10/02/2020] [Indexed: 11/29/2022]
|
137
|
Charow R, Jeyakumar T, Younus S, Dolatabadi E, Salhia M, Al-Mouaswas D, Anderson M, Balakumar S, Clare M, Dhalla A, Gillan C, Haghzare S, Jackson E, Lalani N, Mattson J, Peteanu W, Tripp T, Waldorf J, Williams S, Tavares W, Wiljer D. Artificial Intelligence Education Programs for Health Care Professionals: Scoping Review. JMIR MEDICAL EDUCATION 2021; 7:e31043. [PMID: 34898458 PMCID: PMC8713099 DOI: 10.2196/31043] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 10/04/2021] [Accepted: 10/04/2021] [Indexed: 05/12/2023]
Abstract
BACKGROUND As the adoption of artificial intelligence (AI) in health care increases, it will become increasingly crucial to involve health care professionals (HCPs) in developing, validating, and implementing AI-enabled technologies. However, because of a lack of AI literacy, most HCPs are not adequately prepared for this revolution. This is a significant barrier to the adoption and implementation of AI, which will ultimately affect patients. In addition, the limited existing AI education programs face barriers to development and implementation at various levels of medical education. OBJECTIVE With a view to informing future AI education programs for HCPs, this scoping review aims to provide an overview of current and past AI education programs with respect to their curricular content, modes of delivery, critical implementation factors for education delivery, and the outcomes used to assess the programs' effectiveness. METHODS After the creation of a search strategy and keyword searches, a 2-stage screening process was conducted by 2 independent reviewers to determine study eligibility. When consensus was not reached, the conflict was resolved by consulting a third reviewer. This process consisted of a title and abstract scan and a full-text review. Articles were included if they discussed an actual training program or educational intervention, or a potential program and the desired content to be covered; focused on AI; and were designed or intended for HCPs (at any stage of their career). RESULTS Of the 10,094 unique citations scanned, 41 (0.41%) studies relevant to our eligibility criteria were identified. Among the 41 included studies, 10 (24%) described 13 unique programs and 31 (76%) discussed recommended curricular content. The curricular content of the unique programs ranged from AI use and AI interpretation to cultivating the skills needed to explain results derived from AI algorithms. The curricular topics were categorized into three main domains: cognitive, psychomotor, and affective. CONCLUSIONS This review provides an overview of the current landscape of AI in medical education and highlights the skills and competencies required by HCPs to effectively use AI in enhancing the quality of care and optimizing patient outcomes. Future education efforts should focus on the development of regulatory strategies, a multidisciplinary approach to curriculum redesign, a competency-based curriculum, and patient-clinician interaction.
Collapse
Affiliation(s)
- Rebecca Charow
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- University Health Network, Toronto, ON, Canada
| | | | | | - Elham Dolatabadi
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Vector Institute, Toronto, ON, Canada
| | - Mohammad Salhia
- Michener Institute of Education, University Health Network, Toronto, ON, Canada
| | - Dalia Al-Mouaswas
- Michener Institute of Education, University Health Network, Toronto, ON, Canada
| | | | - Sarmini Balakumar
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Michener Institute of Education, University Health Network, Toronto, ON, Canada
| | - Megan Clare
- Michener Institute of Education, University Health Network, Toronto, ON, Canada
| | | | - Caitlin Gillan
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- University Health Network, Toronto, ON, Canada
- Faculty of Medicine, University of Toronto, Toronto, ON, Canada
| | - Shabnam Haghzare
- University Health Network, Toronto, ON, Canada
- Vector Institute, Toronto, ON, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
| | | | | | - Jane Mattson
- Michener Institute of Education, University Health Network, Toronto, ON, Canada
| | - Wanda Peteanu
- Michener Institute of Education, University Health Network, Toronto, ON, Canada
| | - Tim Tripp
- University Health Network, Toronto, ON, Canada
| | - Jacqueline Waldorf
- Michener Institute of Education, University Health Network, Toronto, ON, Canada
| | | | - Walter Tavares
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- University Health Network, Toronto, ON, Canada
- Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Wilson Centre, Toronto, ON, Canada
| | - David Wiljer
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- University Health Network, Toronto, ON, Canada
- Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- CAMH Education, Centre for Addictions and Mental Health (CAMH), Toronto, ON, Canada
| |
Collapse
|
138
|
Mansour S, Kamal R, Hashem L, AlKalaawy B. Can artificial intelligence replace ultrasound as a complementary tool to mammogram for the diagnosis of the breast cancer? Br J Radiol 2021; 94:20210820. [PMID: 34613796 PMCID: PMC8631011 DOI: 10.1259/bjr.20210820] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 08/17/2021] [Accepted: 09/21/2021] [Indexed: 01/29/2023] Open
Abstract
OBJECTIVE To study the impact of artificial intelligence (AI) on the performance of mammography in classifying detected breast lesions, in correlation with ultrasound-aided mammograms. METHODS Ethics committee approval was obtained for this prospective analysis. The study included 2000 mammograms. The mammograms were interpreted by the radiologists, and breast ultrasound was performed in all cases. The Breast Imaging Reporting and Data System (BI-RADS) score was applied to the combined evaluation of the mammographic and ultrasound modalities. Each breast side was individually assessed with the aid of AI scanning in the form of a targeted heat map, and a probability of malignancy (abnormality scoring percentage) was obtained. Operative and histopathology data were the standard of reference. RESULTS Cases assigned as normal (BI-RADS 1), with no lesions, were excluded from the statistical evaluation. The study included 538 benign and 642 malignant breast lesions (n = 1180, 59%). On the combined evaluation of digital mammography and ultrasound, BI-RADS 2 (benign) was assigned in 385 lesions, which had a median AI abnormality scoring percentage of 10 (n = 385/1180, 32.6%), and BI-RADS 5 (malignant) in 471 lesions, which had a median AI value of 88 (n = 471/1180, 39.9%). An AI abnormality score of 59% yielded a sensitivity of 96.8% and a specificity of 90.1% in the discrimination of the breast lesions detected on the included mammograms. CONCLUSION AI could be considered an optional, reliable complementary tool to digital mammography for the evaluation of breast lesions. The color hue and the abnormality scoring percentage presented a credible method for the detection and discrimination of breast cancer, with accuracy approaching that of breast ultrasound. Consequently, the AI-mammogram combination could be used as a single-setting method to distinguish cases that require further imaging or biopsy from those that need only interval follow-up. ADVANCES IN KNOWLEDGE The involvement of AI in the work-up of breast cancer has recently attracted attention, with AI noted mainly as a screening strategy for the detection of breast cancer. In the current work, the performance of AI was studied with regard to the diagnosis, not just the detection, of breast cancer in mammographically detected breast lesions. The evaluation considered AI as a possible complementary reading tool to mammography and included the qualitative assessment of the color hue and the quantitative integration of the abnormality scoring percentage.
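Applying a fixed abnormality-score threshold and reading off sensitivity and specificity is a simple confusion-matrix exercise. The Python sketch below uses invented score distributions and an assumed 59% cut-off purely to show the arithmetic; it is not the study's data or software.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical abnormality scores (%) for malignant and benign lesions
scores_malignant = np.clip(rng.normal(85, 15, size=642), 0, 100)
scores_benign = np.clip(rng.normal(25, 20, size=538), 0, 100)

threshold = 59.0                     # assumed cut-off, as in the abstract above
tp = (scores_malignant >= threshold).sum()   # true positives
fn = (scores_malignant < threshold).sum()    # false negatives
tn = (scores_benign < threshold).sum()       # true negatives
fp = (scores_benign >= threshold).sum()      # false positives

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"Sensitivity: {sensitivity:.3f}, Specificity: {specificity:.3f}")
```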
Collapse
|
139
|
de Boer B, Kudina O. What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms. THEORETICAL MEDICINE AND BIOETHICS 2021; 42:245-266. [PMID: 34978638 PMCID: PMC8907081 DOI: 10.1007/s11017-021-09553-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 11/26/2021] [Indexed: 05/05/2023]
Abstract
In this paper, we examine the qualitative moral impact of machine learning-based clinical decision support systems in the process of medical diagnosis. To date, discussions about machine learning in this context have focused on problems that can be measured and assessed quantitatively, such as by estimating the extent of potential harm or calculating incurred risks. We maintain that such discussions neglect the qualitative moral impact of these technologies. Drawing on the philosophical approaches of technomoral change and technological mediation theory, which explore the interplay between technologies and morality, we present an analysis of concerns related to the adoption of machine learning-aided medical diagnosis. We analyze anticipated moral issues that machine learning systems pose for different stakeholders, such as bias and opacity in the way that models are trained to produce diagnoses, changes to how health care providers, patients, and developers understand their roles and professions, and challenges to existing forms of medical legislation. Albeit preliminary in nature, the insights offered by the technomoral change and the technological mediation approaches expand and enrich the current discussion about machine learning in diagnostic practices, bringing distinct and currently underexplored areas of concern to the forefront. These insights can contribute to a more encompassing and better informed decision-making process when adapting machine learning techniques to medical diagnosis, while acknowledging the interests of multiple stakeholders and the active role that technologies play in generating, perpetuating, and modifying ethical concerns in health care.
Collapse
Affiliation(s)
- Bas de Boer
- University of Twente, Enschede, Netherlands.
| | - Olya Kudina
- Technische Universiteit Delft, Delft, Netherlands
| |
Collapse
|
140
|
Lindqwister AL, Hassanpour S, Lewis PJ, Sin JM. AI-RADS: An Artificial Intelligence Curriculum for Residents. Acad Radiol 2021; 28:1810-1816. [PMID: 33071185 PMCID: PMC7563580 DOI: 10.1016/j.acra.2020.09.017] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2020] [Revised: 08/27/2020] [Accepted: 09/20/2020] [Indexed: 12/12/2022]
Abstract
Rationale and Objectives Artificial intelligence (AI) has rapidly emerged as a field poised to affect nearly every aspect of medicine, especially radiology. A PubMed search for the terms "artificial intelligence radiology" demonstrates an exponential increase in publications on this topic in recent years. Despite these impending changes, medical education designed for future radiologists has only recently begun. We present our institution's efforts to address this problem as a model for a successful introductory curriculum in artificial intelligence in radiology, titled AI-RADS. Materials and Methods The course was based on a sequence of foundational algorithms in AI; these algorithms were presented as logical extensions of each other and were introduced through familiar examples (spam filters, movie recommendations, etc.). Since most trainees enter residency without computational backgrounds, secondary lessons, such as pixel mathematics, were integrated into this progression. Didactic sessions were reinforced with a concurrent journal club highlighting the algorithm discussed in the previous lecture. To circumvent often intimidating technical descriptions, study guides for these papers were produced. Questionnaires were administered before and after each lecture to assess confidence in the material. Surveys were also submitted at each journal club assessing learner preparedness and the appropriateness of the article. Results The course received a 9.8/10 rating from residents for overall satisfaction. With the exception of the final lecture, there were significant increases in learner confidence in reading journal articles on AI after each lecture. Residents demonstrated significant increases in perceived understanding of foundational concepts in artificial intelligence across all mastery questions for every lecture. Conclusion The success of our institution's pilot AI-RADS course demonstrates a workable model for including AI in resident education.
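The pre/post confidence comparison described above is a paired analysis. The Python sketch below illustrates one common choice, a Wilcoxon signed-rank test, on fabricated Likert-style ratings; the data, sample size, and test choice are assumptions for illustration, not the course's survey analysis.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
n_residents = 20

# Hypothetical 1-10 confidence ratings before and after a single lecture
pre = rng.integers(2, 6, size=n_residents)
post = np.clip(pre + rng.integers(0, 4, size=n_residents), 1, 10)

stat, p_value = wilcoxon(pre, post)   # paired, nonparametric comparison
print(f"Median pre: {np.median(pre)}, median post: {np.median(post)}, p = {p_value:.4f}")
```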
Collapse
Affiliation(s)
| | - Saeed Hassanpour
- Dartmouth College, Williamson Translational Research, Lebanon, New Hampshire
| | - Petra J Lewis
- Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
| | - Jessica M Sin
- Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
| |
Collapse
|
141
|
Esmaeilzadeh P, Mirzaei T, Dharanikota S. Patients' Perceptions Toward Human-Artificial Intelligence Interaction in Health Care: Experimental Study. J Med Internet Res 2021; 23:e25856. [PMID: 34842535 PMCID: PMC8663518 DOI: 10.2196/25856] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Revised: 05/04/2021] [Accepted: 10/26/2021] [Indexed: 12/24/2022] Open
Abstract
Background It is believed that artificial intelligence (AI) will be an integral part of health care services in the near future and will be incorporated into several aspects of clinical care such as prognosis, diagnostics, and care planning. Thus, many technology companies have invested in producing AI clinical applications. Patients are one of the most important beneficiaries who potentially interact with these technologies and applications; thus, patients' perceptions may affect the widespread use of clinical AI. Patients should be assured that AI clinical applications will not harm them, and that they will instead benefit from using AI technology for health care purposes. Although human-AI interaction can enhance health care outcomes, possible dimensions of concerns and risks should be addressed before its integration with routine clinical care. Objective The main objective of this study was to examine how potential users (patients) perceive the benefits, risks, and use of AI clinical applications for their health care purposes and how their perceptions may differ when faced with three health care service encounter scenarios. Methods We designed a 2×3 experiment that crossed the type of health condition (ie, acute or chronic) with three different types of clinical encounters between patients and physicians (ie, AI clinical applications as substituting technology, AI clinical applications as augmenting technology, and no AI as a traditional in-person visit). We used an online survey to collect data from 634 individuals in the United States. Results The interactions between the types of health care service encounters and health conditions significantly influenced individuals' perceptions of privacy concerns, trust issues, communication barriers, concerns about transparency in regulatory standards, liability risks, benefits, and intention to use across the six scenarios. We found no significant differences among scenarios regarding perceptions of performance risk and social biases. Conclusions The results imply that incompatibility with instrumental, technical, ethical, or regulatory values can be a reason for rejecting AI applications in health care. Thus, there are still various risks associated with implementing AI applications in diagnostics and treatment recommendations for patients with both acute and chronic illnesses. The concerns are also evident if the AI applications are used as a recommendation system under physicians' experience, wisdom, and control. Prior to the widespread rollout of AI, more studies are needed to identify the challenges that may raise concerns for implementing and using AI applications. This study could provide researchers and managers with critical insights into the determinants of individuals' intention to use AI clinical applications. Regulatory agencies should establish normative standards and evaluation guidelines for implementing AI in health care in cooperation with health care institutions. Regular audits and ongoing monitoring and reporting systems can be used to continuously evaluate the safety, quality, transparency, and ethical factors of AI clinical applications.
Collapse
Affiliation(s)
- Pouyan Esmaeilzadeh
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States
| | - Tala Mirzaei
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States
| | - Spurthy Dharanikota
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States
| |
Collapse
|
142
|
Abstract
Artificial intelligence (AI) algorithms, particularly deep learning, have developed to the point that they can be applied in image recognition tasks. The use of AI in medical imaging can guide radiologists to more accurate image interpretation and diagnosis in radiology. Such software can provide data that we cannot otherwise extract from the images. The rapid development of computational capabilities supports the wide application of AI in a range of cancers, including widespread applications in head and neck cancer.
Collapse
|
143
|
Bélisle-Pipon JC, Couture V, Roy MC, Ganache I, Goetghebeur M, Cohen IG. What Makes Artificial Intelligence Exceptional in Health Technology Assessment? Front Artif Intell 2021; 4:736697. [PMID: 34796318 PMCID: PMC8594317 DOI: 10.3389/frai.2021.736697] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Accepted: 09/23/2021] [Indexed: 12/20/2022] Open
Abstract
The application of artificial intelligence (AI) may revolutionize the healthcare system, enhancing efficiency by automating routine tasks, decreasing health-related costs, broadening access to healthcare delivery, targeting patient needs more precisely, and assisting clinicians in their decision-making. For these benefits to materialize, governments and health authorities must regulate AI and conduct appropriate health technology assessment (HTA). Many authors have highlighted that AI health technologies (AIHT) challenge traditional evaluation and regulatory processes. To inform and support HTA organizations and regulators in adapting their processes to AIHTs, we conducted a systematic review of the literature on the challenges posed by AIHTs in HTA and health regulation. Our research question was: What makes artificial intelligence exceptional in HTA? The current body of literature appears to portray AIHTs as being exceptional to HTA. This exceptionalism is expressed along 5 dimensions: 1) AIHTs' distinctive features; 2) their systemic impacts on health care and the health sector; 3) the increased expectations towards AI in health; 4) the new ethical, social and legal challenges that arise from deploying AI in the health sector; and 5) the new evaluative constraints that AI poses to HTA. Thus, AIHTs are perceived as exceptional because of their technological characteristics and potential impacts on society at large. As AI implementation by governments and health organizations carries risks of generating new, and amplifying existing, challenges, there are strong arguments for taking into consideration the exceptional aspects of AIHTs, especially as their impacts on the healthcare system will be far greater than those of drugs and medical devices. As AIHTs begin to be increasingly introduced into the health care sector, there is a window of opportunity for HTA agencies and scholars to consider AIHTs' exceptionalism and to work towards deploying only clinically, economically, and socially acceptable AIHTs in the health care system.
Collapse
Affiliation(s)
| | | | | | - Isabelle Ganache
- Institut National D’Excellence en Santé et en Services Sociaux (INESSS), Montréal, Québec, QC, Canada
| | - Mireille Goetghebeur
- Institut National D’Excellence en Santé et en Services Sociaux (INESSS), Montréal, Québec, QC, Canada
| | | |
Collapse
|
144
|
Harvey HB, Gowda V. Regulatory Issues and Challenges to Artificial Intelligence Adoption. Radiol Clin North Am 2021; 59:1075-1083. [PMID: 34689875 DOI: 10.1016/j.rcl.2021.07.007] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Artificial intelligence technology promises to redefine the practice of radiology. However, it exists in a nascent phase and remains largely untested in the clinical space. This nature is both a cause and consequence of the uncertain legal-regulatory environment it enters. This discussion aims to shed light on these challenges, tracing the various pathways toward approval by the US Food and Drug Administration, the future of government oversight, privacy issues, ethical dilemmas, and practical considerations related to implementation in radiologist practice.
Collapse
Affiliation(s)
- Harlan Benjamin Harvey
- Radiology, Massachusetts General Hospital, Harvard Medical School, 175 Cambridge Street, Suite 200, Boston, MA 02114, USA.
| | - Vrushab Gowda
- Harvard Law School, 1563 Massachusetts Avenue, Cambridge, MA 02138, USA
| |
Collapse
|
145
|
Li D, Yi PH. Artificial Intelligence in Radiology: A Canadian Environmental Scan. Can Assoc Radiol J 2021; 73:428-429. [PMID: 34569300 DOI: 10.1177/08465371211038940] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Affiliation(s)
- David Li
- University of Ottawa Faculty of Medicine, Ottawa, Ontario, Canada
| | - Paul H Yi
- University of Maryland Intelligent Imaging (UMII) Center, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
| |
Collapse
|
146
|
Hwang EJ, Goo JM, Yoon SH, Beck KS, Seo JB, Choi BW, Chung MJ, Park CM, Jin KN, Lee SM. Use of Artificial Intelligence-Based Software as Medical Devices for Chest Radiography: A Position Paper from the Korean Society of Thoracic Radiology. Korean J Radiol 2021; 22:1743-1748. [PMID: 34564966 PMCID: PMC8546139 DOI: 10.3348/kjr.2021.0544] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Revised: 07/07/2021] [Accepted: 07/07/2021] [Indexed: 12/28/2022] Open
Affiliation(s)
- Eui Jin Hwang
- Department of Radiology, Seoul National University Hospital, Seoul, Korea.,Department of Radiology and Institution of Radiation Medicine, Seoul National University College of Medicine, Seoul, Korea
| | - Jin Mo Goo
- Department of Radiology, Seoul National University Hospital, Seoul, Korea.,Department of Radiology and Institution of Radiation Medicine, Seoul National University College of Medicine, Seoul, Korea.,Cancer Research Institute, Seoul National University, Seoul, Korea.
| | - Soon Ho Yoon
- Department of Radiology, Seoul National University Hospital, Seoul, Korea.,Department of Radiology and Institution of Radiation Medicine, Seoul National University College of Medicine, Seoul, Korea.,Department of Radiology, UMass Memorial Medical Center, Worcester, MA, USA
| | - Kyongmin Sarah Beck
- Department of Radiology, Seoul St Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
| | - Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
| | - Byoung Wook Choi
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
| | - Myung Jin Chung
- Department of Radiology and Medical AI Research Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
| | - Chang Min Park
- Department of Radiology, Seoul National University Hospital, Seoul, Korea.,Department of Radiology and Institution of Radiation Medicine, Seoul National University College of Medicine, Seoul, Korea
| | - Kwang Nam Jin
- Department of Radiology, Seoul Metropolitan Government-Seoul National University Boramae Medical Center, Seoul, Korea
| | - Sang Min Lee
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
| |
Collapse
|
147
|
Yousefi Nooraie R, Lyons PG, Baumann AA, Saboury B. Equitable Implementation of Artificial Intelligence in Medical Imaging: What Can be Learned from Implementation Science? PET Clin 2021; 16:643-653. [PMID: 34537134 DOI: 10.1016/j.cpet.2021.07.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Artificial intelligence (AI) has been rapidly adopted in various health care domains. Molecular imaging, accordingly, has seen growing academic and commercial interest in AI. Unprepared and inequitable implementation and scale-up of AI in health care may pose challenges. Implementation of AI, as a complex intervention, may face various barriers at the individual, interindividual, organizational, health system, and community levels. To address these barriers, recommendations have been developed to consider health equity as a critical lens to sensitize implementation, engage stakeholders in implementation and evaluation, recognize and incorporate the iterative nature of implementation, and integrate equity and implementation in early-stage AI research.
Affiliation(s)
- Reza Yousefi Nooraie
- Department of Public Health Sciences, University of Rochester School of Medicine and Dentistry, 265 Crittenden Blvd, Rochester, NY 14642, USA.
| | - Patrick G Lyons
- Department of Medicine, Division of Pulmonary and Critical Care Medicine, Washington University School of Medicine in St Louis, 660 South Euclid Avenue, MSC 8052-43-14, St. Louis, MO 63110-1010, USA; Healthcare Innovation Lab, BJC HealthCare, St Louis, MO, USA
| | - Ana A Baumann
- Brown School of Social Work, Washington University in St. Louis, 600 S. Taylor Ave, MSC:8100-0094-02, St. Louis, MO 63110, USA
| | - Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Baltimore, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
| |
|
148
|
Richardson ML, Garwood ER, Lee Y, Li MD, Lo HS, Nagaraju A, Nguyen XV, Probyn L, Rajiah P, Sin J, Wasnik AP, Xu K. Noninterpretive Uses of Artificial Intelligence in Radiology. Acad Radiol 2021; 28:1225-1235. [PMID: 32059956 DOI: 10.1016/j.acra.2020.01.012] [Citation(s) in RCA: 58] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2019] [Revised: 01/08/2020] [Accepted: 01/09/2020] [Indexed: 12/12/2022]
Abstract
We deem a computer to exhibit artificial intelligence (AI) when it performs a task that would normally require intelligent action by a human. Much of the recent excitement about AI in the medical literature has revolved around the ability of AI models to recognize anatomy and detect pathology on medical images, sometimes at the level of expert physicians. However, AI can also be used to solve a wide range of noninterpretive problems that are relevant to radiologists and their patients. This review summarizes some of the newer noninterpretive uses of AI in radiology.
Affiliation(s)
| | - Elisabeth R Garwood
- Department of Radiology, University of Massachusetts, Worcester, Massachusetts
| | - Yueh Lee
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina
| | - Matthew D Li
- Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Boston, Massachusetts
| | - Hao S Lo
- Department of Radiology, University of Washington, Seattle, Washington
| | - Arun Nagaraju
- Department of Radiology, University of Chicago, Chicago, Illinois
| | - Xuan V Nguyen
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio
| | - Linda Probyn
- Department of Radiology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario
| | - Prabhakar Rajiah
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Jessica Sin
- Department of Radiology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
| | - Ashish P Wasnik
- Department of Radiology, University of Michigan, Ann Arbor, Michigan
| | - Kali Xu
- Department of Medicine, Santa Clara Valley Medical Center, Santa Clara, California
| |
|
149
|
Malamateniou C, McFadden S, McQuinlan Y, England A, Woznitza N, Goldsworthy S, Currie C, Skelton E, Chu KY, Alware N, Matthews P, Hawkesford R, Tucker R, Town W, Matthew J, Kalinka C, O'Regan T. Artificial Intelligence: Guidance for clinical imaging and therapeutic radiography professionals, a summary by the Society of Radiographers AI working group. Radiography (Lond) 2021; 27:1192-1202. [PMID: 34420888 DOI: 10.1016/j.radi.2021.07.028] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Revised: 07/30/2021] [Accepted: 07/31/2021] [Indexed: 10/20/2022]
Abstract
INTRODUCTION Artificial intelligence (AI) is increasingly being adopted in medical imaging and radiotherapy clinical practice; however, research, education and partnerships have not yet caught up to facilitate a safe and effective transition. The aim of this document is to provide baseline guidance for radiographers working in the field of AI in education, research, clinical practice and stakeholder partnerships. The guideline is intended for use by the multi-professional clinical imaging and radiotherapy teams, including all staff, volunteers, students and learners. METHODS The format mirrored similar publications from other SCoR working groups in the past. The recommendations were subject to a rapid period of peer, professional and patient assessment and review. Feedback was sought from a range of SoR members and advisory groups, from the SoR director of professional policy, and from external experts. Amendments were then made in line with the feedback received and a final consensus was reached. RESULTS AI is an innovative tool that radiographers will need to engage with to ensure a safe and efficient clinical service in imaging and radiotherapy. Educational provision will need to be proportionately adjusted by Higher Education Institutions (HEIs) to offer the necessary knowledge, skills and competences to diagnostic and therapeutic radiographers, enabling them to navigate a future where AI will be central to patient diagnosis and treatment pathways. Radiography-led research in AI should address key clinical challenges and enable radiographers to co-design, implement and validate AI solutions. Partnerships are key to ensuring that the contribution of radiographers is integrated into healthcare AI ecosystems for the benefit of patients and service users. CONCLUSION Radiography is starting to work towards a future with AI-enabled healthcare. This guidance offers recommendations for different areas of radiography practice. There is a need to update educational curricula, rethink research priorities, and forge strong new clinical-academic-industry partnerships to optimise clinical practice. Specific recommendations relating to clinical practice, education, research and the forging of partnerships with key stakeholders are discussed, with potential impact on policy and practice in all these domains. These recommendations aim to serve as baseline guidance for UK radiographers. IMPLICATIONS FOR PRACTICE This review offers up-to-date recommendations for clinical practitioners, researchers, academics and service users of clinical imaging and therapeutic radiography services. Radiography practice, education and research must gradually adjust to AI-enabled healthcare systems to ensure that the gains of AI technologies are maximised and that challenges and risks are minimised. This guidance will need to be updated regularly given the fast pace of AI development and innovation.
Affiliation(s)
- C Malamateniou
- Department of Radiography, Division of Midwifery and Radiography, School of Health Sciences, City, University of London, Northampton Square, London, EC1V 0HB, UK; Perinatal Imaging and Health, King's College, London, UK.
| | - S McFadden
- School of Health Sciences, Ulster University, Belfast, Northern Ireland, BT37OQB, UK
| | - Y McQuinlan
- Mirada Medical, UK; Honorary Dosimetrist, Guy's and St Thomas' NHS Trust, UK
| | - A England
- School of Allied Health Professions, Keele University, Staffordshire, UK
| | - N Woznitza
- Radiology Department, University College London Hospitals, UK; School of Allied and Public Health Professions, Canterbury Christ Church University, UK
| | - S Goldsworthy
- Beacon Radiotherapy, Musgrove Park Hospital, Somerset NHS Foundation Trust, Taunton, TA1 5DA, UK
| | - C Currie
- Programme Lead MSc Diagnostic Imaging, Glasgow Caledonian University, UK; MRI Specialist Radiographer, Queen Elizabeth University Hospital, Glasgow, UK
| | - E Skelton
- Department of Radiography, Division of Midwifery and Radiography, School of Health Sciences, City, University of London, Northampton Square, London, EC1V 0HB, UK; Perinatal Imaging and Health, King's College, London, UK
| | - K-Y Chu
- Department of Oncology, University of Oxford, UK; Radiotherapy Department, Oxford University Hospitals, NHS FT, UK
| | - N Alware
- King George Hospital, BHRUT NHS Trust, London, UK
| | - P Matthews
- Diagnostic Imaging Department, Surrey & Sussex Healthcare NHS Trust, UK
| | | | - R Tucker
- School of Allied Health and Social Care, College of Health, Psychology and Social Care, University of Derby, UK; Radiology Department, Nottingham University Hospital NHS Trust, UK
| | - W Town
- Dartford and Gravesham NHS Trust, UK
| | - J Matthew
- Department of Radiography, Division of Midwifery and Radiography, School of Health Sciences, City, University of London, Northampton Square, London, EC1V 0HB, UK; School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
| | - C Kalinka
- Society and College of Radiographers, UK; Programme Manager, Strategic Programme Unit, NHS Collaborative, Wales, United Kingdom
| | - T O'Regan
- The Society and College of Radiographers, 207 Providence Square, Mill Street, London, UK
| |
|
150
|
Abuzaid MM, Tekin HO, Reza M, Elhag IR, Elshami W. Assessment of MRI technologists in acceptance and willingness to integrate artificial intelligence into practice. Radiography (Lond) 2021; 27 Suppl 1:S83-S87. [PMID: 34364784 DOI: 10.1016/j.radi.2021.07.007] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Revised: 06/21/2021] [Accepted: 07/01/2021] [Indexed: 11/26/2022]
Abstract
INTRODUCTION The integration of AI in medical imaging has shown tremendous growth, especially in image production, image processing and image interpretation. It is expected that radiographers working across all imaging modalities have adequate knowledge, as they are part of the end-user team. The current study aimed to investigate the knowledge, willingness and challenges facing Magnetic Resonance Imaging (MRI) technologists in the integration of Artificial Intelligence (AI) into MRI practice. METHODS A total of 120 participants were recruited using a snowball sampling technique. A two-phase study was undertaken using a survey and focus group discussion (FGD) to capture participants' knowledge, interpretations, needs and obstacles regarding AI integration in MRI practice. The survey and FGD provided the basis for understanding participants' knowledge, acceptance and needs for AI. RESULTS Results showed medium to high knowledge and excitement about AI integration, without perceived disruption of MRI practice. Participants thought that AI could improve MRI protocol selection (91.8%), reduce scan time (65.3%), and improve image post-processing (79.5%). Education and learning resources concerning AI were the main obstacles facing MRI technologists. CONCLUSION MRI technologists have the knowledge and possess basic technical information. The application of AI in MRI practice might greatly influence and improve MRI technologists' work. A structured and professional program should be integrated into both undergraduate and continuing education to prepare for effective AI implementation. IMPLICATIONS FOR PRACTICE AI can be applied in many aspects of MRI practice, such as optimizing image quality and avoiding image artifacts. Moreover, AI can play an important role in patient safety in the MRI unit by reducing incidents. Education, infrastructure, and end-user knowledge are key to the incorporation, development and optimisation of AI.
Affiliation(s)
- M M Abuzaid
- Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
| | - H O Tekin
- Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
| | - M Reza
- Sheikh Shakhbout Medical City, Radiology Department, Abu Dhabi, United Arab Emirates
| | - I R Elhag
- Sheikh Shakhbout Medical City, Radiology Department, Abu Dhabi, United Arab Emirates
| | - W Elshami
- Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates.
| |
|