1
Brima Y, Atemkeng M. Saliency-driven explainable deep learning in medical imaging: bridging visual explainability and statistical quantitative analysis. BioData Min 2024;17:18. PMID: 38909228; PMCID: PMC11193223; DOI: 10.1186/s13040-024-00370-4. Received 08/07/2023; accepted 06/10/2024.
Abstract
Deep learning shows great promise for medical image analysis but often lacks explainability, hindering its adoption in healthcare. Attribution techniques that explain model reasoning can potentially increase trust in deep learning among clinical stakeholders. In the literature, much of the research on attribution in medical imaging focuses on visual inspection rather than statistical quantitative analysis. In this paper, we propose an image-based saliency framework to enhance the explainability of deep learning models in medical image analysis. We use adaptive path-based gradient integration, gradient-free techniques, and class activation mapping along with its derivatives to attribute predictions made by recent deep convolutional neural network models on brain tumor MRI and COVID-19 chest X-ray datasets. The proposed framework integrates qualitative and statistical quantitative assessments, employing Accuracy Information Curves (AICs) and Softmax Information Curves (SICs) to measure the effectiveness of saliency methods in retaining critical image information and their correlation with model predictions. Visual inspections indicate that methods such as ScoreCAM, XRAI, GradCAM, and GradCAM++ consistently produce focused and clinically interpretable attribution maps. These methods highlighted possible biomarkers, exposed model biases, and offered insights into the links between input features and predictions, demonstrating their ability to elucidate model reasoning on these datasets. Empirical evaluations reveal that ScoreCAM and XRAI are particularly effective in retaining relevant image regions, as reflected in their higher AUC values. However, SICs highlight variability, with instances of random saliency masks outperforming established methods, emphasizing the need to combine visual and empirical metrics for a comprehensive evaluation. The results underscore the importance of selecting appropriate saliency methods for specific medical imaging tasks and suggest that combining qualitative and quantitative approaches can enhance the transparency, trustworthiness, and clinical adoption of deep learning models in healthcare. This study advances model explainability to increase trust in deep learning among healthcare stakeholders by revealing the rationale behind predictions. Future research should refine empirical metrics for stability and reliability, include more diverse imaging modalities, and focus on improving model explainability to support clinical decision-making.
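The class-activation-mapping family compared above (GradCAM, GradCAM++, ScoreCAM) shares one core step: combining the final convolutional layer's feature maps into a single heatmap. A minimal, illustrative NumPy sketch of that step follows; `feature_maps` and `channel_weights` are hypothetical inputs (in actual GradCAM the weights are spatially averaged gradients of the class score), not the authors' implementation.

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, channel_weights: np.ndarray) -> np.ndarray:
    """Combine CNN feature maps into a saliency heatmap, GradCAM style.

    feature_maps: (C, H, W) activations from the last conv layer.
    channel_weights: (C,) per-channel importance weights.
    """
    cam = np.tensordot(channel_weights, feature_maps, axes=1)  # (H, W) weighted sum
    cam = np.maximum(cam, 0.0)        # ReLU: keep only positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize to [0, 1] for display
    return cam
```

Upsampling the resulting low-resolution map to the input image size then yields the overlay heatmaps inspected in studies like this one.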
Affiliation(s)
- Yusuf Brima
- Computer Vision, Institute of Cognitive Science, Osnabrück University, Osnabrueck, D-49090, Lower Saxony, Germany
- Marcellin Atemkeng
- Department of Mathematics, Rhodes University, Grahamstown, 6140, Eastern Cape, South Africa
2
Dehdab R, Brendlin A, Werner S, Almansour H, Gassenmaier S, Brendel JM, Nikolaou K, Afat S. Evaluating ChatGPT-4V in chest CT diagnostics: a critical image interpretation assessment. Jpn J Radiol 2024 (online ahead of print). PMID: 38867035; DOI: 10.1007/s11604-024-01606-3. Received 04/02/2024; accepted 05/28/2024.
Abstract
PURPOSE To assess the diagnostic accuracy of ChatGPT-4V in interpreting a set of four chest CT slices for each case of COVID-19, non-small cell lung cancer (NSCLC), and control cases, thereby evaluating its potential as an AI tool in radiological diagnostics. MATERIALS AND METHODS In this retrospective study, 60 CT scans from The Cancer Imaging Archive, covering COVID-19, NSCLC, and control cases, were analyzed using ChatGPT-4V. A radiologist selected four CT slices from each scan for evaluation. ChatGPT-4V's interpretations were compared against the gold standard diagnoses and assessed by two radiologists. Statistical analyses focused on accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), along with an examination of the impact of pathology location and lobe involvement. RESULTS ChatGPT-4V showed an overall diagnostic accuracy of 56.76%. For NSCLC, sensitivity was 27.27% and specificity was 60.47%. In COVID-19 detection, sensitivity was 13.64% and specificity was 64.29%. For control cases, sensitivity was 31.82%, with a specificity of 95.24%. The highest sensitivity (83.33%) was observed in cases involving all lung lobes. Chi-squared analysis indicated significant differences in sensitivity across categories and in relation to the location and lobar involvement of pathologies. CONCLUSION ChatGPT-4V demonstrated variable diagnostic performance in chest CT interpretation, with notable proficiency in specific scenarios. This underscores the challenges of cross-modal AI models like ChatGPT-4V in radiology, pointing toward significant areas for improvement to ensure dependability. The study emphasizes the importance of enhancing these models for broader, more reliable medical use.
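The accuracy, sensitivity, specificity, PPV, and NPV reported above all derive from the same 2x2 confusion counts. As an illustrative helper (not the study's code), with hypothetical counts:

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic-test metrics from 2x2 confusion counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical example: 30 true positives, 10 false positives,
# 50 true negatives, 10 false negatives
m = diagnostic_metrics(tp=30, fp=10, tn=50, fn=10)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the evaluated cohort.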
Affiliation(s)
- Reza Dehdab, Andreas Brendlin, Sebastian Werner, Haidara Almansour, Sebastian Gassenmaier, Jan Michael Brendel, Konstantin Nikolaou, Saif Afat
- Department of Diagnostic and Interventional Radiology, Tuebingen University Hospital, Hoppe-Seyler-Straße 3, 72076, Tuebingen, Germany
3
Pape J, Hirsch FW, Deffaa OJ, DiFranco MD, Rosolowski M, Gräfe D. Applicability and robustness of an artificial intelligence-based assessment for Greulich and Pyle bone age in a German cohort. Fortschr Röntgenstr 2024;196:600-606. PMID: 38065542; DOI: 10.1055/a-2203-2997.
Abstract
PURPOSE The determination of bone age (BA) based on the hand and wrist, using the 70-year-old Greulich and Pyle (G&P) atlas, remains a widely employed practice in various institutions today. However, a more recent approach utilizing artificial intelligence (AI) enables automated BA estimation based on the G&P atlas. Nevertheless, AI-based methods encounter limitations when dealing with images that deviate from the standard hand and wrist projections. Generally, the extent to which BA, as determined by the G&P atlas, corresponds to the chronological age (CA) of a contemporary German population remains a subject of continued discourse. This study aims to address two main objectives. Firstly, it seeks to investigate whether the G&P atlas, as applied by the AI software, is still relevant for healthy children in Germany today. Secondly, the study aims to assess the performance of the AI software in handling non-strict posterior-anterior (p. a.) projections of the hand and wrist. MATERIALS AND METHODS The AI software retrospectively estimated the BA in children who had undergone radiographs of a single hand using posterior-anterior and oblique planes. The primary purpose was to rule out any osseous injuries. The prediction error of BA in relation to CA was calculated for each plane and between the two planes. RESULTS A total of 1253 patients (aged 3 to 16 years, median age 10.8 years, 55.7 % male) were included in the study. The average error of BA in posterior-anterior projections compared to CA was 3.0 (± 13.7) months for boys and 1.7 (± 13.7) months for girls. Interestingly, the deviation from CA tended to be even slightly lower in oblique projections than in posterior-anterior projections. The mean error in the posterior-anterior projection plane was 2.5 (± 13.7) months, while in the oblique plane it was 1.8 (± 13.9) months (p = 0.01). 
CONCLUSION The BA estimated by the AI software generally corresponds to the CA of the contemporary German population under study, although there is a noticeable prediction error, particularly in younger children. Notably, the software demonstrates robust performance on oblique projections. KEY POINTS · Bone age, as determined by artificial intelligence, aligns with the chronological age of the contemporary German cohort under study. · Bone age determination by artificial intelligence is remarkably robust, even when utilizing oblique X-ray projections. CITATION FORMAT · Pape J, Hirsch F, Deffaa O et al. Applicability and robustness of an artificial intelligence-based assessment for Greulich and Pyle bone age in a German cohort. Fortschr Röntgenstr 2024; 196: 600-606.
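The "mean (± SD)" error figures above summarize signed differences between estimated bone age and chronological age. A small sketch of that summary statistic, with hypothetical ages in months (not the study's data):

```python
def prediction_error_months(bone_age, chronological_age):
    """Mean signed error and sample standard deviation (in months) of
    estimated bone age (BA) relative to chronological age (CA)."""
    errors = [ba - ca for ba, ca in zip(bone_age, chronological_age)]
    n = len(errors)
    mean = sum(errors) / n
    # sample SD (n - 1 denominator), matching the usual "mean (± SD)" report
    sd = (sum((e - mean) ** 2 for e in errors) / (n - 1)) ** 0.5
    return mean, sd
```

A positive mean indicates systematic overestimation of bone age; the SD captures per-child scatter around that bias.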
Affiliation(s)
- Johanna Pape
- Pediatric Radiology, University Hospital Leipzig, Germany
- Maciej Rosolowski
- Institute for Medical Informatics, Statistics and Epidemiology, Leipzig University, Leipzig, Germany
- Daniel Gräfe
- Pediatric Radiology, University Hospital Leipzig, Germany
4
Zeng Y, Zhang X, Wang J, Usui A, Ichiji K, Bukovsky I, Chou S, Funayama M, Homma N. Inconsistency between human observation and deep learning models: assessing validity of postmortem computed tomography diagnosis of drowning. J Imaging Inform Med 2024;37:1-10. PMID: 38336949; PMCID: PMC11169324; DOI: 10.1007/s10278-024-00974-6. Received 08/28/2023; revised 10/18/2023; accepted 11/17/2023.
Abstract
Drowning diagnosis is a complicated process in the autopsy, even with the assistance of autopsy imaging and on-site information from where the body was found. Previous studies have developed well-performing deep learning (DL) models for drowning diagnosis. However, the validity of these DL models was not assessed, raising doubts about whether the learned features accurately represented the medical findings observed by human experts. In this paper, we assessed the medical validity of DL models that had achieved high classification performance for drowning diagnosis. This retrospective study included autopsy cases aged 8-91 years who underwent postmortem computed tomography between 2012 and 2021 (153 drowning and 160 non-drowning cases). We first trained three deep learning models from a previous work and generated saliency maps that highlight important features in the input. To assess the validity of the models, pixel-level annotations were created by four radiological technologists and quantitatively compared with the saliency maps. All three models demonstrated high classification performance, with areas under the receiver operating characteristic curve of 0.94, 0.97, and 0.98, respectively. On the other hand, the assessment revealed unexpected inconsistency between the annotations and the models' saliency maps: the three models had around 30%, 40%, and 80% irrelevant areas in their saliency maps, respectively, suggesting that the predictions of the DL models might be unreliable. These results call for careful validity assessment of DL tools, even those with high classification performance.
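The "irrelevant area" comparison above can be illustrated as the share of supra-threshold saliency falling outside the expert annotation. This is a hedged sketch of that idea, not the authors' exact metric; the threshold of 0.5 is an assumption.

```python
import numpy as np

def irrelevant_area_fraction(saliency: np.ndarray,
                             annotation: np.ndarray,
                             threshold: float = 0.5) -> float:
    """Share of supra-threshold saliency pixels that fall outside the
    expert-annotated region (0 = perfectly focused, 1 = fully off-target)."""
    salient = saliency >= threshold
    if salient.sum() == 0:
        return 0.0  # no salient pixels at all
    outside = salient & ~annotation.astype(bool)
    return float(outside.sum() / salient.sum())
```

A model can thus score near-perfect AUC while most of its saliency mass lands outside clinically relevant regions, which is exactly the inconsistency the study reports.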
Affiliation(s)
- Yuwen Zeng
- Department of Radiological Imaging and Informatics, Tohoku University Graduate School of Medicine, Sendai, Japan
- Xiaoyong Zhang
- National Institute of Technology, Sendai College, Sendai, Japan
- Jiaoyang Wang
- Department of Intelligent Biomedical System Engineering, Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan
- Akihito Usui
- Department of Radiological Imaging and Informatics, Tohoku University Graduate School of Medicine, Sendai, Japan
- Kei Ichiji
- Department of Radiological Imaging and Informatics, Tohoku University Graduate School of Medicine, Sendai, Japan
- Ivo Bukovsky
- Faculty of Science, University of South Bohemia in Ceske Budejovice, Ceske Budejovice, Czech Republic
- Mechanical Engineering, Czech Technical University in Prague, Prague, Czech Republic
- Shuoyan Chou
- Department of Industrial Management, National Taiwan University of Science and Technology, Taipei, Taiwan
- Masato Funayama
- Department of Radiological Imaging and Informatics, Tohoku University Graduate School of Medicine, Sendai, Japan
- Noriyasu Homma
- Department of Radiological Imaging and Informatics, Tohoku University Graduate School of Medicine, Sendai, Japan
5
Bhatia A, Khalvati F, Ertl-Wagner BB. Artificial intelligence in the future landscape of pediatric neuroradiology: opportunities and challenges. AJNR Am J Neuroradiol 2024;45:549-553. PMID: 38176730; DOI: 10.3174/ajnr.a8086. Received 08/22/2023; accepted 10/17/2023.
Abstract
This paper will review how artificial intelligence (AI) will play an increasingly important role in pediatric neuroradiology in the future. A safe, transparent, and human-centric AI is needed to tackle the quadruple aim of improved health outcomes, enhanced patient and family experience, reduced costs, and improved well-being of the healthcare team in pediatric neuroradiology. Equity, diversity and inclusion, data safety, and access to care will always need to be considered. In the next decade, AI algorithms are expected to play an increasingly important role in access to care, workflow management, abnormality detection, classification, response prediction, prognostication, and report generation, as well as in the patient and family experience in pediatric neuroradiology. AI algorithms will also likely play a role in recognizing and flagging rare diseases and in pattern recognition to identify previously unknown disorders. While AI algorithms will play an important role, humans will not only need to be in the loop but at the center of pediatric neuroimaging. AI development and deployment will need to be closely watched and monitored by experts in the field. Patient and data safety need to be at the forefront, and the risks of dependency on technology will need to be contained. The applications and implications of AI in pediatric neuroradiology will differ from those in adult neuroradiology.
Affiliation(s)
- Aashim Bhatia
- From the Children's Hospital of Philadelphia (A.B.), Philadelphia, Pennsylvania
- Farzad Khalvati
- Hospital for Sick Children (F.K., B.B.E.-W.), Toronto, Ontario, Canada
6
Yao J, Chu LC, Patlas M. Applications of artificial intelligence in acute abdominal imaging. Can Assoc Radiol J 2024 (online ahead of print). PMID: 38715249; DOI: 10.1177/08465371241250197.
Abstract
Artificial intelligence (AI) is a rapidly growing field with significant implications for radiology. Acute abdominal pain is a common clinical presentation that can range from benign conditions to life-threatening emergencies. The critical nature of these situations renders emergent abdominal imaging an ideal candidate for AI applications. CT, radiographs, and ultrasound are the most common modalities for imaging evaluation of these patients. For each modality, numerous studies have assessed the performance of AI models for detecting common pathologies, such as appendicitis, bowel obstruction, and cholecystitis. The capabilities of these models range from simple classification to detailed severity assessment. This narrative review explores the evolution, trends, and challenges in AI applications for evaluating acute abdominal pathologies. We review implementations of AI for non-traumatic and traumatic abdominal pathologies, with discussion of potential clinical impact, challenges, and future directions for the technology.
Affiliation(s)
- Jason Yao
- Department of Radiology, McMaster University, Hamilton, ON, Canada
- Linda C Chu
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Michael Patlas
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
7
Aasem M, Javed Iqbal M. Toward explainable AI in radiology: Ensemble-CAM for effective thoracic disease localization in chest X-ray images using weak supervised learning. Front Big Data 2024;7:1366415. PMID: 38756502; PMCID: PMC11096460; DOI: 10.3389/fdata.2024.1366415. Received 01/06/2024; accepted 04/08/2024.
Abstract
Chest X-ray (CXR) imaging is widely employed by radiologists to diagnose thoracic diseases. Recently, many deep learning techniques have been proposed as computer-aided diagnostic (CAD) tools to assist radiologists in minimizing the risk of incorrect diagnosis. From an application perspective, these models have exhibited two major challenges: (1) they require large volumes of annotated data at the training stage and (2) they lack explainable factors to justify their outcomes at the prediction stage. In the present study, we developed a class activation mapping (CAM)-based ensemble model, called Ensemble-CAM, to address both of these challenges via weakly supervised learning by employing explainable AI (XAI) functions. Ensemble-CAM utilizes class labels to predict the location of disease in association with interpretable features. The proposed work leverages ensemble and transfer learning with class activation functions to achieve three objectives: (1) minimizing the dependency on strongly annotated data when locating thoracic diseases, (2) enhancing confidence in predicted outcomes by visualizing their interpretable features, and (3) optimizing cumulative performance via fusion functions. Ensemble-CAM was trained on three CXR image datasets and evaluated through qualitative and quantitative measures via heatmaps and Jaccard indices. The results reflect enhanced performance and reliability in comparison to existing standalone and ensemble models.
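The Jaccard indices used for quantitative evaluation above reduce, for two binarized masks (e.g. a thresholded heatmap and a ground-truth disease region), to intersection over union. A minimal sketch:

```python
import numpy as np

def jaccard_index(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Jaccard index (intersection over union) of two boolean masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, true).sum() / union)
```

Values range from 0 (no overlap) to 1 (identical masks), making the index a convenient scalar for comparing localization quality across models.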
Affiliation(s)
- Muhammad Aasem
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
8
Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: a perspective for healthcare organizations. Artif Intell Med 2024;151:102861. PMID: 38555850; DOI: 10.1016/j.artmed.2024.102861. Received 09/28/2023; revised 03/19/2024; accepted 03/25/2024.
Abstract
Healthcare organizations have realized that artificial intelligence (AI) can provide a competitive edge through personalized patient experiences, improved patient outcomes, early diagnosis, augmented clinician capabilities, enhanced operational efficiencies, or improved medical service accessibility. However, deploying AI-driven tools in the healthcare ecosystem could be challenging. This paper categorizes AI applications in healthcare and comprehensively examines the challenges associated with deploying AI in medical practices at scale. As AI continues to make strides in healthcare, its integration presents various challenges, including production timelines, trust generation, privacy concerns, algorithmic biases, and data scarcity. The paper highlights that flawed business models and wrong workflows in healthcare practices cannot be rectified merely by deploying AI-driven tools. Healthcare organizations should re-evaluate root problems such as misaligned financial incentives (e.g., fee-for-service models), dysfunctional medical workflows (e.g., high rates of patient readmissions), poor care coordination between different providers, fragmented electronic health records systems, and inadequate patient education and engagement models in tandem with AI adoption. This study also explores the need for a cultural shift in viewing AI not as a threat but as an enabler that can enhance healthcare delivery and create new employment opportunities while emphasizing the importance of addressing underlying operational issues. The necessity of investments beyond finance is discussed, emphasizing the importance of human capital, continuous learning, and a supportive environment for AI integration. The paper also highlights the crucial role of clear regulations in building trust, ensuring safety, and guiding the ethical use of AI, calling for coherent frameworks addressing transparency, model accuracy, data quality control, liability, and ethics.
Furthermore, this paper underscores the importance of advancing AI literacy within academia to prepare future healthcare professionals for an AI-driven landscape. Through careful navigation and proactive measures addressing these challenges, the healthcare community can harness AI's transformative power responsibly and effectively, revolutionizing healthcare delivery and patient care. The paper concludes with a vision and strategic suggestions for the future of healthcare with AI, emphasizing thoughtful, responsible, and innovative engagement as the pathway to realizing its full potential to unlock immense benefits for healthcare organizations, physicians, nurses, and patients while proactively mitigating risks.
Affiliation(s)
- Pouyan Esmaeilzadeh
- Department of Information Systems and Business Analytics, College of Business, Florida International University (FIU), Modesto A. Maidique Campus, 11200 S.W. 8th St, RB 261B, Miami, FL 33199, United States
9
Cheng CT, Ooyang CH, Kang SC, Liao CH. Applications of deep learning in trauma radiology: a narrative review. Biomed J 2024:100743 (online ahead of print). PMID: 38679199; DOI: 10.1016/j.bj.2024.100743. Received 11/13/2023; revised 03/26/2024; accepted 04/24/2024.
Abstract
Diagnostic imaging is essential in modern trauma care for initial evaluation and identifying injuries requiring intervention. Deep learning (DL) has become mainstream in medical image analysis and has shown promising efficacy for classification, segmentation, and lesion detection. This narrative review provides the fundamental concepts for developing DL algorithms in trauma imaging and presents an overview of current progress in each modality. DL has been applied to detect free fluid on Focused Assessment with Sonography for Trauma (FAST), traumatic findings on chest and pelvic X-rays, and computed tomography (CT) scans, identify intracranial hemorrhage on head CT, detect vertebral fractures, and identify injuries to organs like the spleen, liver, and lungs on abdominal and chest CT. Future directions involve expanding dataset size and diversity through federated learning, enhancing model explainability and transparency to build clinician trust, and integrating multimodal data to provide more meaningful insights into traumatic injuries. Though some commercial artificial intelligence products are Food and Drug Administration-approved for clinical use in the trauma field, adoption remains limited, highlighting the need for multi-disciplinary teams to engineer practical, real-world solutions. Overall, DL shows immense potential to improve the efficiency and accuracy of trauma imaging, but thoughtful development and validation are critical to ensure these technologies positively impact patient care.
Affiliation(s)
- Chi-Tung Cheng, Chun-Hsiang Ooyang, Shih-Ching Kang, Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
10
Ayhan MS, Neubauer J, Uzel MM, Gelisken F, Berens P. Interpretable detection of epiretinal membrane from optical coherence tomography with deep neural networks. Sci Rep 2024;14:8484. PMID: 38605115; PMCID: PMC11009346; DOI: 10.1038/s41598-024-57798-1. Received 12/12/2022; accepted 03/21/2024.
Abstract
This study aimed to automatically detect epiretinal membranes (ERM) in various OCT scans of the central and paracentral macula region and classify them by size using deep neural networks (DNNs). To this end, 11,061 OCT images were included and graded according to the presence of an ERM and its size (small: 100-1000 µm; large: > 1000 µm). The data set was divided into training, validation, and test sets (75%, 10%, and 15% of the data, respectively). An ensemble of DNNs was trained, and saliency maps were generated using Guided Backprop. OCT scans were also transformed into a one-dimensional value using t-SNE analysis. The DNNs' receiver operating characteristics on the test set showed high performance for no-ERM, small-ERM, and large-ERM cases (AUC: 0.99, 0.92, and 0.99, respectively; 3-way accuracy: 89%), with small ERMs being the most difficult to detect. t-SNE analysis sorted cases by size and, in particular, revealed increased classification uncertainty at the transitions between groups. Saliency maps reliably highlighted ERM, regardless of the presence of other OCT features (i.e., retinal thickening, intraretinal pseudo-cysts, epiretinal proliferation) and entities such as ERM retinoschisis, macular pseudohole, and lamellar macular hole. This study therefore showed that DNNs can reliably detect and grade ERMs according to their size, not only in the fovea but also in the paracentral region, even in cases of hard-to-detect small ERMs. In addition, the generated saliency maps can be used to highlight small ERMs that might otherwise be missed. The proposed model could be used for screening programs or decision-support systems in the future.
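The per-class AUC values above summarize receiver operating characteristics. AUC has a direct probabilistic reading: the chance that a randomly chosen positive case receives a higher score than a randomly chosen negative one (ties counting half). A small sketch computes it that way, without building the full ROC curve; this is an illustration, not the study's evaluation code.

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the probability that a random positive outranks a random
    negative (equivalent to a normalized Mann-Whitney U statistic)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))
```

For large test sets a sort-based implementation is preferable to this O(n*m) double loop, but the result is identical.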
Affiliation(s)
- Murat Seçkin Ayhan
- Institute for Ophthalmic Research, University of Tübingen, Elfriede Aulhorn Str. 7, 72076, Tübingen, Germany
- Jonas Neubauer
- University Eye Clinic, University of Tübingen, Tübingen, Germany
- Mehmet Murat Uzel
- University Eye Clinic, University of Tübingen, Tübingen, Germany
- Department of Ophthalmology, Balıkesir University School of Medicine, Balıkesir, Turkey
- Faik Gelisken
- University Eye Clinic, University of Tübingen, Tübingen, Germany
- Philipp Berens
- Institute for Ophthalmic Research, University of Tübingen, Elfriede Aulhorn Str. 7, 72076, Tübingen, Germany
- Tübingen AI Center, Tübingen, Germany
11
Kim C, Gadgil SU, DeGrave AJ, Omiye JA, Cai ZR, Daneshjou R, Lee SI. Transparent medical image AI via an image-text foundation model grounded in medical literature. Nat Med 2024;30:1154-1165. PMID: 38627560; DOI: 10.1038/s41591-024-02887-x. Received 06/09/2023; accepted 02/27/2024.
Abstract
Building trustworthy and transparent image-based medical artificial intelligence (AI) systems requires the ability to interrogate data and models at all stages of the development pipeline, from training models to post-deployment monitoring. Ideally, the data and associated AI systems could be described using terms already familiar to physicians, but this requires medical datasets densely annotated with semantically meaningful concepts. Here, we present a foundation model approach, named MONET (medical concept retriever), which learns how to connect medical images with text and densely scores images on concept presence to enable important tasks in medical AI development and deployment, such as data auditing, model auditing, and model interpretation. Dermatology provides a demanding use case for the versatility of MONET, due to the heterogeneity in diseases, skin tones, and imaging modalities. We trained MONET on 105,550 dermatological images paired with natural language descriptions from a large collection of medical literature. MONET can accurately annotate concepts across dermatology images, as verified by board-certified dermatologists, competitively with supervised models built on previously concept-annotated dermatology datasets of clinical images. We demonstrate how MONET enables AI transparency across the entire AI system development pipeline, from building inherently interpretable models to dataset and model auditing, including a case study dissecting the results of an AI clinical trial.
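Dense concept scoring in an image-text model of this kind is, at its core, a similarity computation between an image embedding and the text embeddings of concept descriptions. A CLIP-style illustrative sketch with made-up embeddings follows; it is not the MONET implementation, and the embedding dimensions and values are purely hypothetical.

```python
import numpy as np

def concept_scores(image_emb: np.ndarray, concept_embs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one image embedding (D,) and K concept
    text embeddings (K, D). Higher score = concept judged more present."""
    img = image_emb / np.linalg.norm(image_emb)
    con = concept_embs / np.linalg.norm(concept_embs, axis=1, keepdims=True)
    return con @ img  # (K,) one score per concept
```

Ranking images by a concept's score is what enables the auditing workflows described above, e.g. surfacing all images a model associates with "ulceration".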
Collapse
Affiliation(s)
- Chanwoo Kim
  - Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
- Soham U Gadgil
  - Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
- Alex J DeGrave
  - Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
  - Medical Scientist Training Program, University of Washington, Seattle, WA, USA
- Jesutofunmi A Omiye
  - Department of Dermatology, Stanford School of Medicine, Stanford, CA, USA
  - Department of Biomedical Data Science, Stanford School of Medicine, Stanford, CA, USA
- Zhuo Ran Cai
  - Program for Clinical Research and Technology, Stanford University, Stanford, CA, USA
- Roxana Daneshjou
  - Department of Dermatology, Stanford School of Medicine, Stanford, CA, USA
  - Department of Biomedical Data Science, Stanford School of Medicine, Stanford, CA, USA
- Su-In Lee
  - Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
12. Yang L, Oeding JF, de Marinis R, Marigi E, Sanchez-Sotelo J. Deep learning to automatically classify very large sets of preoperative and postoperative shoulder arthroplasty radiographs. J Shoulder Elbow Surg 2024;33:773-780. PMID: 37879598; DOI: 10.1016/j.jse.2023.09.021.
Abstract
BACKGROUND Joint arthroplasty registries usually lack medical imaging information owing to the laborious process of observing and recording, as well as the lack of standard methods to transfer imaging information to the registries, which can limit the investigation of various research questions. Artificial intelligence (AI) algorithms can automate imaging-feature identification with high accuracy and efficiency. With the purpose of enriching shoulder arthroplasty registries with organized imaging information, it was hypothesized that an automated AI algorithm could be developed to classify and organize preoperative and postoperative radiographs from shoulder arthroplasty patients according to laterality, radiographic projection, and implant type. METHODS This study used a cohort of 2303 shoulder radiographs from 1724 shoulder arthroplasty patients. Two observers manually labeled all radiographs according to (1) laterality (left or right), (2) projection (anteroposterior, axillary, or lateral), and (3) whether the radiograph was a preoperative radiograph or showed an anatomic total shoulder arthroplasty or a reverse shoulder arthroplasty. All labeled radiographs were randomly split into developmental and testing sets at the patient level, with stratification. Using 10-fold cross-validation, a 3-task deep-learning algorithm was trained on the developmental set to classify the 3 aforementioned characteristics. The trained algorithm was then evaluated on the testing set using quantitative metrics and visual evaluation techniques. RESULTS The trained algorithm perfectly classified laterality (F1 score [the harmonic mean of precision and sensitivity] of 100% on the testing set). When classifying the imaging projection, the algorithm achieved F1 scores of 99.2%, 100%, and 100% on anteroposterior, axillary, and lateral views, respectively.
When classifying the implant type, the model achieved F1 scores of 100%, 95.2%, and 100% on preoperative radiographs, anatomic total shoulder arthroplasty radiographs, and reverse shoulder arthroplasty radiographs, respectively. Visual evaluation using integrated maps showed that the algorithm focused on the relevant patient body and prosthesis parts for classification. It took the algorithm 20.3 seconds to analyze 502 images. CONCLUSIONS We developed an efficient, accurate, and reliable AI algorithm to automatically identify key imaging features of laterality, imaging view, and implant type in shoulder radiographs. This algorithm represents the first step to automatically classify and organize shoulder radiographs on a large scale in very little time, which will profoundly enrich shoulder arthroplasty registries.
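The F1 scores quoted in this abstract are, as it notes, the harmonic mean of precision and sensitivity (recall). A minimal sketch of that computation from a confusion-matrix tally (illustrative only, not the authors' code):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and sensitivity (recall)."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return 2 * precision * sensitivity / (precision + sensitivity)

# Toy tally: 95 true positives, 5 false positives, 5 false negatives
# precision = 0.95 and sensitivity = 0.95, so F1 = 0.95
print(f1_score(95, 5, 5))  # 0.95
```

Because it is a harmonic mean, F1 is pulled toward the worse of the two components, which is why a 100% F1 implies both perfect precision and perfect sensitivity on the test set.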
Affiliation(s)
- Linjun Yang
  - Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
  - Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Jacob F Oeding
  - Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Rodrigo de Marinis
  - Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Erick Marigi
  - Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Joaquin Sanchez-Sotelo
  - Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
13. Chai Y, Maes V, Boudali AM, Rackel B, Walter WL. Inadequate Annotation and Its Impact on Pelvic Tilt Measurement in Clinical Practice. J Clin Med 2024;13:1394. PMID: 38592694; PMCID: PMC10931960; DOI: 10.3390/jcm13051394.
Abstract
BACKGROUND Accurate pre-surgical templating of the pelvic tilt (PT) angle is essential for hip and spine surgeries, yet the reliability of PT annotations is often compromised by human error, inherent subjectivity, and variations in radiographic quality. This study aims to identify challenges leading to inadequate annotations at the individual-landmark level and to evaluate their impact on PT. METHODS We retrospectively collected 115 consecutive sagittal radiographs for the measurement of PT based on two definitions: the anterior pelvic plane and a line connecting the femoral head's centre to the sacral plate's midpoint. Five annotators performed the measurements, followed by a secondary review to assess the adequacy of the annotations across all annotators. RESULTS Over 60% of the images had at least one landmark considered inadequate by the majority of the reviewers, with poor image quality, outliers, and unrecognized anomalies being the primary causes. Such inadequacies led to discrepancies in the PT measurements ranging from -2° to 2°. CONCLUSION This study shows that landmarks annotated from clear anatomical references were more reliable than those that had to be estimated. It also underscores the prevalence of suboptimal annotations in PT measurements, which extends beyond the scope of traditional statistical analysis and could produce significant deviations in individual cases, potentially affecting clinical outcomes.
Affiliation(s)
- Yuan Chai
  - Sydney Musculoskeletal Health and The Kolling Institute, Northern Clinical School, Faculty of Medicine and Health and the Northern Sydney Local Health District, Sydney, NSW 2006, Australia
- Vincent Maes
  - Department of Orthopaedics and Traumatic Surgery, Royal North Shore Hospital, St. Leonards, NSW 2065, Australia
- A. Mounir Boudali
  - Sydney Musculoskeletal Health and The Kolling Institute, Northern Clinical School, Faculty of Medicine and Health and the Northern Sydney Local Health District, Sydney, NSW 2006, Australia
- Brooke Rackel
  - Sydney Medical School, The University of Sydney, Sydney, NSW 2006, Australia
- William L. Walter
  - Sydney Musculoskeletal Health and The Kolling Institute, Northern Clinical School, Faculty of Medicine and Health and the Northern Sydney Local Health District, Sydney, NSW 2006, Australia
  - Department of Orthopaedics and Traumatic Surgery, Royal North Shore Hospital, St. Leonards, NSW 2065, Australia
14. Tran A, Wang A, Mickaill J, Strbenac D, Larance M, Vernon ST, Grieve SM, Figtree GA, Patrick E, Yang JYH. Construction and optimization of multi-platform precision pathways for precision medicine. Sci Rep 2024;14:4248. PMID: 38378802; PMCID: PMC10879206; DOI: 10.1038/s41598-024-54517-8.
Abstract
In the enduring challenge against disease, advancements in medical technology have empowered clinicians with novel diagnostic platforms. While a single test may sometimes provide a confident diagnosis, additional tests are often required, so clinical pathways must be constructed rigorously to balance diagnostic accuracy against cost-effectiveness. Here, we developed a framework to build multi-platform precision pathways in an automated, unbiased way, recommending the key steps a clinician would take to reach a diagnosis. We achieve this by developing a confidence score used to simulate a clinical scenario in which, at each stage, either a confident diagnosis is made or another test is performed. Our framework provides a range of tools to interpret, visualize and compare the pathways, improving communication and enabling their evaluation on accuracy and cost, specific to different contexts. This framework will guide the development of novel diagnostic pathways for different diseases, accelerating the implementation of precision medicine into clinical practice.
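The simulation this abstract describes (at each stage, stop with a diagnosis if the confidence score clears a threshold, otherwise order the next test) can be sketched roughly as follows. The test names, classifiers, and threshold here are illustrative assumptions, not the authors' actual framework:

```python
def run_pathway(tests, patient, threshold=0.9):
    """Walk an ordered list of (name, classifier) platforms.

    Each classifier maps a patient record to (predicted_label, confidence
    in [0, 1]). Stop at the first confident call; otherwise fall through
    to the last test's best guess. Returns (label, tests_performed).
    """
    path = []
    for name, classifier in tests:
        label, confidence = classifier(patient)
        path.append(name)
        if confidence >= threshold:  # confident enough: diagnose and stop
            return label, path
    return label, path  # no test was confident: report the final guess

# Toy platforms: a cheap clinical score that is unsure, then a costlier
# decisive assay. Both are hypothetical stand-ins.
clinical_score = lambda patient: ("disease", 0.6)
proteomics_assay = lambda patient: ("disease", 0.95)

label, path = run_pathway(
    [("clinical", clinical_score), ("proteomics", proteomics_assay)], {}
)
print(label, path)  # disease ['clinical', 'proteomics']
```

The ordering of tests and the threshold together encode the accuracy-versus-cost trade-off: a lower threshold stops pathways earlier and cheaply, at the price of less confident diagnoses.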
Affiliation(s)
- Andy Tran
  - School of Mathematics and Statistics, The University of Sydney, Camperdown, NSW, Australia
  - Charles Perkins Centre, The University of Sydney, Camperdown, NSW, Australia
  - Sydney Precision Data Science Centre, The University of Sydney, Camperdown, NSW, Australia
- Andy Wang
  - Westmead Medical Institute, Westmead, NSW, Australia
- Jamie Mickaill
  - School of Mathematics and Statistics, The University of Sydney, Camperdown, NSW, Australia
  - School of Computer Science, The University of Sydney, Camperdown, NSW, Australia
- Dario Strbenac
  - School of Mathematics and Statistics, The University of Sydney, Camperdown, NSW, Australia
  - Charles Perkins Centre, The University of Sydney, Camperdown, NSW, Australia
  - Sydney Precision Data Science Centre, The University of Sydney, Camperdown, NSW, Australia
- Mark Larance
  - Charles Perkins Centre, The University of Sydney, Camperdown, NSW, Australia
- Stephen T Vernon
  - Charles Perkins Centre, The University of Sydney, Camperdown, NSW, Australia
  - Kolling Institute of Medical Research, St Leonards, NSW, Australia
- Stuart M Grieve
  - Charles Perkins Centre, The University of Sydney, Camperdown, NSW, Australia
  - Department of Radiology, Royal Prince Alfred Hospital, Camperdown, Australia
- Gemma A Figtree
  - Charles Perkins Centre, The University of Sydney, Camperdown, NSW, Australia
  - Kolling Institute of Medical Research, St Leonards, NSW, Australia
- Ellis Patrick
  - School of Mathematics and Statistics, The University of Sydney, Camperdown, NSW, Australia
  - Charles Perkins Centre, The University of Sydney, Camperdown, NSW, Australia
  - Sydney Precision Data Science Centre, The University of Sydney, Camperdown, NSW, Australia
  - Laboratory of Data Discovery for Health Limited (D24H), Science Park, Hong Kong SAR, China
- Jean Yee Hwa Yang
  - School of Mathematics and Statistics, The University of Sydney, Camperdown, NSW, Australia
  - Charles Perkins Centre, The University of Sydney, Camperdown, NSW, Australia
  - Sydney Precision Data Science Centre, The University of Sydney, Camperdown, NSW, Australia
  - Laboratory of Data Discovery for Health Limited (D24H), Science Park, Hong Kong SAR, China
15. Zhou W, Ye Z, Huang G, Zhang X, Xu M, Liu B, Zhuang B, Tang Z, Wang S, Chen D, Pan Y, Xie X, Wang R, Zhou L. Interpretable artificial intelligence-based app assists inexperienced radiologists in diagnosing biliary atresia from sonographic gallbladder images. BMC Med 2024;22:29. PMID: 38267950; PMCID: PMC10809457; DOI: 10.1186/s12916-024-03247-9.
Abstract
BACKGROUND A previously trained deep learning-based smartphone app provides an artificial intelligence solution to help diagnose biliary atresia (BA) from sonographic gallbladder images, but it might be impractical to launch it in real clinical settings. This study aimed to redevelop a new model using original sonographic images and their derived smartphone photos and then test the new model's performance in assisting radiologists with different levels of experience to detect BA in real-world mimic settings. METHODS A new model was first trained retrospectively using 3659 original sonographic gallbladder images and their 51,226 derived smartphone photos and tested on 11,410 external validation smartphone photos. Afterward, the new model was tested on 333 prospectively collected sonographic gallbladder videos from 207 infants by 14 inexperienced radiologists (9 juniors and 5 seniors) and 4 experienced pediatric radiologists in real-world mimic settings. Diagnostic performance was expressed as the area under the receiver operating characteristic curve (AUC). RESULTS The new model outperformed the previously published model in diagnosing BA on the external validation set (AUC 0.924 vs 0.908, P = 0.004) with higher consistency (kappa value 0.708 vs 0.609). When tested in real-world mimic settings using the 333 sonographic gallbladder videos, the new model performed comparably to experienced pediatric radiologists (average AUC 0.860 vs 0.876) and outperformed junior radiologists (average AUC 0.838 vs 0.773) and senior radiologists (average AUC 0.829 vs 0.749). Furthermore, the new model could help both junior and senior radiologists improve their diagnostic performance, with the average AUC increasing from 0.773 to 0.835 for junior radiologists and from 0.749 to 0.805 for senior radiologists.
CONCLUSIONS The interpretable app-based model showed robust and satisfactory performance in diagnosing biliary atresia, and it could help radiologists with limited experience improve their diagnostic performance in real-world mimic settings.
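The AUC values reported in this abstract can be read as the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case (ties counting one half). A minimal pairwise sketch of that interpretation (illustrative only, not the study's evaluation code):

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability: fraction of (positive, negative)
    pairs where the positive outscores the negative, ties counting 0.5.
    O(n*m) pairwise form; fine for an illustration."""
    pairs = [(p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg]
    return sum(pairs) / len(pairs)

# Toy scores: every positive outranks every negative, so AUC = 1.0
print(auc([0.9, 0.8, 0.7], [0.6, 0.4]))  # 1.0
```

On this reading, the reported AUC of 0.924 means the model ranks a random BA case above a random non-BA case about 92% of the time.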
Affiliation(s)
- Wenying Zhou
  - Department of Medical Ultrasonics, Institute for Diagnostic and Interventional Ultrasound, The First Affiliated Hospital, Sun Yat-Sen University, No. 58, Zhongshan Er Road, Guangzhou, 510080, People's Republic of China
- Zejun Ye
  - School of Computer Science and Engineering, Sun Yat-Sen University, No. 132, East Outer Ring Road, Guangzhou, 510006, People's Republic of China
- Guangliang Huang
  - Department of Medical Ultrasonics, Institute for Diagnostic and Interventional Ultrasound, The First Affiliated Hospital, Sun Yat-Sen University, No. 58, Zhongshan Er Road, Guangzhou, 510080, People's Republic of China
- Xiaoer Zhang
  - Department of Medical Ultrasonics, Institute for Diagnostic and Interventional Ultrasound, The First Affiliated Hospital, Sun Yat-Sen University, No. 58, Zhongshan Er Road, Guangzhou, 510080, People's Republic of China
- Ming Xu
  - Department of Medical Ultrasonics, Institute for Diagnostic and Interventional Ultrasound, The First Affiliated Hospital, Sun Yat-Sen University, No. 58, Zhongshan Er Road, Guangzhou, 510080, People's Republic of China
- Baoxian Liu
  - Department of Medical Ultrasonics, Institute for Diagnostic and Interventional Ultrasound, The First Affiliated Hospital, Sun Yat-Sen University, No. 58, Zhongshan Er Road, Guangzhou, 510080, People's Republic of China
- Bowen Zhuang
  - Department of Medical Ultrasonics, Institute for Diagnostic and Interventional Ultrasound, The First Affiliated Hospital, Sun Yat-Sen University, No. 58, Zhongshan Er Road, Guangzhou, 510080, People's Republic of China
- Zijian Tang
  - Department of Ultrasound, Shenzhen Children's Hospital, No. 7019, Yitian Road, Futian District, Shenzhen, 518026, People's Republic of China
- Shan Wang
  - Department of Ultrasound, Shenzhen Children's Hospital, No. 7019, Yitian Road, Futian District, Shenzhen, 518026, People's Republic of China
- Dan Chen
  - Department of Ultrasound, Guangdong Women and Children's Hospital, No. 521 Xingnan Avenue, Panyu District, Guangzhou, 511400, People's Republic of China
- Yunxiang Pan
  - Department of Ultrasound, Guangdong Women and Children's Hospital, No. 521 Xingnan Avenue, Panyu District, Guangzhou, 511400, People's Republic of China
- Xiaoyan Xie
  - Department of Medical Ultrasonics, Institute for Diagnostic and Interventional Ultrasound, The First Affiliated Hospital, Sun Yat-Sen University, No. 58, Zhongshan Er Road, Guangzhou, 510080, People's Republic of China
- Ruixuan Wang
  - School of Computer Science and Engineering, Sun Yat-Sen University, No. 132, East Outer Ring Road, Guangzhou, 510006, People's Republic of China
- Luyao Zhou
  - Department of Medical Ultrasonics, Institute for Diagnostic and Interventional Ultrasound, The First Affiliated Hospital, Sun Yat-Sen University, No. 58, Zhongshan Er Road, Guangzhou, 510080, People's Republic of China
  - Department of Ultrasound, Shenzhen Children's Hospital, No. 7019, Yitian Road, Futian District, Shenzhen, 518026, People's Republic of China
16. Chae A, Yao MS, Sagreiya H, Goldberg AD, Chatterjee N, MacLean MT, Duda J, Elahi A, Borthakur A, Ritchie MD, Rader D, Kahn CE, Witschey WR, Gee JC. Strategies for Implementing Machine Learning Algorithms in the Clinical Practice of Radiology. Radiology 2024;310:e223170. PMID: 38259208; PMCID: PMC10831483; DOI: 10.1148/radiol.223170.
Abstract
Despite recent advancements in machine learning (ML) applications in health care, few of these advances have translated into tangible benefits for clinical medicine in the hospital setting. To facilitate clinical adoption of ML methods, this review proposes a standardized framework for the step-by-step implementation of artificial intelligence into the clinical practice of radiology that focuses on three key components: problem identification, stakeholder alignment, and pipeline integration. A review of the recent literature and empirical evidence in radiologic imaging applications justifies this approach and offers a discussion on structuring implementation efforts to help other hospital practices leverage ML to improve patient care. Clinical trial registration no. 04242667. © RSNA, 2024. Supplemental material is available for this article.
Affiliation(s)
- Hersh Sagreiya, Ari D. Goldberg, Neil Chatterjee, Matthew T. MacLean, Jeffrey Duda, Ameena Elahi, Arijitt Borthakur, Marylyn D. Ritchie, Daniel Rader, Charles E. Kahn
  - From the Departments of Bioengineering (M.S.Y.), Radiology (H.S., N.C., M.T.M., J.D., A.B., C.E.K., W.R.W., J.C.G.), Genetics (M.D.R.), and Medicine (D.R.), Perelman School of Medicine (A.C., M.S.Y., H.S., A.B., C.E.K., W.R.W., J.C.G.), University of Pennsylvania, 3400 Civic Center Blvd, Philadelphia, PA 19104; Department of Radiology, Loyola University Medical Center, Maywood, Ill (A.D.G.); Department of Information Services, University of Pennsylvania, Philadelphia, Pa (A.E.); and Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pa (A.B.)
17. Cacciamani GE, Chen A, Gill IS, Hung AJ. Artificial intelligence and urology: ethical considerations for urologists and patients. Nat Rev Urol 2024;21:50-59. PMID: 37524914; DOI: 10.1038/s41585-023-00796-1.
Abstract
The use of artificial intelligence (AI) in medicine and in urology specifically has increased over the past few years, during which time it has enabled optimization of patient workflow, increased diagnostic accuracy and enhanced computer analysis of radiological and pathological images. However, before further use of AI is undertaken, possible ethical issues need to be evaluated to improve understanding of this technology and to protect patients and providers. Possible ethical issues that require consideration when applying AI in clinical practice include patient safety, cybersecurity, transparency and interpretability of the data, inclusivity and equity, fostering responsibility and accountability, and the preservation of providers' decision-making and autonomy. Ethical principles for the application of AI to health care and in urology are proposed to guide urologists, patients and regulators to improve use of AI technologies and guide policy-making.
Affiliation(s)
- Giovanni E Cacciamani
  - The Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
  - AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
  - Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Andrew Chen
  - The Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
  - AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Inderbir S Gill
  - The Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
  - AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Andrew J Hung
  - The Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
  - AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
18. Samala RK, Drukker K, Shukla-Dave A, Chan HP, Sahiner B, Petrick N, Greenspan H, Mahmood U, Summers RM, Tourassi G, Deserno TM, Regge D, Näppi JJ, Yoshida H, Huo Z, Chen Q, Vergara D, Cha KH, Mazurchuk R, Grizzard KT, Huisman H, Morra L, Suzuki K, Armato SG, Hadjiiski L. AI and machine learning in medical imaging: key points from development to translation. BJR Artif Intell 2024;1:ubae006. PMID: 38828430; PMCID: PMC11140849; DOI: 10.1093/bjrai/ubae006.
Abstract
Innovation in medical imaging artificial intelligence (AI)/machine learning (ML) demands extensive data collection, algorithmic advancements, and rigorous performance assessments encompassing aspects such as generalizability, uncertainty, bias, fairness, trustworthiness, and interpretability. Achieving widespread integration of AI/ML algorithms into diverse clinical tasks will demand a steadfast commitment to overcoming issues in model design, development, and performance assessment. The complexities of AI/ML clinical translation present substantial challenges, requiring engagement with relevant stakeholders, assessment of cost-effectiveness for user and patient benefit, timely dissemination of information relevant to robust functioning throughout the AI/ML lifecycle, consideration of regulatory compliance, and feedback loops for real-world performance evidence. This commentary addresses several hurdles for the development and adoption of AI/ML technologies in medical imaging. Comprehensive attention to these underlying and often subtle factors is critical not only for tackling the challenges but also for exploring novel opportunities for the advancement of AI in radiology.
Affiliation(s)
- Ravi K Samala
  - Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, United States
- Karen Drukker
  - Department of Radiology, University of Chicago, Chicago, IL, 60637, United States
- Amita Shukla-Dave
  - Department of Radiology, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States
  - Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States
- Heang-Ping Chan
  - Department of Radiology, University of Michigan, Ann Arbor, MI, 48109, United States
- Berkman Sahiner
  - Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, United States
- Nicholas Petrick
  - Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, United States
- Hayit Greenspan
  - Biomedical Engineering and Imaging Institute, Department of Radiology, Icahn School of Medicine at Mt Sinai, New York, NY, 10029, United States
- Usman Mahmood
  - Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States
- Ronald M Summers
  - Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, 20892, United States
- Georgia Tourassi
  - Computing and Computational Sciences Directorate, Oak Ridge National Laboratory, Oak Ridge, TN, 37830, United States
- Thomas M Deserno
  - Peter L. Reichertz Institute for Medical Informatics, TU Braunschweig and Hannover Medical School, Braunschweig, Niedersachsen, 38106, Germany
- Daniele Regge
  - Radiology Unit, Candiolo Cancer Institute, FPO-IRCCS, Candiolo, 10060, Italy
  - Department of Translational Research and of New Surgical and Medical Technologies of the University of Pisa, Pisa, 56126, Italy
- Janne J Näppi
  - 3D Imaging Research, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, United States
- Hiroyuki Yoshida
  - 3D Imaging Research, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, United States
- Zhimin Huo
  - Tencent America, Palo Alto, CA, 94306, United States
- Quan Chen
  - Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ, 85054, United States
- Daniel Vergara
  - Department of Radiology, University of Washington, Seattle, WA, 98195, United States
- Kenny H Cha
  - Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, United States
- Richard Mazurchuk
  - Division of Cancer Prevention, National Cancer Institute, National Institutes of Health, Bethesda, MD, 20892, United States
- Kevin T Grizzard
  - Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, 06510, United States
- Henkjan Huisman
  - Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, Gelderland, 6525 GA, Netherlands
- Lia Morra
  - Department of Control and Computer Engineering, Politecnico di Torino, Torino, Piemonte, 10129, Italy
- Kenji Suzuki
  - Institute of Innovative Research, Tokyo Institute of Technology, Midori-ku, Yokohama, Kanagawa, 226-8503, Japan
- Samuel G Armato
  - Department of Radiology, University of Chicago, Chicago, IL, 60637, United States
- Lubomir Hadjiiski
  - Department of Radiology, University of Michigan, Ann Arbor, MI, 48109, United States
19
Morales MA, Manning WJ, Nezafat R. Present and Future Innovations in AI and Cardiac MRI. Radiology 2024; 310:e231269. PMID: 38193835; PMCID: PMC10831479; DOI: 10.1148/radiol.231269. Received 05/17/2023; revised 10/21/2023; accepted 10/26/2023.
Abstract
Cardiac MRI is used to diagnose and treat patients with a multitude of cardiovascular diseases. Despite the growth of clinical cardiac MRI, complicated image prescriptions and long acquisition protocols limit the specialty and restrain its impact on the practice of medicine. Artificial intelligence (AI), the ability to mimic human intelligence in learning and performing tasks, will impact nearly all aspects of MRI. Deep learning (DL) primarily uses an artificial neural network to learn a specific task from example data sets. Self-driving scanners, in which AI automatically controls cardiac image prescriptions, are increasingly available. These scanners offer faster image collection with higher spatial and temporal resolution, eliminating the need for cardiac triggering or breath holding. In the future, fully automated inline image analysis will most likely provide all contour drawings and initial measurements to the reader. Advanced analysis using radiomic or DL features may provide new insights and information not typically extracted in the current analysis workflow. AI may further help integrate these features with clinical, genetic, wearable-device, and "omics" data to improve patient outcomes. This article presents an overview of AI and its application in cardiac MRI, including image acquisition, reconstruction, and processing, and opportunities for more personalized cardiovascular care through extraction of novel imaging markers.
Affiliation(s)
- Manuel A. Morales, Warren J. Manning, Reza Nezafat: From the Department of Medicine, Cardiovascular Division (M.A.M., W.J.M., R.N.), and Department of Radiology (W.J.M.), Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Ave, Boston, MA 02215
20
Guermazi A, Omoumi P, Tordjman M, Fritz J, Kijowski R, Regnard NE, Carrino J, Kahn CE, Knoll F, Rueckert D, Roemer FW, Hayashi D. How AI May Transform Musculoskeletal Imaging. Radiology 2024; 310:e230764. PMID: 38165245; PMCID: PMC10831478; DOI: 10.1148/radiol.230764. Received 03/26/2023; revised 06/18/2023; accepted 07/11/2023.
Abstract
While musculoskeletal imaging volumes are increasing, there is a relative shortage of subspecialized musculoskeletal radiologists to interpret the studies. Will artificial intelligence (AI) be the solution? If AI is to be the solution, wide implementation of AI-supported data acquisition methods in clinical practice will require establishing trusted and reliable results, which in turn demands close collaboration between core AI researchers and clinical radiologists. Upon successful clinical implementation, a wide variety of AI-based tools can improve the musculoskeletal radiologist's workflow by triaging imaging examinations, helping with image interpretation, and decreasing reporting time. Additional AI applications may also be helpful for business, education, and research purposes if successfully integrated into the daily practice of musculoskeletal radiology. The question is not whether AI will replace radiologists, but rather how musculoskeletal radiologists can take advantage of AI to enhance their expert capabilities.
Affiliation(s)
- Ali Guermazi, Patrick Omoumi, Mickael Tordjman, Jan Fritz, Richard Kijowski, Nor-Eddine Regnard, John Carrino, Charles E. Kahn, Florian Knoll, Daniel Rueckert, Frank W. Roemer, Daichi Hayashi: From the Department of Radiology, Boston University School of Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.); Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu Hospital and University Paris Cité, Paris, France (M.T.); Department of Radiology, New York University Grossman School of Medicine, New York, NY (J.F., R.K.); Gleamer, Paris, France (N.E.R.); Réseau d'Imagerie Sud Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.); Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell Medicine, New York, NY (J.C.); Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.); Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and Radiology (F.W.R.), Universitätsklinikum Erlangen & Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany (F.K.); School of Medicine & Computation, Information and Technology, Klinikum rechts der Isar, Technical University Munich, München, Germany (D.R.); Department of Computing, Imperial College London, London, England (D.R.); and Department of Radiology, Tufts Medical Center, Tufts University School of Medicine, Boston, Mass (D.H.)
21
Gräfe D, Beeskow AB, Pfäffle R, Rosolowski M, Chung TS, DiFranco MD. Automated bone age assessment in a German pediatric cohort: agreement between an artificial intelligence software and the manual Greulich and Pyle method. Eur Radiol 2023. PMID: 38151536; DOI: 10.1007/s00330-023-10543-0. Received 10/17/2023; revised 11/12/2023; accepted 12/08/2023.
Abstract
OBJECTIVES This study aimed to evaluate the performance of artificial intelligence (AI) software in bone age (BA) assessment according to the Greulich and Pyle (G&P) method in a German pediatric cohort. MATERIALS AND METHODS Hand radiographs of 306 pediatric patients aged 1-18 years (153 boys, 153 girls, 18 patients per year of life), including a subgroup of 243 patients in the age range for which the software is declared, were analyzed retrospectively. Two pediatric radiologists and one endocrinologist made independent blinded BA reads. Subsequently, the AI software estimated BA from the same images. Agreement, accuracy, and interchangeability between AI and expert readers were assessed. RESULTS The mean difference between the average of the three expert readers and the AI software was 0.39 months, with a mean absolute difference (MAD) of 6.8 months (1.73 months mean difference and 6.0 months MAD in the intended-use subgroup). Performance in boys was slightly worse than in girls (MAD 6.3 months vs. 5.6 months). Regression analyses showed constant bias (slope of 1.01 with a 95% CI of 0.99-1.02). The estimated equivalence index for interchangeability was -14.3 (95% CI -27.6 to -1.1). CONCLUSION In terms of BA assessment, the new AI software was interchangeable with expert readers using the G&P method. CLINICAL RELEVANCE STATEMENT The use of AI software enables every physician to provide expert-reader quality in bone age assessment. KEY POINTS • A novel artificial intelligence-based software for bone age estimation has not yet been clinically validated. • Artificial intelligence showed good agreement and high accuracy with expert radiologists performing bone age assessment. • Artificial intelligence was shown to be interchangeable with expert readers.
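As a minimal sketch of the agreement statistics quoted in this abstract (mean difference, MAD, and a regression slope as a proportional-bias check), the following uses synthetic data, not the study's measurements; all values and variable names are illustrative assumptions:

```python
import numpy as np

# Synthetic illustration (NOT the study's data) of AI-vs-expert agreement:
# mean difference, mean absolute difference (MAD), and regression slope.
rng = np.random.default_rng(0)
expert_avg = rng.uniform(12, 216, size=300)        # expert bone age in months
ai = expert_avg + rng.normal(0.4, 7.0, size=300)   # simulated AI estimates

mean_diff = float(np.mean(ai - expert_avg))        # systematic (constant) bias
mad = float(np.mean(np.abs(ai - expert_avg)))      # mean absolute difference
slope, intercept = np.polyfit(expert_avg, ai, 1)   # slope near 1 => no proportional bias
```

A slope close to 1 with a small mean difference is the pattern the abstract describes: a constant offset without proportional bias.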
Affiliation(s)
- Daniel Gräfe: Department of Pediatric Radiology, University Hospital, Leipzig, Germany
- Roland Pfäffle: Department of Pediatrics, University Hospital, Leipzig, Germany
22
Zheng Y, Rowell B, Chen Q, Kim JY, Kontar RA, Yang XJ, Lester CA. Designing Human-Centered AI to Prevent Medication Dispensing Errors: Focus Group Study With Pharmacists. JMIR Form Res 2023; 7:e51921. PMID: 38145475; PMCID: PMC10775023; DOI: 10.2196/51921. Received 08/17/2023; revised 11/17/2023; accepted 11/22/2023. Open access.
Abstract
BACKGROUND Medication errors, including dispensing errors, represent a substantial worldwide health risk with significant implications in terms of morbidity, mortality, and financial costs. Although pharmacists use methods like barcode scanning and double-checking for dispensing verification, these measures exhibit limitations. The application of artificial intelligence (AI) in pharmacy verification emerges as a potential solution, offering precision, rapid data analysis, and the ability to recognize medications through computer vision. For AI to be embraced, it must be designed with the end user in mind, fostering trust, clear communication, and seamless collaboration between AI and pharmacists. OBJECTIVE This study aimed to gather pharmacists' feedback in a focus group setting to help inform the initial design of the user interface and iterative designs of the AI prototype. METHODS A multidisciplinary research team engaged pharmacists in a 3-stage process to develop a human-centered AI system for medication dispensing verification. To design the AI model, we used a Bayesian neural network that predicts the dispensed pills' National Drug Code (NDC). Discussion scripts regarding how to design the system and feedback in focus groups were collected through audio recordings and professionally transcribed, followed by a content analysis guided by the Systems Engineering Initiative for Patient Safety and Human-Machine Teaming theoretical frameworks. RESULTS A total of 8 pharmacists participated in 3 rounds of focus groups to identify current challenges in medication dispensing verification, brainstorm solutions, and provide feedback on our AI prototype. Participants considered several teaming scenarios, generally favoring a hybrid teaming model where the AI assists in the verification process and a pharmacist intervenes based on medication risk level and the AI's confidence level. 
Pharmacists highlighted the need for improving the interpretability of AI systems, such as adding stepwise checkmarks, probability scores, and details about drugs the AI model frequently confuses with the target drug. Pharmacists emphasized the need for simplicity and accessibility. They favored displaying only essential information to prevent overwhelming users with excessive data. Specific design features, such as juxtaposing pill images with their packaging for quick comparisons, were requested. Pharmacists preferred accept, reject, or unsure options. The final prototype interface included (1) checkmarks to compare pill characteristics between the AI-predicted NDC and the prescription's expected NDC, (2) a histogram showing predicted probabilities for the AI-identified NDC, (3) an image of an AI-provided "confused" pill, and (4) an NDC match status (ie, match, unmatched, or unsure). CONCLUSIONS In partnership with pharmacists, we developed a human-centered AI prototype designed to enhance AI interpretability and foster trust. This initiative emphasized human-machine collaboration and positioned AI as an augmentative tool rather than a replacement. This study highlights the process of designing a human-centered AI for dispensing verification, emphasizing its interpretability, confidence visualization, and collaborative human-machine teaming styles.
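The hybrid teaming model this abstract describes, where the AI auto-verifies only when confident and defers to a pharmacist based on medication risk and confidence, can be sketched as follows. This is a hypothetical illustration, not the study's implementation; the function name, threshold value, and NDC strings are assumptions:

```python
# Hypothetical routing rule for hybrid human-AI dispensing verification:
# the AI verifies only high-confidence, low-risk dispenses and refers
# everything else to a pharmacist.
def route_verification(predicted_ndc: str, expected_ndc: str,
                       confidence: float, high_risk: bool,
                       threshold: float = 0.95) -> str:
    """Return the NDC match status plus a routing decision."""
    if high_risk or confidence < threshold:
        return "unsure: refer to pharmacist"     # risky drug or low AI confidence
    if predicted_ndc == expected_ndc:
        return "match: AI-verified"
    return "unmatched: refer to pharmacist"      # confident disagreement
```

The three outcomes mirror the match/unmatched/unsure statuses in the final prototype interface.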
Affiliation(s)
- Yifan Zheng: Department of Clinical Pharmacy, College of Pharmacy, University of Michigan, Ann Arbor, MI, United States
- Brigid Rowell: Department of Clinical Pharmacy, College of Pharmacy, University of Michigan, Ann Arbor, MI, United States
- Qiyuan Chen: Department of Industrial and Operations Engineering, College of Engineering, University of Michigan, Ann Arbor, MI, United States
- Jin Yong Kim: Department of Industrial and Operations Engineering, College of Engineering, University of Michigan, Ann Arbor, MI, United States
- Raed Al Kontar: Department of Industrial and Operations Engineering, College of Engineering, University of Michigan, Ann Arbor, MI, United States
- X Jessie Yang: Department of Industrial and Operations Engineering, College of Engineering, University of Michigan, Ann Arbor, MI, United States
- Corey A Lester: Department of Clinical Pharmacy, College of Pharmacy, University of Michigan, Ann Arbor, MI, United States
23
Rebsamen M, Jin BZ, Klail T, De Beukelaer S, Barth R, Rezny-Kasprzak B, Ahmadli U, Vulliemoz S, Seeck M, Schindler K, Wiest R, Radojewski P, Rummel C. Clinical Evaluation of a Quantitative Imaging Biomarker Supporting Radiological Assessment of Hippocampal Sclerosis. Clin Neuroradiol 2023; 33:1045-1053. PMID: 37358608; PMCID: PMC10654177; DOI: 10.1007/s00062-023-01308-9. Received 03/01/2023; accepted 05/09/2023.
Abstract
OBJECTIVE To evaluate the influence of quantitative reports (QReports) on the radiological assessment of hippocampal sclerosis (HS) from MRI of patients with epilepsy in a setting mimicking clinical reality. METHODS The study included 40 patients with epilepsy, among them 20 with structural abnormalities in the mesial temporal lobe (13 with HS). Six raters blinded to the diagnosis assessed the 3T MRI in two rounds, first using MRI only and later with both MRI and the QReport. Results were evaluated using inter-rater agreement (Fleiss' κ) and comparison with a consensus of two radiological experts derived from clinical and imaging data, including 7T MRI. RESULTS For the primary outcome, diagnosis of HS, the mean accuracy of the raters improved from 77.5% with MRI only to 86.3% with the additional QReport (effect size [Formula: see text]). Inter-rater agreement increased from [Formula: see text] to [Formula: see text]. Five of the six raters reached higher accuracies, and all reported higher confidence when using the QReports. CONCLUSION In this pre-use clinical evaluation study, we demonstrated the clinical feasibility, usefulness, and potential impact of a previously suggested imaging biomarker for radiological assessment of HS.
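Fleiss' kappa, the inter-rater agreement statistic used in this study, can be computed from a subjects-by-categories count matrix. The sketch below is an illustrative implementation with synthetic counts, not the study's data:

```python
import numpy as np

# Fleiss' kappa for m raters classifying n subjects into k categories.
def fleiss_kappa(counts: np.ndarray) -> float:
    """counts[i, j] = number of raters who assigned subject i to category j."""
    n = counts.shape[0]
    m = counts[0].sum()                                       # raters per subject
    p_j = counts.sum(axis=0) / (n * m)                        # overall category proportions
    P_i = (np.sum(counts ** 2, axis=1) - m) / (m * (m - 1))   # per-subject agreement
    P_bar, P_e = P_i.mean(), float(np.sum(p_j ** 2))          # observed vs. chance agreement
    return float((P_bar - P_e) / (1 - P_e))
```

For example, six raters in perfect agreement on every subject yield κ = 1, while a uniform 3-3 split on every subject yields a negative κ (worse than chance).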
Affiliation(s)
- Michael Rebsamen
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 10, 3010, Bern, Switzerland
- Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
- Baudouin Zongxin Jin
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 10, 3010, Bern, Switzerland
- Sleep-Wake-Epilepsy-Center, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Tomas Klail
- University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Sophie De Beukelaer
- University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Rike Barth
- Sleep-Wake-Epilepsy-Center, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Beata Rezny-Kasprzak
- University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Uzeyir Ahmadli
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 10, 3010, Bern, Switzerland
- Serge Vulliemoz
- EEG and Epilepsy Unit, Department of Clinical Neurosciences, Geneva University Hospitals and Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Margitta Seeck
- EEG and Epilepsy Unit, Department of Clinical Neurosciences, Geneva University Hospitals and Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Kaspar Schindler
- Sleep-Wake-Epilepsy-Center, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Roland Wiest
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 10, 3010, Bern, Switzerland
- Swiss Institute for Translational and Entrepreneurial Medicine, sitem-insel, Bern, Switzerland
- Piotr Radojewski
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 10, 3010, Bern, Switzerland
- Swiss Institute for Translational and Entrepreneurial Medicine, sitem-insel, Bern, Switzerland
- Christian Rummel
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 10, 3010, Bern, Switzerland
24
Pertuz S, Ortega D, Suarez É, Cancino W, Africano G, Rinta-Kiikka I, Arponen O, Paris S, Lozano A. Saliency of breast lesions in breast cancer detection using artificial intelligence. Sci Rep 2023; 13:20545. PMID: 37996504; PMCID: PMC10667547; DOI: 10.1038/s41598-023-46921-3.
Abstract
The analysis of mammograms using artificial intelligence (AI) has shown great potential for assisting breast cancer screening. We use saliency maps to study the role of breast lesions in the decision-making process of AI systems for breast cancer detection in screening mammograms. We retrospectively collected mammograms from 191 women with screen-detected breast cancer and 191 healthy controls matched by age and mammographic system. Two radiologists manually segmented the breast lesions in the mammograms from craniocaudal (CC) and mediolateral oblique (MLO) views. We estimated the detection performance of four deep learning-based AI systems using the area under the ROC curve (AUC) with a 95% confidence interval (CI). We used automatic thresholding on saliency maps from the AI systems to identify the areas of interest on the mammograms. Finally, we measured the overlap between these areas of interest and the segmented breast lesions using Dice's similarity coefficient (DSC). The detection performance of the AI systems ranged from low to moderate (AUCs from 0.525 to 0.694). The overlap between the areas of interest and the breast lesions was low for all the studied methods (median DSC from 4.2% to 38.0%). The AI system with the highest cancer detection performance (AUC = 0.694, CI 0.662-0.726) showed the lowest overlap (DSC = 4.2%) with breast lesions. The areas of interest found by saliency analysis of the AI systems showed poor overlap with breast lesions. These results suggest that AI systems with the highest performance do not solely rely on localized breast lesions for their decision-making in cancer detection; rather, they incorporate information from large image regions. This work contributes to the understanding of the role of breast lesions in cancer detection using AI.
Affiliation(s)
- Said Pertuz
- Escuela de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia
- David Ortega
- Escuela de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia
- Érika Suarez
- Escuela de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia
- William Cancino
- Escuela de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia
- Gerson Africano
- Escuela de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia
- Irina Rinta-Kiikka
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Department of Radiology, Tampere University Hospital, Tampere, Finland
- Otso Arponen
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Department of Radiology, Tampere University Hospital, Tampere, Finland
- Sara Paris
- Departamento de Imágenes Diagnósticas, Universidad Nacional de Colombia, Bogotá, Colombia
- Alfonso Lozano
- Departamento de Imágenes Diagnósticas, Universidad Nacional de Colombia, Bogotá, Colombia
25
Suman G, Koo CW. Recent Advancements in Computed Tomography Assessment of Fibrotic Interstitial Lung Diseases. J Thorac Imaging 2023; 38:S7-S18. PMID: 37015833; DOI: 10.1097/rti.0000000000000705.
Abstract
Interstitial lung disease (ILD) is a heterogeneous group of disorders with complex and varied imaging manifestations and prognosis. High-resolution computed tomography (HRCT) is the current standard-of-care imaging tool for ILD assessment. However, visual evaluation of HRCT is limited by interobserver variation and poor sensitivity for subtle changes. Such challenges have led to tremendous recent research interest in objective and reproducible methods to examine ILDs. Computer-aided CT analysis, including texture analysis and machine learning methods, has recently been shown to be a viable supplement to traditional visual assessment through improved characterization and quantification of ILDs. These quantitative tools have not only been shown to correlate well with pulmonary function tests and patient outcomes but are also useful in disease diagnosis, surveillance, and management. In this review, we provide an overview of recent computer-aided tools in the diagnosis, prognosis, and longitudinal evaluation of fibrotic ILDs, and we outline some of the pitfalls and challenges that have precluded further advancement of these tools, as well as potential solutions and future endeavors.
Affiliation(s)
- Garima Suman
- Division of Thoracic Imaging, Mayo Clinic, Rochester, MN
26
Ma SX, Dhanaliwala AH, Rudie JD, Rauschecker AM, Roberts-Wolfe D, Haddawy P, Kahn CE. Bayesian Networks in Radiology. Radiol Artif Intell 2023; 5:e210187. PMID: 38074791; PMCID: PMC10698603; DOI: 10.1148/ryai.210187.
Abstract
A Bayesian network is a graphical model that uses probability theory to represent relationships among its variables. The model is a directed acyclic graph whose nodes represent variables, such as the presence of a disease or an imaging finding. Connections between nodes express causal influences between variables as probability values. Bayesian networks can learn their structure (nodes and connections) and/or conditional probability values from data. Bayesian networks offer several advantages: (a) they can efficiently perform complex inferences, (b) reason from cause to effect or vice versa, (c) assess counterfactual data, (d) integrate observations with canonical ("textbook") knowledge, and (e) explain their reasoning. Bayesian networks have been employed in a wide variety of applications in radiology, including diagnosis and treatment planning. Unlike deep learning approaches, Bayesian networks have not been applied to computer vision. However, hybrid artificial intelligence systems have combined deep learning models with Bayesian networks, where the deep learning model identifies findings in medical images and the Bayesian network formulates and explains a diagnosis from those findings. One can apply a Bayesian network's probabilistic knowledge to integrate clinical and imaging findings to support diagnosis, treatment planning, and clinical decision-making. This article reviews the fundamental principles of Bayesian networks and summarizes their applications in radiology. Keywords: Bayesian Network, Machine Learning, Abdominal Imaging, Musculoskeletal Imaging, Breast Imaging, Neurologic Imaging, Radiology Education Supplemental material is available for this article. © RSNA, 2023.
Affiliation(s)
- Shawn X. Ma
- Ali H. Dhanaliwala
- Jeffrey D. Rudie
- Andreas M. Rauschecker
- Douglas Roberts-Wolfe
- Peter Haddawy
- Charles E. Kahn
- From the Department of Radiology (S.X.M., A.H.D., D.R.F., C.E.K.) and Institute for Biomedical Informatics (C.E.K.), University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104; Department of Radiology, Scripps Clinic, La Jolla, Calif (J.D.R.); Department of Radiology, University of California San Diego, La Jolla, Calif (J.D.R.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (A.M.R.); Faculty of Information and Communication Technology, Mahidol University, Bangkok, Thailand (P.H.); and Bremen Spatial Cognition Center, University of Bremen, Bremen, Germany (P.H.)
27
Osadebey M, Liu Q, Fuster-Garcia E, Emblem KE. Interpreting deep learning models for glioma survival classification using visualization and textual explanations. BMC Med Inform Decis Mak 2023; 23:225. PMID: 37853371; PMCID: PMC10583453; DOI: 10.1186/s12911-023-02320-2.
Abstract
BACKGROUND Saliency-based algorithms are able to explain the relationship between input image pixels and deep-learning model predictions. However, it may be difficult to assess the clinical value of the most important image features and the model predictions derived from the raw saliency map. This study proposes to enhance the interpretability of a saliency-based deep learning model for survival classification of patients with gliomas by extracting domain knowledge-based information from the raw saliency maps. MATERIALS AND METHODS Our study includes presurgical T1-weighted (pre- and post-contrast), T2-weighted and T2-FLAIR MRIs of 147 glioma patients from the BraTS 2020 challenge dataset, aligned to the SRI24 anatomical atlas. Each image exam includes a segmentation mask and the overall survival (OS) from time of diagnosis (in days). This dataset was divided into training and validation datasets. The extent of surgical resection for all patients was gross total resection. We categorized the data into 42 short-term, 30 medium-term, and 46 long-term survivors. A 3D convolutional neural network (CNN) trained on brain tumor MRI volumes classified all patients based on expected prognosis of short-term, medium-term, or long-term survival. We extended the popular 2D Gradient-weighted Class Activation Mapping (Grad-CAM) to 3D for the generation of saliency maps and combined it with the anatomical atlas to extract brain regions, brain volume and probability maps that reveal domain knowledge-based information. RESULTS For each OS class, a larger tumor volume was associated with a shorter OS. There were 10, 7 and 27 tumor locations in brain regions uniquely associated with short-term, medium-term, and long-term survival, respectively.
Tumors located in the transverse temporal gyrus, fusiform gyrus, and pallidum were associated with short-, medium- and long-term survival, respectively. The visual and textual information displayed during OS prediction highlights tumor location and the contribution of different brain regions to the prediction of OS. This design feature assists the physician in analyzing and understanding the different stages of model prediction. CONCLUSIONS Domain knowledge-based information extracted from the saliency map can enhance the interpretability of deep learning models. Our findings show that tumors overlapping eloquent brain regions are associated with short patient survival.
Affiliation(s)
- Michael Osadebey
- Department of Physics and Computational Radiology, Division of Radiology and Nuclear Medicine, Oslo University Hospital, Sognsvannsveien 20, 0372, Oslo, Norway
- Qinghui Liu
- Department of Physics and Computational Radiology, Division of Radiology and Nuclear Medicine, Oslo University Hospital, Sognsvannsveien 20, 0372, Oslo, Norway
- Elies Fuster-Garcia
- Biomedical Data Science Laboratory, Instituto Universitario de Tecnologias de la Informacion Comunicaciones, Universitat Politècnica de València, 46022, Valencia, Spain
- Kyrre E Emblem
- Department of Physics and Computational Radiology, Division of Radiology and Nuclear Medicine, Oslo University Hospital, Sognsvannsveien 20, 0372, Oslo, Norway
28
Wei L, Niraula D, Gates EDH, Fu J, Luo Y, Nyflot MJ, Bowen SR, El Naqa IM, Cui S. Artificial intelligence (AI) and machine learning (ML) in precision oncology: a review on enhancing discoverability through multiomics integration. Br J Radiol 2023; 96:20230211. PMID: 37660402; PMCID: PMC10546458; DOI: 10.1259/bjr.20230211.
Abstract
Multiomics data, including imaging radiomics and various types of molecular biomarkers, have been increasingly investigated for better diagnosis and therapy in the era of precision oncology. Artificial intelligence (AI), including machine learning (ML) and deep learning (DL) techniques, combined with the exponential growth of multiomics data, may have great potential to revolutionize cancer subtyping, risk stratification, prognostication, prediction, and clinical decision-making. In this article, we first present different categories of multiomics data and their roles in diagnosis and therapy. Second, AI-based data fusion and modeling methods, as well as different validation schemes, are illustrated. Third, applications and examples of multiomics research in oncology are demonstrated. Finally, the challenges regarding dataset heterogeneity, availability of omics data, and validation of the research are discussed. The transition of multiomics research to real clinics still requires consistent efforts in standardizing omics data collection and analysis, building computational infrastructure for data sharing and storage, developing advanced methods to improve data fusion and interpretability, and ultimately conducting large-scale prospective clinical trials to fill the gap between study findings and clinical benefits.
Affiliation(s)
- Lise Wei
- Department of Radiation Oncology, University of Michigan, Michigan, United States
- Dipesh Niraula
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, United States
- Evan D. H. Gates
- Department of Radiation Oncology, University of Washington, Washington, United States
- Jie Fu
- Department of Radiation Oncology, Stanford University, Stanford, California, United States
- Yi Luo
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, United States
- Matthew J. Nyflot
- Department of Radiation Oncology, University of Washington, Washington, United States
- Stephen R. Bowen
- Department of Radiation Oncology, University of Washington, Washington, United States
- Issam M El Naqa
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, United States
- Sunan Cui
- Department of Radiation Oncology, University of Washington, Washington, United States
29
Yang N, Yue H, Zhang B, Chen J, Chu Q, Wang J, Yu X, Jian L, Bin Y, Liu S, Liu J, Zeng L, Yang H, Zhou C, Jiang W, Liu L, Zhang Y, Xiong Y, Wang Z. Predicting pathological response to neoadjuvant or conversion chemoimmunotherapy in stage IB-III non-small cell lung cancer patients using radiomic features. Thorac Cancer 2023; 14:2869-2876. PMID: 37596822; PMCID: PMC10542462; DOI: 10.1111/1759-7714.15052.
Abstract
BACKGROUND To develop a radiomics model based on chest computed tomography (CT) for the prediction of a pathological complete response (pCR) after neoadjuvant or conversion chemoimmunotherapy (CIT) in patients with non-small cell lung cancer (NSCLC). METHODS Patients with stage IB-III NSCLC who received neoadjuvant or conversion CIT between September 2019 and July 2021 at Hunan Cancer Hospital, Xiangya Hospital, and Union Hospital were retrospectively collected. The least absolute shrinkage and selection operator (LASSO) was used to screen features. Then, model 1 (five radiomics features before CIT), model 2 (four radiomics features after CIT and before surgery), and model 3 were constructed for the prediction of pCR. Model 3 included all nine features of models 1 and 2 and was later named the neoadjuvant chemoimmunotherapy-related pathological response prediction model (NACIP). RESULTS This study included 110 patients: 77 in the training set and 33 in the validation set. Thirty-nine (35.5%) patients achieved a pCR. Model 1 showed an area under the curve (AUC) of 0.65, 64% accuracy, 71% specificity, and 50% sensitivity, while model 2 displayed an AUC of 0.81, 73% accuracy, 62% specificity, and 92% sensitivity. In comparison, NACIP yielded a good predictive value, with an AUC of 0.85, 81% accuracy, 81% specificity, and 83% sensitivity in the validation set. CONCLUSION NACIP may be a potential model for the early prediction of pCR in patients with NSCLC treated with neoadjuvant/conversion CIT.
Affiliation(s)
- Nong Yang
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
- Lung Cancer and Gastrointestinal Unit, Department of Medical Oncology, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, China
- Hai-Lin Yue
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
- Bai-Hua Zhang
- Department of Thoracic Surgery, Hunan Cancer Hospital, Changsha, China
- Juan Chen
- Department of Pharmacy, Xiangya Hospital, Central South University, Changsha, China
- Qian Chu
- Department of Oncology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jian-Xin Wang
- Lung Cancer and Gastrointestinal Unit, Department of Medical Oncology, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, China
- Xiao-Ping Yu
- Department of Diagnostic Radiology, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, China
- Lian Jian
- Department of Diagnostic Radiology, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, China
- Ya-Wen Bin
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Si-Ye Liu
- Department of Diagnostic Radiology, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, China
- Jin Liu
- Lung Cancer and Gastrointestinal Unit, Department of Medical Oncology, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, China
- Liang Zeng
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
- Hai-Yan Yang
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
- Chun-Hua Zhou
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
- Wen-Juan Jiang
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
- Li Liu
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
- Yong-Chang Zhang
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
- Yi Xiong
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
- Zhan Wang
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
30
Miranda F, Choudhari V, Barone S, Anchling L, Hutin N, Gurgel M, Al Turkestani N, Yatabe M, Bianchi J, Aliaga-Del Castillo A, Zupelari-Gonçalves P, Edwards S, Garib D, Cevidanes L, Prieto J. Interpretable artificial intelligence for classification of alveolar bone defect in patients with cleft lip and palate. Sci Rep 2023; 13:15861. PMID: 37740091; PMCID: PMC10516946; DOI: 10.1038/s41598-023-43125-7.
Abstract
Cleft lip and/or palate (CLP) is the most common congenital craniofacial anomaly and requires bone grafting of the alveolar cleft. This study aimed to develop a novel classification algorithm to assess the severity of alveolar bone defects in patients with CLP using three-dimensional (3D) surface models, and to demonstrate, through an interpretable artificial intelligence (AI)-based algorithm, the decisions provided by the classifier. Cone-beam computed tomography scans of 194 patients with CLP were used to train and test the performance of an automatic classification of the severity of alveolar bone defect. The shape, height, and width of the alveolar bone defect were assessed in automatically segmented maxillary 3D surface models to determine the ground-truth classification index of its severity. The novel classifier algorithm renders the 3D surface models from different viewpoints and captures 2D image snapshots that are fed into a 2D convolutional neural network. An interpretable AI algorithm was developed that uses features from each view, aggregated via attention layers, to explain the classification. The precision, recall, and F1 score were 0.823, 0.816, and 0.817, respectively, with agreement ranging from 97.4% to 100% on the severity index within one group difference. The new classifier and interpretable AI algorithm classified the severity of alveolar bone defect morphology from 3D surface models of patients with CLP with satisfactory accuracy, while graphically displaying the features considered in the deep learning model's classification decision.
Affiliation(s)
- Felicia Miranda
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA.
- Department of Orthodontics, Bauru Dental School, University of São Paulo, Bauru, SP, Brazil
- Vishakha Choudhari
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Selene Barone
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Department of Health Science, School of Dentistry, Magna Graecia University of Catanzaro, Catanzaro, Italy
- Luc Anchling
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- CPE Lyon, Lyon, France
- Nathan Hutin
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- CPE Lyon, Lyon, France
- Marcela Gurgel
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Najla Al Turkestani
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Department of Restorative and Aesthetic Dentistry, Faculty of Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia
- Marilia Yatabe
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Jonas Bianchi
- Department of Orthodontics, University of the Pacific, Arthur A. Dugoni School of Dentistry, San Francisco, CA, USA
- Aron Aliaga-Del Castillo
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Paulo Zupelari-Gonçalves
- Department of Oral and Maxillofacial Surgery, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Sean Edwards
- Department of Oral and Maxillofacial Surgery, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Daniela Garib
- Department of Orthodontics, Bauru Dental School, University of São Paulo, Bauru, SP, Brazil
- Department of Orthodontics, Hospital for Rehabilitation of Craniofacial Anomalies, University of São Paulo, Bauru, SP, Brazil
- Lucia Cevidanes
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Juan Prieto
- Department of Psychiatry, University of North Carolina, Chapel Hill, NC, USA
31
Huynh BN, Groendahl AR, Tomic O, Liland KH, Knudtsen IS, Hoebers F, van Elmpt W, Malinen E, Dale E, Futsaether CM. Head and neck cancer treatment outcome prediction: a comparison between machine learning with conventional radiomics features and deep learning radiomics. Front Med (Lausanne) 2023; 10:1217037. [PMID: 37711738 PMCID: PMC10498924 DOI: 10.3389/fmed.2023.1217037] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2023] [Accepted: 07/07/2023] [Indexed: 09/16/2023] Open
Abstract
Background Radiomics can provide in-depth characterization of cancers for treatment outcome prediction. Conventional radiomics rely on extraction of image features within a pre-defined image region of interest (ROI), which are typically fed to a classification algorithm for prediction of a clinical endpoint. Deep learning radiomics allows for a simpler workflow where images can be used directly as input to a convolutional neural network (CNN) with or without a pre-defined ROI. Purpose The purpose of this study was to evaluate (i) conventional radiomics and (ii) deep learning radiomics for predicting overall survival (OS) and disease-free survival (DFS) for patients with head and neck squamous cell carcinoma (HNSCC) using pre-treatment 18F-fluorodeoxyglucose positron emission tomography (FDG PET) and computed tomography (CT) images. Materials and methods FDG PET/CT images and clinical data of patients with HNSCC treated with radio(chemo)therapy at Oslo University Hospital (OUS; n = 139) and Maastricht University Medical Center (MAASTRO; n = 99) were collected retrospectively. OUS data were used for model training and initial evaluation. MAASTRO data were used for external testing to assess cross-institutional generalizability. Models trained on clinical and/or conventional radiomics features, with or without feature selection, were compared to CNNs trained on PET/CT images with or without the gross tumor volume (GTV) included. Model performance was measured using accuracy, area under the receiver operating characteristic curve (AUC), the Matthews correlation coefficient (MCC), and the F1 score calculated for both classes separately. Results CNNs trained directly on images achieved the highest performance on external data for both endpoints. Adding both clinical and radiomics features to these image-based models increased performance further. Conventional radiomics including clinical data could achieve competitive performance.
However, feature selection on clinical and radiomics data led to overfitting and poor cross-institutional generalizability. CNNs without tumor and node contours achieved performance close to that of CNNs including contours. Conclusion High performance and cross-institutional generalizability can be achieved by combining clinical data, radiomics features, and medical images together with deep learning models. However, deep learning models trained on images without contours can achieve competitive performance and could see potential use as an initial screening tool for high-risk patients.
Affiliation(s)
- Bao Ngoc Huynh
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Oliver Tomic
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Kristian Hovde Liland
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Ingerid Skjei Knudtsen
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway
- Frank Hoebers
- Department of Radiation Oncology (MAASTRO), Maastricht University Medical Center, Maastricht, Netherlands
- GROW School for Oncology and Reproduction, Maastricht University Medical Center, Maastricht, Netherlands
- Wouter van Elmpt
- Department of Radiation Oncology (MAASTRO), Maastricht University Medical Center, Maastricht, Netherlands
- GROW School for Oncology and Reproduction, Maastricht University Medical Center, Maastricht, Netherlands
- Eirik Malinen
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway
- Department of Physics, University of Oslo, Oslo, Norway
- Einar Dale
- Department of Oncology, Oslo University Hospital, Oslo, Norway
32
Perni S, Lehmann LS, Bitterman DS. Patients should be informed when AI systems are used in clinical trials. Nat Med 2023; 29:1890-1891. [PMID: 37221381 DOI: 10.1038/s41591-023-02367-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Affiliation(s)
- Subha Perni
- Artificial Intelligence in Medicine Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- MD Anderson Cancer Center, Houston, TX, USA
- Lisa Soleymani Lehmann
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Harvard T.H. Chan School of Public Health, Harvard Medical School, Boston, MA, USA
- Danielle S Bitterman
- Artificial Intelligence in Medicine Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
33
Shamshad F, Khan S, Zamir SW, Khan MH, Hayat M, Khan FS, Fu H. Transformers in medical imaging: A survey. Med Image Anal 2023; 88:102802. [PMID: 37315483 DOI: 10.1016/j.media.2023.102802] [Citation(s) in RCA: 64] [Impact Index Per Article: 64.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 03/11/2023] [Accepted: 03/23/2023] [Indexed: 06/16/2023]
Abstract
Following unprecedented success on natural language tasks, Transformers have been successfully applied to several computer vision problems, achieving state-of-the-art results and prompting researchers to reconsider the supremacy of convolutional neural networks (CNNs) as de facto operators. Capitalizing on these advances in computer vision, the medical imaging field has also witnessed growing interest in Transformers, which can capture global context, in contrast to CNNs with local receptive fields. Inspired by this transition, in this survey we attempt to provide a comprehensive review of the applications of Transformers in medical imaging, covering various aspects ranging from recently proposed architectural designs to unsolved issues. Specifically, we survey the use of Transformers in medical image segmentation, detection, classification, restoration, synthesis, registration, clinical report generation, and other tasks. In particular, for each of these applications, we develop a taxonomy, identify application-specific challenges as well as provide insights to solve them, and highlight recent trends. Further, we provide a critical discussion of the field's current state as a whole, including the identification of key challenges, open problems, and promising future directions. We hope this survey will ignite further interest in the community and provide researchers with an up-to-date reference regarding applications of Transformer models in medical imaging. Finally, to cope with the rapid development in this field, we intend to regularly update the relevant latest papers and their open-source implementations at https://github.com/fahadshamshad/awesome-transformers-in-medical-imaging.
Affiliation(s)
- Fahad Shamshad
- MBZ University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Salman Khan
- MBZ University of Artificial Intelligence, Abu Dhabi, United Arab Emirates; CECS, Australian National University, Canberra ACT 0200, Australia
- Syed Waqas Zamir
- Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Munawar Hayat
- Faculty of IT, Monash University, Clayton VIC 3800, Australia
- Fahad Shahbaz Khan
- MBZ University of Artificial Intelligence, Abu Dhabi, United Arab Emirates; Computer Vision Laboratory, Linköping University, Sweden
- Huazhu Fu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
34
Chaddad A, Tan G, Liang X, Hassan L, Rathore S, Desrosiers C, Katib Y, Niazi T. Advancements in MRI-Based Radiomics and Artificial Intelligence for Prostate Cancer: A Comprehensive Review and Future Prospects. Cancers (Basel) 2023; 15:3839. [PMID: 37568655 PMCID: PMC10416937 DOI: 10.3390/cancers15153839] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2023] [Revised: 07/25/2023] [Accepted: 07/26/2023] [Indexed: 08/13/2023] Open
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has become a common technique for guiding biopsy and developing treatment plans for prostate lesions. While this technique is effective, non-invasive methods such as radiomics have gained popularity for extracting imaging features to develop predictive models for clinical tasks. The aim is to minimize invasive processes for improved management of prostate cancer (PCa). This study reviews recent research progress in MRI-based radiomics for PCa, including the radiomics pipeline and potential factors affecting personalized diagnosis. The integration of artificial intelligence (AI) with medical imaging is also discussed, in line with the development trend of radiogenomics and multi-omics. The survey highlights the need for more data from multiple institutions to avoid bias and generalize the predictive model. The AI-based radiomics model is considered a promising clinical tool with good prospects for application.
Affiliation(s)
- Ahmad Chaddad
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- The Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure (ETS), Montreal, QC H3C 1K3, Canada
- Guina Tan
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- Xiaojuan Liang
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- Lama Hassan
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- Christian Desrosiers
- The Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure (ETS), Montreal, QC H3C 1K3, Canada
- Yousef Katib
- Department of Radiology, Taibah University, Al Madinah 42361, Saudi Arabia
- Tamim Niazi
- Lady Davis Institute for Medical Research, McGill University, Montreal, QC H3T 1E2, Canada
35
Novak LL, Russell RG, Garvey K, Patel M, Thomas Craig KJ, Snowdon J, Miller B. Clinical use of artificial intelligence requires AI-capable organizations. JAMIA Open 2023; 6:ooad028. [PMID: 37152469 PMCID: PMC10155810 DOI: 10.1093/jamiaopen/ooad028] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 03/18/2023] [Accepted: 04/11/2023] [Indexed: 05/09/2023] Open
Abstract
Artificial intelligence-based algorithms are being widely implemented in health care, even as evidence is emerging of bias in their design, problems with implementation, and potential harm to patients. To achieve the promise of using AI-based tools to improve health, healthcare organizations will need to be AI-capable, with internal and external systems functioning in tandem to ensure the safe, ethical, and effective use of AI-based tools. Ideas are starting to emerge about the organizational routines, competencies, resources, and infrastructures that will be required for safe and effective deployment of AI in health care, but there has been little empirical research. Infrastructures that provide legal and regulatory guidance for managers, clinician competencies for the safe and effective use of AI-based tools, and learner-centric resources such as clear AI documentation and local health ecosystem impact reviews can help drive continuous improvement.
Affiliation(s)
- Laurie Lovett Novak
- Corresponding Author: Laurie Lovett Novak, PhD, MHSA, Department of Biomedical Informatics, Vanderbilt University Medical Center, 2525 West End Ave, Suite 1475, Nashville, TN 37203, USA
- Regina G Russell
- Department of Medical Education and Administration and Office of Undergraduate Medical Education, Vanderbilt University School of Medicine, Nashville, Tennessee, USA
- Kim Garvey
- Department of Anesthesiology and the Center for Advanced Mobile Healthcare Learning, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Mehool Patel
- Department of Internal Medicine, Northeastern Ohio Medical University (NEOMED), Rootstown, Ohio, USA
- Department of Internal Medicine, Western Reserve Hospital, Cuyahoga Falls, Ohio, USA
- Kelly Jean Thomas Craig
- Clinical Evidence Development, Aetna®, Medical Affairs CVS Health®, Wellesley, Massachusetts, USA
- Jane Snowdon
- Corporate Technical Strategy, IBM® Corporation, Yorktown Heights, New York, USA
- Bonnie Miller
- Department of Medical Education and Administration and Center for Advanced Mobile Healthcare Learning, Vanderbilt University Medical Center, Nashville, Tennessee, USA
36
Felder FN, Walsh SL. Exploring computer-based imaging analysis in interstitial lung disease: opportunities and challenges. ERJ Open Res 2023; 9:00145-2023. [PMID: 37404849 PMCID: PMC10316044 DOI: 10.1183/23120541.00145-2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Accepted: 05/03/2023] [Indexed: 07/06/2023] Open
Abstract
The advent of quantitative computed tomography (QCT) and artificial intelligence (AI) using high-resolution computed tomography data has revolutionised the way interstitial diseases are studied. These quantitative methods provide more accurate and precise results compared to prior semiquantitative methods, which were limited by human error such as interobserver disagreement or low reproducibility. The integration of QCT and AI and the development of digital biomarkers has facilitated not only diagnosis but also prognostication and prediction of disease behaviour, not just in idiopathic pulmonary fibrosis in which they were initially studied, but also in other fibrotic lung diseases. These tools provide reproducible, objective prognostic information which may facilitate clinical decision-making. However, despite the benefits of QCT and AI, there are still obstacles that need to be addressed. Important issues include optimal data management, data sharing and maintenance of data privacy. In addition, the development of explainable AI will be essential to develop trust within the medical community and facilitate implementation in routine clinical practice.
Affiliation(s)
- Simon L.F. Walsh
- National Heart and Lung Institute, Imperial College London, London, UK
37
Rezazade Mehrizi MH, Mol F, Peter M, Ranschaert E, Dos Santos DP, Shahidi R, Fatehi M, Dratsch T. The impact of AI suggestions on radiologists' decisions: a pilot study of explainability and attitudinal priming interventions in mammography examination. Sci Rep 2023; 13:9230. [PMID: 37286665 DOI: 10.1038/s41598-023-36435-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 06/03/2023] [Indexed: 06/09/2023] Open
Abstract
Various studies have shown that medical professionals are prone to follow the incorrect suggestions offered by algorithms, especially when they have limited inputs to interrogate and interpret such suggestions and when they have an attitude of relying on them. We examine the effect of correct and incorrect algorithmic suggestions on the diagnosis performance of radiologists when (1) they have no, partial, or extensive informational inputs for explaining the suggestions (study 1) and (2) they are primed to hold a positive, negative, ambivalent, or neutral attitude towards AI (study 2). Our analysis of 2760 decisions made by 92 radiologists conducting 15 mammography examinations shows that radiologists' diagnoses follow both incorrect and correct suggestions, despite variations in the explainability inputs and attitudinal priming interventions. We identify and explain various pathways through which radiologists navigate the decision process and arrive at correct or incorrect decisions. Overall, the findings of both studies show the limited effect of using explainability inputs and attitudinal priming for overcoming the influence of (incorrect) algorithmic suggestions.
Affiliation(s)
- Ferdinand Mol
- Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Marcel Peter
- Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Daniel Pinto Dos Santos
- Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Ramin Shahidi
- Bushehr University of Medical Sciences, Bushehr, Iran
- Thomas Dratsch
- Institute of Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
38
Ozcan BB, Patel BK, Banerjee I, Dogan BE. Artificial Intelligence in Breast Imaging: Challenges of Integration Into Clinical Practice. JOURNAL OF BREAST IMAGING 2023; 5:248-257. [PMID: 38416888 DOI: 10.1093/jbi/wbad007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Indexed: 03/01/2024]
Abstract
Artificial intelligence (AI) in breast imaging is a rapidly developing field with promising results. Despite the large number of recent publications in this field, unanswered questions have led to limited implementation of AI into daily clinical practice for breast radiologists. This paper provides an overview of the key limitations of AI in breast imaging including, but not limited to, limited numbers of FDA-approved algorithms and annotated data sets with histologic ground truth; concerns surrounding data privacy, security, algorithm transparency, and bias; and ethical issues. Ultimately, the successful implementation of AI into clinical care will require thoughtful action to address these challenges, transparency, and sharing of AI implementation workflows, limitations, and performance metrics within the breast imaging community and other end-users.
Affiliation(s)
- B Bersu Ozcan
- The University of Texas Southwestern Medical Center, Department of Radiology, Dallas, TX, USA
- Imon Banerjee
- Mayo Clinic, Department of Radiology, Scottsdale, AZ, USA
- Basak E Dogan
- The University of Texas Southwestern Medical Center, Department of Radiology, Dallas, TX, USA
39
Bobowicz M, Rygusik M, Buler J, Buler R, Ferlin M, Kwasigroch A, Szurowska E, Grochowski M. Attention-Based Deep Learning System for Classification of Breast Lesions-Multimodal, Weakly Supervised Approach. Cancers (Basel) 2023; 15:2704. [PMID: 37345041 DOI: 10.3390/cancers15102704] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Revised: 05/02/2023] [Accepted: 05/05/2023] [Indexed: 06/23/2023] Open
Abstract
Breast cancer is the most frequent female cancer, with a considerable disease burden and high mortality. Early diagnosis with screening mammography might be facilitated by automated systems supported by deep learning artificial intelligence. We propose a model based on a weakly supervised Clustering-constrained Attention Multiple Instance Learning (CLAM) classifier able to train effectively under data scarcity. We used a private dataset with 1174 non-cancer and 794 cancer images labelled at the image level with pathological ground truth confirmation. We used feature extractors (ResNet-18, ResNet-34, ResNet-50 and EfficientNet-B0) pre-trained on ImageNet. The best results were achieved with multimodal-view classification using both CC and MLO images simultaneously, resized by half, with a patch size of 224 px and an overlap of 0.25. It resulted in AUC-ROC = 0.896 ± 0.017, F1-score 81.8 ± 3.2, accuracy 81.6 ± 3.2, precision 82.4 ± 3.3, and recall 81.6 ± 3.2. Evaluation with the Chinese Mammography Database, with 5-fold cross-validation, patient-wise breakdowns, and transfer learning, resulted in AUC-ROC 0.848 ± 0.015, F1-score 78.6 ± 2.0, accuracy 78.4 ± 1.9, precision 78.8 ± 2.0, and recall 78.4 ± 1.9. The CLAM algorithm's attentional maps indicate the features most relevant to the algorithm in the images. Our approach was more effective than those reported in many other studies, allowing for some explainability and the identification of erroneous predictions based on the wrong premises.
Affiliation(s)
- Maciej Bobowicz
- 2nd Department of Radiology, Medical University of Gdansk, 80-214 Gdansk, Poland
- Marlena Rygusik
- 2nd Department of Radiology, Medical University of Gdansk, 80-214 Gdansk, Poland
- Jakub Buler
- Department of Intelligent Control Systems and Decision Support, Faculty of Electrical and Control Engineering, Gdansk University of Technology, 80-233 Gdansk, Poland
- Rafał Buler
- Department of Intelligent Control Systems and Decision Support, Faculty of Electrical and Control Engineering, Gdansk University of Technology, 80-233 Gdansk, Poland
- Maria Ferlin
- Department of Intelligent Control Systems and Decision Support, Faculty of Electrical and Control Engineering, Gdansk University of Technology, 80-233 Gdansk, Poland
- Arkadiusz Kwasigroch
- Department of Intelligent Control Systems and Decision Support, Faculty of Electrical and Control Engineering, Gdansk University of Technology, 80-233 Gdansk, Poland
- Edyta Szurowska
- 2nd Department of Radiology, Medical University of Gdansk, 80-214 Gdansk, Poland
- Michał Grochowski
- Department of Intelligent Control Systems and Decision Support, Faculty of Electrical and Control Engineering, Gdansk University of Technology, 80-233 Gdansk, Poland
40
Chapiro J. Explainable AI for Prostate MRI: Don't Trust, Verify. Radiology 2023; 307:e230574. [PMID: 37039689 PMCID: PMC10323286 DOI: 10.1148/radiol.230574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Revised: 03/10/2023] [Accepted: 03/14/2023] [Indexed: 04/12/2023]
Affiliation(s)
- Julius Chapiro
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 789 Howard Ave, CB363H, New Haven, CT 06519
41
Martin SA, Townend FJ, Barkhof F, Cole JH. Interpretable machine learning for dementia: A systematic review. Alzheimers Dement 2023; 19:2135-2149. [PMID: 36735865 PMCID: PMC10955773 DOI: 10.1002/alz.12948] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Revised: 12/05/2022] [Accepted: 12/20/2022] [Indexed: 02/05/2023]
Abstract
INTRODUCTION Machine learning research into automated dementia diagnosis is becoming increasingly popular but so far has had limited clinical impact. A key challenge is building robust and generalizable models that generate decisions that can be reliably explained. Some models are designed to be inherently "interpretable," whereas post hoc "explainability" methods can be used for other models. METHODS Here we sought to summarize the state-of-the-art of interpretable machine learning for dementia. RESULTS We identified 92 studies using PubMed, Web of Science, and Scopus. Studies demonstrate promising classification performance but vary in their validation procedures and reporting standards and rely heavily on popular data sets. DISCUSSION Future work should incorporate clinicians to validate explanation methods and make conclusive inferences about dementia-related disease pathology. Critically analyzing model explanations also requires an understanding of the interpretability methods themselves. Patient-specific explanations are also required to demonstrate the benefit of interpretable machine learning in clinical practice.
Affiliation(s)
- Sophie A. Martin
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Dementia Research Centre, Queen Square Institute of Neurology, University College London, London, UK
- Florence J. Townend
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Frederik Barkhof
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Dementia Research Centre, Queen Square Institute of Neurology, University College London, London, UK
- Amsterdam UMC, Department of Radiology & Nuclear Medicine, Vrije Universiteit, Amsterdam, Netherlands
- James H. Cole
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Dementia Research Centre, Queen Square Institute of Neurology, University College London, London, UK
42
Ursin F, Lindner F, Ropinski T, Salloch S, Timmermann C. Ebenen der Explizierbarkeit für medizinische künstliche Intelligenz: Was brauchen wir normativ und was können wir technisch erreichen? [Levels of explicability for medical artificial intelligence: What do we need normatively and what can we achieve technically?] Ethik Med 2023. [DOI: 10.1007/s00481-023-00761-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/05/2023]
Abstract
Definition of the problem
The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which levels of explicability are needed to obtain informed consent when utilizing medical AI?
Arguments
We proceed in five steps: First, we map the terms commonly associated with explicability as described in the ethics and computer science literature, i.e., disclosure, intelligibility, interpretability, and explainability. Second, we conduct a conceptual analysis of the ethical requirements for explicability when it comes to informed consent. Third, we distinguish hurdles for explicability in terms of epistemic and explanatory opacity. Fourth, this then allows us to conclude which level of explicability physicians must reach and what patients can expect. In a final step, we show how the identified levels of explicability can technically be met from the perspective of computer science. Throughout our work, we take diagnostic AI systems in radiology as an example.
Conclusion
We determined four levels of explicability that need to be distinguished for ethically defensible informed consent processes and showed how developers of medical AI can technically meet these requirements.
43
Rusche T, Wasserthal J, Breit HC, Fischer U, Guzman R, Fiehler J, Psychogios MN, Sporns PB. Machine Learning for Onset Prediction of Patients with Intracerebral Hemorrhage. J Clin Med 2023; 12:jcm12072631. [PMID: 37048712 PMCID: PMC10094957 DOI: 10.3390/jcm12072631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2023] [Revised: 03/13/2023] [Accepted: 03/30/2023] [Indexed: 04/03/2023] Open
Abstract
Objective: Intracerebral hemorrhage (ICH) has a high mortality and long-term morbidity and thus has a significant overall health–economic impact. Outcomes are especially poor if the exact onset is unknown, but reliable imaging-based methods for onset estimation have not been established. We hypothesized that onset prediction of patients with ICH using artificial intelligence (AI) may be more accurate than human readers. Material and Methods: A total of 7421 computed tomography (CT) datasets between January 2007–July 2021 from the University Hospital Basel with confirmed ICH were extracted and an ICH-segmentation algorithm as well as two classifiers (one with radiomics, one with convolutional neural networks) for onset estimation were trained. The classifiers were trained based on the gold standard of 644 datasets with a known onset of >1 and <48 h. The results of the classifiers were compared to the ratings of two radiologists. Results: Both the AI-based classifiers and the radiologists had poor discrimination of the known onsets, with a mean absolute error (MAE) of 9.77 h (95% CI (confidence interval) = 8.52–11.03) for the convolutional neural network (CNN), 9.96 h (8.68–11.32) for the radiomics model, 13.38 h (11.21–15.74) for rater 1 and 11.21 h (9.61–12.90) for rater 2, respectively. The results of the CNN and radiomics model were both not significantly different to the mean of the known onsets (p = 0.705 and p = 0.423). Conclusions: In our study, the discriminatory power of AI-based classifiers and human readers for onset estimation of patients with ICH was poor. This indicates that accurate AI-based onset estimation of patients with ICH based only on CT-data may be unlikely to change clinical decision making in the near future. Perhaps multimodal AI-based approaches could improve ICH onset prediction and should be considered in future studies.
Affiliation(s)
- Thilo Rusche
- Department of Neuroradiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, 4031 Basel, Switzerland
- Correspondence:
- Jakob Wasserthal
- Department of Neuroradiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, 4031 Basel, Switzerland
- Hanns-Christian Breit
- Department of Neuroradiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, 4031 Basel, Switzerland
- Urs Fischer
- Department of Neurology, University Hospital Basel, 4031 Basel, Switzerland
- Raphael Guzman
- Department of Neurosurgery, University Hospital Basel, 4031 Basel, Switzerland
- Jens Fiehler
- Department of Diagnostic and Interventional Neuroradiology, University Medical Center Hamburg-Eppendorf, 55131 Hamburg, Germany
- Marios-Nikos Psychogios
- Department of Neuroradiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, 4031 Basel, Switzerland
- Peter B. Sporns
- Department of Neuroradiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, 4031 Basel, Switzerland
- Department of Diagnostic and Interventional Neuroradiology, University Medical Center Hamburg-Eppendorf, 55131 Hamburg, Germany
- Department of Radiology and Neuroradiology, Stadtspital Zürich, 8063 Zürich, Switzerland
| |
44
Gorre N, Carranza E, Fuhrman J, Li H, Madduri RK, Giger M, El Naqa I. MIDRC CRP10 AI interface-an integrated tool for exploring, testing and visualization of AI models. Phys Med Biol 2023; 68:10.1088/1361-6560/acb754. [PMID: 36716497] [PMCID: PMC10155272] [DOI: 10.1088/1361-6560/acb754] [Received: 07/22/2022] [Accepted: 01/30/2023] [Indexed: 01/31/2023]
Abstract
Objective. Developing machine learning models (N Gorre et al 2023) for clinical applications from scratch can be a cumbersome task requiring varying levels of expertise. Seasoned developers and researchers often face incompatible frameworks and data-preparation issues. This is further complicated in diagnostic radiology and oncology applications, given the heterogeneous nature of the input data and the specialized task requirements. Our goal is to provide clinicians, researchers, and early AI developers with a modular, flexible, and user-friendly software tool that lets them explore, train, and test AI algorithms and interpret their model results. This latter step incorporates interpretability and explainability methods for visualizing performance and interpreting predictions across the different neural network layers of a deep learning algorithm. Approach. To demonstrate the proposed tool, we developed the CRP10 AI Application Interface (CRP10AII) as part of the MIDRC consortium. CRP10AII is based on the Django web framework in Python. Combined with a data-management platform such as the Gen3 data commons, CRP10AII provides a comprehensive yet easy-to-use machine/deep learning analytics tool. It allows users to test, visualize, and interpret how and why a deep learning model is performing. The major highlight of CRP10AII is its capability for visualization and interpretation of otherwise black-box AI algorithms. Results. CRP10AII provides many convenient features for model building and evaluation, including: (1) querying and acquiring data for the specific application (e.g. classification, segmentation) from the data commons platform (Gen3 here); (2) training AI models from scratch or using pre-trained models (e.g. VGGNet, AlexNet, BERT) for transfer learning, and testing model predictions, performance assessment, and receiver operating characteristic curve evaluation; (3) interpreting AI model predictions using methods such as Shapley values and LIME; and (4) visualizing model learning through heatmaps and activation maps of individual layers of the neural network. Significance. Inexperienced users can swiftly pre-process data, build and train AI models on their own use cases, and further visualize and explore these models as part of this pipeline, all in an end-to-end manner. CRP10AII will be provided as an open-source tool, and we expect to continue developing it based on users' feedback.
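The interpretability methods CRP10AII exposes (Shapley values, LIME, activation maps) are beyond a short sketch, but the core idea of attribution, scoring the drop in model output when an input region is occluded, can be illustrated with a deliberately tiny stand-in; the `model_score` function and its weights are hypothetical, not part of CRP10AII.

```python
def model_score(x):
    # Hypothetical linear scorer standing in for a trained network.
    weights = [0.1, 0.8, 0.05, 0.05]
    return sum(w * v for w, v in zip(weights, x))

def occlusion_attribution(x, baseline=0.0):
    """Gradient-free attribution: output drop when each input is masked."""
    base = model_score(x)
    attributions = []
    for i in range(len(x)):
        masked = list(x)
        masked[i] = baseline  # occlude one feature/region
        attributions.append(base - model_score(masked))
    return attributions

x = [1.0, 1.0, 1.0, 1.0]
attr = occlusion_attribution(x)
print(attr)  # the largest drop marks the most influential input
```

Tools like SHAP and LIME refine this idea with principled weighting and local surrogate models, but the masking-and-rescoring loop is the common intuition.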
Affiliation(s)
- Naveena Gorre
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, United States of America
- Eduardo Carranza
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, United States of America
- Jordan Fuhrman
- Department of Radiology, University of Chicago, Chicago, IL, United States of America
- Hui Li
- Department of Radiology, University of Chicago, Chicago, IL, United States of America
- Ravi K Madduri
- Data Science and Learning Division, Argonne National Laboratory, Lemont, IL, United States of America
- University of Chicago Consortium for Advanced Science and Engineering, Chicago, IL, United States of America
- Maryellen Giger
- Department of Radiology, University of Chicago, Chicago, IL, United States of America
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, United States of America
45
Vega C, Schneider R, Satagopam V. Analysis: Flawed Datasets of Monkeypox Skin Images. J Med Syst 2023; 47:37. [PMID: 36933065] [PMCID: PMC10024024] [DOI: 10.1007/s10916-023-01928-1] [Received: 12/23/2022] [Accepted: 02/26/2023] [Indexed: 03/19/2023]
Abstract
The self-proclaimed first publicly available dataset of Monkeypox skin images consists of medically irrelevant images extracted from Google and photography repositories through web scraping. Yet this did not stop other researchers from employing it to build machine learning (ML) solutions aimed at computer-aided diagnosis of Monkeypox and other viral infections presenting skin lesions, nor did it stop reviewers or editors from accepting these subsequent works in peer-reviewed journals. Several of these works claimed extraordinary performance in the classification of Monkeypox, Chickenpox, and Measles using ML and the aforementioned dataset. In this work, we analyse the initiator work that catalysed the development of several ML solutions and whose popularity continues to grow, and we provide a rebuttal experiment that showcases the risks of such methodologies, showing that the ML solutions do not necessarily derive their performance from features relevant to the diseases at issue.
Affiliation(s)
- Carlos Vega
- Bioinformatics Core, University of Luxembourg, Luxembourg Centre for Systems Biomedicine, Av. du Swing 6, Belvaux, 4367, Luxembourg
- Reinhard Schneider
- Bioinformatics Core, University of Luxembourg, Luxembourg Centre for Systems Biomedicine, Av. du Swing 6, Belvaux, 4367, Luxembourg
- Venkata Satagopam
- Bioinformatics Core, University of Luxembourg, Luxembourg Centre for Systems Biomedicine, Av. du Swing 6, Belvaux, 4367, Luxembourg
46
Chen YS, Luo SD, Lee CH, Lin JF, Lin TY, Ko SF, Yu CC, Chiang PL, Wang CK, Chiu IM, Huang YT, Tai YF, Chiang PT, Lin WC. Improving detection of impacted animal bones on lateral neck radiograph using a deep learning artificial intelligence algorithm. Insights Imaging 2023; 14:43. [PMID: 36929090] [PMCID: PMC10020388] [DOI: 10.1186/s13244-023-01385-x] [Received: 10/16/2022] [Accepted: 02/04/2023] [Indexed: 03/18/2023]
Abstract
OBJECTIVE We aimed to develop a deep learning artificial intelligence (AI) algorithm to detect impacted animal bones on lateral neck radiographs and to assess its effectiveness for improving the interpretation of lateral neck radiographs. METHODS Lateral neck radiographs were retrospectively collected for patients with animal bone impaction between January 2010 and March 2020. Radiographs were then separated into training, validation, and testing sets. A total of 1733 lateral neck radiographs were used to develop the deep learning algorithm. The testing set was assessed for the stand-alone deep learning AI algorithm and for human readers (radiologists, radiology residents, emergency physicians, and ENT physicians) with and without the aid of the AI algorithm. Another radiograph cohort, collected from April 1, 2020, to June 30, 2020, was analyzed to simulate clinical application by comparing the deep learning AI algorithm with radiologists' reports. RESULTS In the testing set, the sensitivity, specificity, and accuracy of the AI model were 96%, 90%, and 93%, respectively. Physicians of all subspecialties achieved higher accuracy with AI-assisted reading than without it. In the simulation set, among the 20 cases positive for animal bones, the AI model accurately identified 3 more cases than the radiologists' reports. CONCLUSION Our deep learning AI model demonstrated a higher sensitivity for detection of animal bone impaction on lateral neck radiographs without an increased false-positive rate. The application of this model in a clinical setting may effectively reduce time to diagnosis, accelerate workflow, and decrease the use of CT.
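The test-set metrics quoted above follow directly from confusion-matrix counts; the counts below are illustrative only, chosen to reproduce the reported percentages rather than taken from the study.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts matching the reported 96%/90%/93%.
sens, spec, acc = diagnostic_metrics(tp=96, fn=4, tn=90, fp=10)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} accuracy={acc:.0%}")
```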
Affiliation(s)
- Yueh-Sheng Chen
- Department of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, 123 Ta-Pei Road, Niao-Sung, Kaohsiung, 83305, Taiwan
- Sheng-Dean Luo
- Department of Otolaryngology, Kaohsiung Chang Gung Memorial Hospital, Kaohsiung, Taiwan
- Chi-Hsun Lee
- Next E-Commerce Technology Co., LTD., Taichung, Taiwan
- Jian-Feng Lin
- Next E-Commerce Technology Co., LTD., Taichung, Taiwan
- Te-Yen Lin
- Department of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, 123 Ta-Pei Road, Niao-Sung, Kaohsiung, 83305, Taiwan
- Sheung-Fat Ko
- Department of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, 123 Ta-Pei Road, Niao-Sung, Kaohsiung, 83305, Taiwan
- Chiun-Chieh Yu
- Department of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, 123 Ta-Pei Road, Niao-Sung, Kaohsiung, 83305, Taiwan
- Pi-Ling Chiang
- Department of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, 123 Ta-Pei Road, Niao-Sung, Kaohsiung, 83305, Taiwan
- Cheng-Kang Wang
- Department of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, 123 Ta-Pei Road, Niao-Sung, Kaohsiung, 83305, Taiwan
- I-Min Chiu
- Department of Emergency Medicine, Kaohsiung Chang Gung Memorial Hospital, Kaohsiung, Taiwan
- Yii-Ting Huang
- Department of Emergency Medicine, Kaohsiung Chang Gung Memorial Hospital, Kaohsiung, Taiwan
- Yi-Fan Tai
- Department of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, 123 Ta-Pei Road, Niao-Sung, Kaohsiung, 83305, Taiwan
- Po-Teng Chiang
- Department of Otolaryngology, Kaohsiung Chang Gung Memorial Hospital, Kaohsiung, Taiwan
- Wei-Che Lin
- Department of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, 123 Ta-Pei Road, Niao-Sung, Kaohsiung, 83305, Taiwan
- Department of Radiology, Jen Ai Chang Gung Health, Dali Branch, Taichung, Taiwan
47
Liu CF, Li J, Kim G, Miller MI, Hillis AE, Faria AV. Automatic comprehensive ASPECTS reports in clinical acute stroke MRIs. Sci Rep 2023; 13:3784. [PMID: 36882475] [PMCID: PMC9992659] [DOI: 10.1038/s41598-023-30242-6] [Received: 07/26/2022] [Accepted: 02/20/2023] [Indexed: 03/09/2023]
Abstract
The Alberta Stroke Program Early CT Score (ASPECTS) is a simple visual system to assess the extent and location of the ischemic stroke core. The capability of ASPECTS for selecting patients' treatment, however, is affected by variability in human evaluation. In this study, we developed a fully automatic system to calculate ASPECTS comparably with consensus expert readings. Our system was trained on 400 clinical diffusion-weighted images of patients with acute infarcts and evaluated with an external testing set of 100 cases. The models are interpretable, and the results are comprehensive, showing the features that lead to the classification. This system adds to our automated pipeline for acute stroke detection, segmentation, and quantification in MRIs (ADS), which outputs digital infarct masks and the proportion of diverse brain regions injured, in addition to the predicted ASPECTS, the prediction probability, and the explanatory features. ADS is public, free, and accessible to non-experts, has minimal computational requirements, and runs in real time on local CPUs with a single command line, fulfilling the conditions for large-scale, reproducible clinical and translational research.
Affiliation(s)
- Chin-Fu Liu
- Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Jintong Li
- Department of Physics, Johns Hopkins University, Baltimore, MD, USA
- Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA
- Ganghyun Kim
- Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA
- Michael I Miller
- Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Argye E Hillis
- Department of Neurology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Department of Physical Medicine and Rehabilitation, and Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Andreia V Faria
- Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
48
Jiang Y, Wang H, Sun X, Li C, Wu T. Evaluation of Chinese populational exposure to environmental electromagnetic field based on stochastic dosimetry and parametric human modelling. Environ Sci Pollut Res Int 2023; 30:40445-40460. [PMID: 36609755] [DOI: 10.1007/s11356-023-25153-y] [Received: 09/27/2022] [Accepted: 01/02/2023] [Indexed: 06/17/2023]
Abstract
This study aimed to estimate the distribution of the whole-body averaged specific absorption rate (WBSAR) from several measurable physique parameters for the Chinese adult population exposed to environmental electromagnetic fields (EMFs) at current wireless communication frequencies, and to discuss the effects of these physique parameters on the frequency-dependent dosimetric results. The physique distribution of Chinese adults was obtained from the National Physical Fitness and Health Database, comprising 81,490 adult samples. The number of physique parameters used to construct the surrogate model was reduced to three via mutual information analysis. A stochastic method with 40 deterministic simulations was used to generate frequency-dependent and gender-specific surrogate models for WBSAR via polynomial chaos expansion. In the simulations, we constructed anatomically correct models conforming to the targeted physique parameters via a deformable human modelling technique based on deep learning from an image database of 767 Chinese adults. Thereafter, we analysed the sensitivity of the physique parameters to WBSAR by covariance-based Sobol decomposition. The results indicated that the generated models were consistent with the targeted physique parameters. The estimated dosimetric results were validated using finite-difference time-domain simulations (the error was <6% for WBSAR across all investigated frequencies). The novelty of the study is twofold: it demonstrates the feasibility of estimating individual WBSAR from a limited number of physique parameters with the aid of surrogate modelling, and it presents the population-based distribution of WBSAR in Chinese adults for the first time. The results also indicated that different combinations of physique parameters, depending on gender and frequency, significantly influenced the WBSAR, although the general conservativeness of the guidelines of the International Commission on Non-Ionizing Radiation Protection was confirmed in the surveyed population.
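In its simplest spirit, a surrogate model of the kind described above replaces expensive simulations with a polynomial fitted to a small number of deterministic runs. The one-variable sketch below uses ordinary least squares on synthetic "simulation" outputs rather than a true orthogonal polynomial-chaos basis, and is only meant to convey the surrogate idea, not the paper's method.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ c0 + c1*x + c2*x^2 via normal equations."""
    basis = [lambda x: 1.0, lambda x: x, lambda x: x * x]
    A = [[sum(bi(x) * bj(x) for x in xs) for bj in basis] for bi in basis]
    b = [sum(bi(x) * y for x, y in zip(xs, ys)) for bi in basis]
    n = len(b)
    # Gaussian elimination with partial pivoting.
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    # Back substitution.
    coeffs = [0.0] * n
    for k in reversed(range(n)):
        coeffs[k] = (b[k] - sum(A[k][j] * coeffs[j] for j in range(k + 1, n))) / A[k][k]
    return coeffs

# A handful of "deterministic simulation" outputs (synthetic quadratic data).
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [2.0 + 0.3 * x - 0.1 * x * x for x in xs]
c0, c1, c2 = fit_quadratic(xs, ys)

def surrogate(x):
    # Cheap stand-in queried instead of re-running the simulation.
    return c0 + c1 * x + c2 * x * x

print(surrogate(1.75))
```

A real polynomial chaos expansion would use an orthogonal basis matched to the input distributions (e.g. Hermite polynomials for Gaussian inputs) and multiple physique parameters, but the fit-then-query workflow is the same.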
Affiliation(s)
- Yuwei Jiang
- China Academy of Information and Communications Technology, No. 52, Huayuan Bei Road, Beijing, 100191, China
- Hongkai Wang
- School of Biomedical Engineering, Dalian University of Technology, Dalian, 116024, China
- Xiaobang Sun
- School of Biomedical Engineering, Dalian University of Technology, Dalian, 116024, China
- Faculty of Information Technology, University of Jyväskylä, 40014, Jyväskylä, Finland
- Congsheng Li
- China Academy of Information and Communications Technology, No. 52, Huayuan Bei Road, Beijing, 100191, China
- Tongning Wu
- China Academy of Information and Communications Technology, No. 52, Huayuan Bei Road, Beijing, 100191, China
49
Gurevich E, El Hassan B, El Morr C. Equity within AI systems: What can health leaders expect? Healthc Manage Forum 2023; 36:119-124. [PMID: 36226507] [PMCID: PMC9976641] [DOI: 10.1177/08404704221125368] [Indexed: 11/17/2022]
Abstract
Artificial Intelligence (AI) for health has great potential; it has already proven successful in enhancing patient outcomes, facilitating professional work, and benefiting administration. However, AI presents challenges related to health equity, defined as the opportunity for people to reach their fullest health potential. This article discusses the opportunities and challenges that AI presents in health and examines ways in which AI-related inequities can be mitigated.
Affiliation(s)
- Christo El Morr
- York University, Toronto, Ontario, Canada
50
Nakagawa K, Moukheiber L, Celi LA, Patel M, Mahmood F, Gondim D, Hogarth M, Levenson R. AI in Pathology: What could possibly go wrong? Semin Diagn Pathol 2023; 40:100-108. [PMID: 36882343] [DOI: 10.1053/j.semdp.2023.02.006] [Received: 02/07/2023] [Revised: 02/25/2023] [Accepted: 02/26/2023] [Indexed: 03/05/2023]
Abstract
The field of medicine is undergoing rapid digital transformation. Pathologists are now striving to digitize their data, workflows, and interpretations, assisted by the enabling development of whole-slide imaging. Going digital means that the analog process of human diagnosis can be augmented or even replaced by rapidly evolving AI approaches, which are just now entering into clinical practice. But with such progress comes challenges that reflect a variety of stressors, including the impact of unrepresentative training data with accompanying implicit bias, data privacy concerns, and fragility of algorithm performance. Beyond such core digital aspects, considerations arise related to difficulties presented by changing disease presentations, diagnostic approaches, and therapeutic options. While some tools such as data federation can help with broadening data diversity while preserving expertise and local control, they may not be the full answer to some of these issues. The impact of AI in pathology on the field's human practitioners is still very much unknown: installation of unconscious bias and deference to AI guidance need to be understood and addressed. If AI is widely adopted, it may remove many inefficiencies in daily practice and compensate for staff shortages. It may also cause practitioner deskilling, dethrilling, and burnout. We discuss the technological, clinical, legal, and sociological factors that will influence the adoption of AI in pathology, and its eventual impact for good or ill.
Affiliation(s)
- Leo A Celi
- Massachusetts Institute of Technology, Cambridge, MA