1
Popic D, Marinovich ML, Houssami N, Hall J, Carter SM. How should artificial intelligence be used in breast screening? Women's reasoning about workflow options. PLoS One 2025; 20:e0323528. doi: 10.1371/journal.pone.0323528. PMID: 40446203; PMCID: PMC12124851.
Abstract
Studies show that breast screening participants are open to artificial intelligence (AI) in breast screening, but hold concerns about AI performance, governance, equitable access, and dependence on technology. Little is known of consumers' views on how AI should be used in breast screening practice. Our study aims to determine what matters most to women regarding AI use in the workflow of publicly funded breast screening programs, and how women choose between workflow options. We recruited forty women of screening age to learn about AI, the Australian breast screening program, and four possible workflows that include AI - one where AI works alone, and three different combinations of humans and AI. Participants then joined one of eight 90-minute dialogue groups to discuss their normative judgements on workflow options. Women proposed four conditions on AI deployment: preserving human control, evidence to assure AI performance, time to become familiar with AI, and clearly justifying the need for implementation. These informed women's unified rejection of AI working alone, and divided preferences across the other three workflows, as they traded off workflow attributes. Current evidence on AI performance convinced some women, but not others. Most women believed humans mitigate risk the best, so workflows should continue to be designed around them. Public breast screening services are trusted and valued by women, so significant changes require careful attention to outcomes relevant to women. Our results - women's detailed judgements on workflow design options - are new to the research literature. We conclude that women expect that AI only be deployed to do tasks it can do well, only where necessary, and only to fill gaps that radiologists cannot meet. Advancements in AI accuracy alone are unlikely to influence all women to accept AI making final decisions, if clinicians are available to perform the same task.
Affiliation(s)
- Diana Popic
- Australian Centre for Health Engagement Evidence and Values, School of Social Sciences, Faculty of Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, New South Wales, Australia
- M. Luke Marinovich
- The Daffodil Centre, The University of Sydney, a Joint Venture with Cancer Council NSW, Sydney, New South Wales, Australia
- Nehmat Houssami
- The Daffodil Centre, The University of Sydney, a Joint Venture with Cancer Council NSW, Sydney, New South Wales, Australia
- School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia
- Julie Hall
- Australian Centre for Health Engagement Evidence and Values, School of Social Sciences, Faculty of Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, New South Wales, Australia
- Stacy M. Carter
- Australian Centre for Health Engagement Evidence and Values, School of Social Sciences, Faculty of Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, New South Wales, Australia
2
Woode ME, De Silva Perera U, Degeling C, Aquino YSJ, Houssami N, Carter SM, Chen G. Preferences for the Use of Artificial Intelligence for Breast Cancer Screening in Australia: A Discrete Choice Experiment. The Patient 2025. doi: 10.1007/s40271-025-00742-w. PMID: 40347323.
Abstract
BACKGROUND Breast cancer screening is considered an effective early detection strategy. Artificial intelligence (AI) may both offer benefits and create risks for breast screening programmes. To use AI in health screening services, the views and expectations of consumers are critical. This study examined Australian women's preferences regarding AI use in breast cancer screening, and the impact of information on those preferences, using discrete choice experiments. METHODS The experiment presented two alternative screening services based on seven attributes (reading method, screening sensitivity, screening specificity, time between screening and receiving results, supporting evidence, fair representation, and who should be held accountable) to 2063 women aged between 40 and 74 years recruited from an online panel. Participants were randomised into two arms. Both received standard information on AI use in breast screening, but one arm received additional information on its potential benefits. Preferences for hypothetical breast cancer screening services were modelled using a random parameter logit model. Relative attribute importance and uptake rates were estimated. RESULTS Participants preferred mixed reading (radiologist + AI system) over the other two reading methods. They showed a strong preference for fewer missed cases, an attribute with high relative importance. Fewer false positives and a shorter waiting time for results were also preferred. Preference for mixed reading was significantly stronger, relative to two radiologists, when additional information on AI was provided, highlighting the impact of information. CONCLUSIONS This study revealed Australian women's preferences for the use of AI-driven breast cancer screening services. Results generally suggest women are open to their mammograms being read by both a radiologist and an AI-based system under certain conditions.
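As a sketch of the modelling approach this abstract names: in a discrete choice experiment, each hypothetical screening service is scored by a weighted sum of its attribute levels, and logit choice probabilities follow; a random parameter logit additionally lets the weights vary across respondents. All weights and option definitions below are invented for illustration, not estimates from the study.

```python
import math

# Illustrative attribute weights (part-worth utilities). Made-up numbers,
# NOT the study's estimates.
WEIGHTS = {
    "mixed_reading": 0.6,     # 1 if radiologist + AI read the mammogram
    "missed_cases": -1.2,     # per missed cancer (per 1000 screens)
    "false_positives": -0.4,  # per false positive (per 1000 screens)
    "wait_weeks": -0.2,       # weeks until results
}

def utility(option):
    """Linear utility: sum of weight * attribute level."""
    return sum(WEIGHTS[k] * v for k, v in option.items())

def choice_probabilities(options):
    """Logit choice rule: P_i = exp(U_i) / sum_j exp(U_j)."""
    exp_u = [math.exp(utility(o)) for o in options]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# Two hypothetical screening services from one choice task.
service_a = {"mixed_reading": 1, "missed_cases": 1, "false_positives": 2, "wait_weeks": 2}
service_b = {"mixed_reading": 0, "missed_cases": 2, "false_positives": 1, "wait_weeks": 1}
print(choice_probabilities([service_a, service_b]))
```

Under these made-up weights, service A (mixed reading, fewer missed cases) is chosen about 77% of the time; estimation in a study like this fits such weights to observed choices rather than assuming them.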
Affiliation(s)
- Maame Esi Woode
- Centre for Health Economics, Monash Business School, Monash University, 900 Dandenong Road, East Caulfield, VIC, 3145, Australia
- Monash Data Futures Research Institute, Monash University, East Caulfield, Australia
- Udeni De Silva Perera
- Centre for Health Economics, Monash Business School, Monash University, 900 Dandenong Road, East Caulfield, VIC, 3145, Australia
- School of Psychology, Faculty of Health, Deakin University, Burwood, Australia
- Chris Degeling
- Australian Centre for Health Engagement, Evidence and Values, School of Social Sciences, University of Wollongong, Wollongong, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Social Sciences, University of Wollongong, Wollongong, Australia
- Nehmat Houssami
- Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, Australia
- The Daffodil Centre, The University of Sydney, a Joint Venture with Cancer Council, Kings Cross, Sydney, Australia
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Social Sciences, University of Wollongong, Wollongong, Australia
- Gang Chen
- Centre for Health Economics, Monash Business School, Monash University, 900 Dandenong Road, East Caulfield, VIC, 3145, Australia
- Cancer Health Services Research, Collaborative Centre for Genomic Cancer Medicine and Centre for Health Policy, University of Melbourne, Melbourne, Australia
- Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
3
Ozcan BB, Dogan BE, Xi Y, Knippa EE. Patient Perception of Artificial Intelligence Use in Interpretation of Screening Mammograms: A Survey Study. Radiol Imaging Cancer 2025; 7:e240290. doi: 10.1148/rycan.240290. PMID: 40249272.
Abstract
Purpose To assess patient perceptions of artificial intelligence (AI) use in the interpretation of screening mammograms. Materials and Methods In a prospective, institutional review board-approved study, all patients undergoing mammography screening at the authors' institution between February 2023 and August 2023 were offered a 29-question survey. Age, race and ethnicity, education, income level, and history of breast cancer and biopsy were collected. Univariable and multivariable logistic regression analyses were used to identify the independent factors associated with participants' acceptance of AI use. Results Of the 518 participants, the majority were between the ages of 40 and 69 years (377 of 518, 72.8%), at least college graduates (347 of 518, 67.0%), and non-Hispanic White (262 of 518, 50.6%). Participant-reported knowledge of AI was none or minimal in 76.5% (396 of 518). Stand-alone AI interpretation was accepted by 4.44% (23 of 518), whereas 71.0% (368 of 518) preferred AI to be used as a second reader. After an AI-reported abnormal screening result, 88.9% (319 of 359) of participants would request radiologist review, versus 51.3% (184 of 359) who would request AI review of a radiologist recall (P < .001). In cases of discrepancy, a higher proportion of participants would undergo diagnostic examination for radiologist recalls than for AI recalls (94.2% [419 of 445] vs 92.6% [412 of 445]; P = .20). Higher education was associated with higher AI acceptance (odds ratio [OR] 2.05, 95% CI: 1.31, 3.20; P = .002). Race was associated with higher concern for bias in Hispanic versus non-Hispanic White participants (OR 3.32, 95% CI: 1.15, 9.61; P = .005) and non-Hispanic Black versus non-Hispanic White participants (OR 4.31, 95% CI: 1.50, 12.39; P = .005). Conclusion AI use as a second reader of screening mammograms was accepted by participants. Participants' race and education level were significantly associated with AI acceptance.
Keywords: Breast, Mammography, Artificial Intelligence. Supplemental material is available for this article. Published under a CC BY 4.0 license.
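The odds ratios above are derived from logistic-regression coefficients. A minimal sketch of that conversion follows; the coefficient and standard error are illustrative values chosen to roughly reproduce the reported education effect, not the study's data.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Turn a logistic-regression coefficient and its standard error into
    an odds ratio with a (default 95%) confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative values only, roughly matching the reported education effect
# (OR 2.05, 95% CI: 1.31, 3.20).
or_, lo, hi = odds_ratio_ci(beta=0.718, se=0.227)
print(f"OR {or_:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```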
Affiliation(s)
- B Bersu Ozcan
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, MC 8896, Dallas, TX 75390-8896
- Basak E Dogan
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, MC 8896, Dallas, TX 75390-8896
- Yin Xi
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, MC 8896, Dallas, TX 75390-8896
- Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX
- Emily E Knippa
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, MC 8896, Dallas, TX 75390-8896
4
Pearce A, Carter S, Frazer HML, Houssami N, Macheras-Magias M, Webb G, Marinovich ML. Implementing artificial intelligence in breast cancer screening: Women's preferences. Cancer 2025; 131:e35859. doi: 10.1002/cncr.35859. PMID: 40262029; PMCID: PMC12013981.
Abstract
BACKGROUND Artificial intelligence (AI) could improve accuracy and efficiency of breast cancer screening. However, many women distrust AI in health care, potentially jeopardizing breast cancer screening participation rates. The aim was to quantify community preferences for models of AI implementation within breast cancer screening. METHODS An online discrete choice experiment survey of people eligible for breast cancer screening aged 40 to 74 years in Australia. Respondents answered 10 questions where they chose between two screening options created by an experimental design. Each screening option described the role of AI (supplementing current practice, replacing one radiologist, replacing both radiologists, or triaging), and the AI accuracy, ownership, representativeness, privacy, and waiting time. Analysis included conditional and latent class models, willingness-to-pay, and predicted screening uptake. RESULTS The 802 participants preferred screening where AI was more accurate, Australian owned, more representative and had shorter waiting time for results (all p < .001). There were strong preferences (p < .001) against AI alone or as triage. Three patterns of preferences emerged: positive about AI if accuracy improves (40% of sample), strongly against AI (42%), and concerned about AI (18%). Participants were willing to accept AI replacing one human reader if their results were available 10 days faster than current practice but would need results 21 days faster for AI as triage. Implementing AI inconsistent with community preferences could reduce participation by up to 22%.
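The "days faster" figures above are marginal rates of substitution: the waiting-time gain that exactly offsets the disutility of a workflow change, computed as a ratio of logit coefficients. The coefficients below are invented solely to show the arithmetic (chosen so the ratio lands on the reported 10 days); they are not the study's estimates.

```python
# Invented logit coefficients (disutilities), NOT estimates from the study.
beta_ai_replaces_one_reader = -0.50  # utility change when AI replaces one radiologist
beta_wait_per_day = -0.05            # utility change per extra day waiting for results

# Marginal rate of substitution: days of faster results that exactly
# offset the disutility of the workflow change.
days_faster_needed = beta_ai_replaces_one_reader / beta_wait_per_day
print(days_faster_needed)  # 10.0
```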
Affiliation(s)
- Alison Pearce
- The Daffodil Centre, The University of Sydney, a Joint Venture with Cancer Council New South Wales, Sydney, New South Wales, Australia
- Sydney School of Public Health, The University of Sydney, Sydney, New South Wales, Australia
- Stacy Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
- Helen ML Frazer
- St Vincent's Hospital Melbourne, Fitzroy, Victoria, Australia
- BreastScreen Victoria, Carlton, Victoria, Australia
- Nehmat Houssami
- The Daffodil Centre, The University of Sydney, a Joint Venture with Cancer Council New South Wales, Sydney, New South Wales, Australia
- Sydney School of Public Health, The University of Sydney, Sydney, New South Wales, Australia
- Mary Macheras-Magias
- Seat at the Table representative, Breast Cancer Network Australia, Camberwell, Victoria, Australia
- Genevieve Webb
- Health Consumers New South Wales, Sydney, New South Wales, Australia
- M. Luke Marinovich
- The Daffodil Centre, The University of Sydney, a Joint Venture with Cancer Council New South Wales, Sydney, New South Wales, Australia
- Sydney School of Public Health, The University of Sydney, Sydney, New South Wales, Australia
5
Ye H. What Kind of Healthcare System Do We Need in the Era of Artificial Intelligence? J Gen Intern Med 2025. doi: 10.1007/s11606-025-09512-8. PMID: 40229606.
Affiliation(s)
- Hongnan Ye
- Beijing Alumni Association of China Medical University, Room 106, Building 3, Jindian Garden, No.9 Wenhuiyuan North Road, Haidian District, Beijing, 100000, China.
6
Grimm LJ. Radiologists Were Wrong to Mistrust the Machines. Radiology 2025; 314:e250402. doi: 10.1148/radiol.250402. PMID: 40100025.
Affiliation(s)
- Lars J Grimm
- Department of Radiology, Duke University Medical Center, Box 3808, Durham, NC 27710
7
Fransen SJ, Kwee TC, Rouw D, Roest C, van Lohuizen QY, Simonis FFJ, van Leeuwen PJ, Heijmink S, Ongena YP, Haan M, Yakar D. Patient perspectives on the use of artificial intelligence in prostate cancer diagnosis on MRI. Eur Radiol 2025; 35:769-775. doi: 10.1007/s00330-024-11012-y. PMID: 39143247; PMCID: PMC11782406.
Abstract
OBJECTIVES This study investigated patients' acceptance of artificial intelligence (AI) for diagnosing prostate cancer (PCa) on MRI scans and the factors influencing their trust in AI diagnoses. MATERIALS AND METHODS A prospective, multicenter study was conducted between January and November 2023. Patients undergoing prostate MRI were surveyed about their opinions on hypothetical AI assessment of their MRI scans. The questionnaire included nine items: four on hypothetical scenarios combining AI and the radiologist, two on trust in the diagnosis, and three on accountability for misdiagnosis. Relationships between the items and independent variables were assessed using multivariate analysis. RESULTS A total of 212 patients with suspected PCa undergoing prostate MRI were included. The majority preferred AI involvement in their PCa diagnosis alongside a radiologist, with 91% agreeing with AI as the primary reader and 79% as the secondary reader. If AI reached a high-certainty diagnosis, 15% of the respondents would accept it as the sole decision-maker. Autonomous AI that outperformed radiologists would be accepted by 52%. More highly educated persons tended to accept AI when it would outperform radiologists (p < 0.05). The respondents indicated that the hospital (76%), radiologist (70%), and program developer (55%) should be held accountable for misdiagnosis. CONCLUSIONS Patients favor AI involvement alongside radiologists in PCa diagnosis. Trust in AI diagnosis depends on the patient's education level and on AI performance; a small majority would accept autonomous AI on the condition that it outperforms a radiologist. Respondents held the hospital, radiologist, and program developers accountable for misdiagnosis, in descending order.
CLINICAL RELEVANCE STATEMENT Patients show a high level of acceptance of AI-assisted prostate cancer diagnosis on MRI, either alongside radiologists or fully autonomous, particularly if it demonstrates superior performance to radiologists alone. KEY POINTS Patients with suspected prostate cancer may accept autonomous AI based on performance. Patients prefer AI involvement alongside a radiologist in diagnosing prostate cancer. Patients indicate accountability for AI should be shared among multiple stakeholders.
Affiliation(s)
- T C Kwee
- University Medical Center Groningen, Groningen, Netherlands
- D Rouw
- Martini Hospital, Groningen, Netherlands
- C Roest
- University Medical Center Groningen, Groningen, Netherlands
- S Heijmink
- Dutch Cancer Institute, Amsterdam, Netherlands
- Y P Ongena
- University of Groningen, Groningen, Netherlands
- M Haan
- University of Groningen, Groningen, Netherlands
- D Yakar
- University Medical Center Groningen, Groningen, Netherlands
- Dutch Cancer Institute, Amsterdam, Netherlands
8
Fanni SC, Neri E. Bystanders or stakeholders: patient perspectives on the adoption of AI in radiology. Eur Radiol 2025; 35:767-768. doi: 10.1007/s00330-024-11135-2. PMID: 39417868.
Affiliation(s)
- Emanuele Neri
- Department of Translational Research, Academic Radiology, University of Pisa, Pisa, Italy
9
Abu Abeelh E, Abuabeileh Z. Screening Mammography and Artificial Intelligence: A Comprehensive Systematic Review. Cureus 2025; 17:e79353. doi: 10.7759/cureus.79353. PMID: 40125173; PMCID: PMC11929143.
Abstract
Screening mammography is vital for early breast cancer detection, improving outcomes by identifying malignancies at treatable stages. Artificial intelligence has emerged as a tool to enhance diagnostic accuracy and reduce radiologists' workload in screening programs, though its full integration into clinical practice remains limited, necessitating a comprehensive review of its performance. This systematic review assesses artificial intelligence's effectiveness in screening mammography, focusing on diagnostic performance, reduction of false positives, and support for radiologists in clinical decision-making. A systematic search was conducted across PubMed, Embase, Web of Science, Cochrane Central, and Scopus for studies published between 2013 and 2024, including those evaluating artificial intelligence in mammography screening and reporting outcomes related to cancer detection, sensitivity, specificity, and workflow optimization. A total of 13 studies were analyzed, with data extracted on study characteristics, population demographics, artificial intelligence algorithms, and key outcomes. Artificial intelligence-assisted readings in screening mammography were found to be comparable or superior to traditional double readings by radiologists, reducing unnecessary recalls, improving specificity, and in some cases increasing cancer detection rates. Its integration into workflows showed potential for reducing radiologist workload while maintaining high diagnostic performance; however, challenges such as high false-positive rates and variations in artificial intelligence performance across patient subgroups remain concerns. Overall, artificial intelligence has the potential to enhance the efficiency and accuracy of breast cancer screening programs, and while it can reduce unnecessary recalls and alleviate radiologists' workloads, issues with false positives and demographic variations in accuracy highlight the need for further research. With ongoing refinement, artificial intelligence could become a valuable tool in routine mammography screening, augmenting radiologists' capabilities and improving patient care.
10
Zackrisson S, Bolejko A. Navigating the future of mammography: How women's perceptions of AI may guide tomorrow's screening practice. Eur J Radiol 2025; 183:111870. doi: 10.1016/j.ejrad.2024.111870. PMID: 39644598.
Affiliation(s)
- Sophia Zackrisson
- Department of Medical Imaging and Physiology, Skåne University Hospital, Inga Marie Nilssons gata 47, 205 02 Malmö, Sweden; Department of Translational Medicine, Diagnostic Radiology, Lund University, Inga Marie Nilssons gata 47, 205 02 Malmö, Sweden
- Anetta Bolejko
- Department of Medical Imaging and Physiology, Skåne University Hospital, Inga Marie Nilssons gata 47, 205 02 Malmö, Sweden; Department of Translational Medicine, Diagnostic Radiology, Lund University, Inga Marie Nilssons gata 47, 205 02 Malmö, Sweden
11
Fathi M, Vakili K, Hajibeygi R, Bahrami A, Behzad S, Tafazolimoghadam A, Aghabozorgi H, Eshraghi R, Bhatt V, Gholamrezanezhad A. Cultivating diagnostic clarity: The importance of reporting artificial intelligence confidence levels in radiologic diagnoses. Clin Imaging 2025; 117:110356. doi: 10.1016/j.clinimag.2024.110356. PMID: 39566394.
Abstract
Accurate image interpretation is essential in radiology if the healthcare team is to provide optimal patient care. This article discusses the use of artificial intelligence (AI) confidence levels to enhance the accuracy and dependability of radiological diagnoses. Recent advances in AI technology have changed how radiologists and clinicians diagnose pathological conditions such as aneurysms, hemorrhages, pneumothorax, pneumoperitoneum, and particularly fractures. To make full use of these AI models, radiologists need a more comprehensive understanding of the confidence and certainty behind the results the models produce. This allows radiologists to make more informed decisions that have the potential to drastically change a patient's clinical management. Several AI models, especially deep learning (DL) models using convolutional neural networks (CNNs), have demonstrated significant potential in identifying subtle findings in medical imaging that are often missed by radiologists. Standardized confidence metrics are needed for AI systems to be relevant and reliable in the clinical setting. Incorporating AI into clinical practice faces obstacles such as the need for clinical validation, concerns about the interpretability of AI system results, and confusion and misunderstanding within the medical community. This study emphasizes the importance of AI systems clearly conveying their level of confidence in radiological diagnoses, and highlights the need for research to establish AI confidence metrics specific to an anatomical region or lesion type.
KEY POINT: Accurate fracture diagnosis relies on radiologic certainty, and artificial intelligence (AI), especially convolutional neural networks (CNNs) and deep learning (DL), shows promise in enhancing X-ray interpretation amid a shortage of radiologists. Overcoming integration challenges through improved AI interpretability and education is crucial for widespread acceptance and better patient outcomes.
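The reporting pattern this article advocates (surfacing an explicit confidence level with each AI finding and deferring low-confidence cases to a human reader) can be sketched as follows; the softmax-based confidence proxy, the 0.80 threshold, and the labels are illustrative assumptions, not a clinical standard.

```python
import math

def softmax(logits):
    """Numerically stable softmax over raw model scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def report(logits, labels, review_threshold=0.80):
    """Attach a confidence level to the top finding and route low-confidence
    results to a radiologist instead of reporting them as definitive."""
    probs = softmax(logits)
    confidence = max(probs)
    finding = labels[probs.index(confidence)]
    action = "auto-report" if confidence >= review_threshold else "radiologist review"
    return {"finding": finding, "confidence": round(confidence, 3), "action": action}

print(report([2.1, 0.3], ["fracture", "no fracture"]))
```

A high-scoring case is auto-reported with its confidence attached, while an ambiguous one (e.g., logits `[0.2, 0.1]`) falls below the threshold and is flagged for radiologist review.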
Affiliation(s)
- Mobina Fathi
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Science, Tehran, Iran; School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Kimia Vakili
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ramtin Hajibeygi
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Science, Tehran, Iran; Tehran University of Medical Science (TUMS), School of Medicine, Tehran, Iran
- Ashkan Bahrami
- Faculty of Medicine, Kashan University of Medical Science, Kashan, Iran
- Shima Behzad
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Science, Tehran, Iran
- Hadiseh Aghabozorgi
- Student Research Committee, Shahrekord University of Medical Sciences, Shahrekord, Iran
- Reza Eshraghi
- Faculty of Medicine, Kashan University of Medical Science, Kashan, Iran
- Vivek Bhatt
- University of California, Riverside, School of Medicine, Riverside, CA, United States of America
- Ali Gholamrezanezhad
- Keck School of Medicine of University of Southern California, Los Angeles, CA, United States of America; Department of Radiology, Cedars Sinai Hospital, Los Angeles, CA, United States of America
12
Singh S, Healy NA. The top 100 most-cited articles on artificial intelligence in breast radiology: a bibliometric analysis. Insights Imaging 2024; 15:297. doi: 10.1186/s13244-024-01869-4. PMID: 39666106; PMCID: PMC11638451.
Abstract
INTRODUCTION Artificial intelligence (AI) in radiology is a rapidly evolving field. In breast imaging, AI has already been applied in real-world settings and multiple studies have been conducted in the area. The aim of this analysis is to identify the most influential publications on the topic of artificial intelligence in breast imaging. METHODS A retrospective bibliometric analysis was conducted on artificial intelligence in breast radiology using the Web of Science database. The search strategy combined the keywords 'breast radiology' or 'breast imaging' with keywords associated with AI such as 'deep learning', 'machine learning', and 'neural networks'. RESULTS Across the top 100 list, the number of citations per article ranged from 30 to 346 (average 85). The highest-cited article, 'Artificial neural networks in mammography: application to decision-making in the diagnosis of breast cancer', was published in Radiology in 1993. Eighty-three of the articles were published in the last 10 years. The journal with the greatest number of articles was Radiology (n = 22). The most common country of origin was the United States (n = 51). Commonly occurring topics were the use of deep learning models for breast cancer detection in mammography or ultrasound, radiomics in breast cancer, and the use of AI for breast cancer risk prediction. CONCLUSION This study provides a comprehensive analysis of the top 100 most-cited papers on the subject of artificial intelligence in breast radiology and discusses the current most influential papers in the field. CLINICAL RELEVANCE STATEMENT This article provides a concise summary of the top 100 most-cited articles in the field of artificial intelligence in breast radiology, discusses the most impactful articles, and explores recent trends and topics of research in the field. KEY POINTS Multiple studies have been conducted on AI in breast radiology. The most-cited article was published in the journal Radiology in 1993. This study highlights influential articles and topics on AI in breast radiology.
Affiliation(s)
- Sneha Singh
- Department of Radiology, Royal College of Surgeons in Ireland, Dublin, Ireland
- Beaumont Breast Centre, Beaumont Hospital, Dublin, Ireland
- Nuala A Healy
- Department of Radiology, Royal College of Surgeons in Ireland, Dublin, Ireland
- Beaumont Breast Centre, Beaumont Hospital, Dublin, Ireland
- Department of Radiology, University of Cambridge, Cambridge, United Kingdom
13
Gatting L, Ahmed S, Meccheri P, Newlands R, Kehagia AA, Waller J. Acceptability of artificial intelligence in breast screening: focus groups with the screening-eligible population in England. BMJ Public Health 2024; 2:e000892. doi: 10.1136/bmjph-2024-000892. PMID: 40018529; PMCID: PMC11816108.
Abstract
Introduction Preliminary studies of artificial intelligence (AI) tools developed to support breast screening demonstrate the potential to reduce radiologist burden and improve cancer detection, which could lead to improved breast cancer outcomes. This study explores the public acceptability of the use of AI in breast screening from the perspective of screening-eligible women in England. Methods 64 women in England, aged 50-70 years (eligible for breast screening) and 45-49 years (approaching eligibility), participated in 12 focus groups: 8 online and 4 in person. Specific scenarios in which AI may be used in the mammogram reading process were presented. Data were analysed using reflexive thematic analysis. Results Four themes describe public perceptions of AI in breast screening found in this study: (1) Things going wrong and being missed summarises a predominant and pervasive concern about an AI tool being used in breast screening; (2) Speed of change and loss of control captures the women's positive association of AI with technological advance, but also feelings that things were out of their control and that they were being left behind and in the dark; (3) The importance of humans reports concern that AI excludes humans and renders them redundant; and (4) Desire for thorough research, staggered implementation and double-checking of scans includes insistence that any AI be thoroughly trialled and tested, and not solely relied on when initially implemented. Conclusions It will be essential that future decision-making and communication about AI implementation in breast screening (and, likely, in healthcare more widely) address concerns surrounding (1) the fallibility of AI, (2) lack of inclusion, control and transparency in relation to healthcare and technology decisions and (3) humans being left redundant and unneeded, while building on women's hopes for the technology.
Affiliation(s)
- Lauren Gatting
- Cancer Prevention Group, School of Cancer & Pharmaceutical Sciences, King's College London, London, UK
- King’s Technology Evaluation Centre, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Centre for Cancer Screening, Prevention and Early Diagnosis, Wolfson Institute of Population Health, Queen Mary University of London, London, UK
- Syeda Ahmed
- School of Mental Health & Psychological Sciences, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Priscilla Meccheri
- School of Mental Health & Psychological Sciences, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Rumana Newlands
- Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Angie A Kehagia
- King’s Technology Evaluation Centre, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Jo Waller
- Cancer Prevention Group, School of Cancer & Pharmaceutical Sciences, King's College London, London, UK
- Centre for Cancer Screening, Prevention and Early Diagnosis, Wolfson Institute of Population Health, Queen Mary University of London, London, UK
14
Carter SM, Popic D, Marinovich ML, Carolan L, Houssami N. Women's views on using artificial intelligence in breast cancer screening: A review and qualitative study to guide breast screening services. Breast 2024; 77:103783. [PMID: 39111200 PMCID: PMC11362777 DOI: 10.1016/j.breast.2024.103783]
Abstract
As breast screening services move towards use of healthcare AI (HCAI) for screen reading, research on public views of HCAI can inform more person-centered implementation. We synthesise reviews of public views of HCAI in general, and review primary studies of women's views of AI in breast screening. People generally appear open to HCAI and its potential benefits, despite a wide range of concerns; similarly, women are open towards AI in breast screening because of the potential benefits, but are concerned about a wide range of risks. Women want radiologists to remain central; oversight, evaluation and performance, care, equity and bias, transparency, and accountability are key issues; women may be less tolerant of AI error than of human error. Using our recent Australian primary study, we illustrate both the value of informing participants before collecting data, and women's views. The 40 screening-age women in this study stipulated four main conditions on breast screening AI implementation: 1) maintaining human control; 2) strong evidence of performance; 3) supporting familiarisation with AI; and 4) providing adequate reasons for introducing AI. Three solutions were offered to support familiarisation: transparency and information; slow and staged implementation; and allowing women to opt-out of AI reading. We provide recommendations to guide both implementation of AI in healthcare and research on public views of HCAI. Breast screening services should be transparent about AI use and share information about breast screening AI with women. Implementation should be slow and staged, providing opt-out options if possible. Screening services should demonstrate strong governance to maintain clinician control, demonstrate excellent AI system performance, assure data protection and bias mitigation, and give good reasons to justify implementation. When these measures are put in place, women are more likely to see HCAI use in breast screening as legitimate and acceptable.
Affiliation(s)
- Stacy M Carter
- Australian Centre for Health Engagement Evidence and Values, School of Health and Society, Faculty of Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, New South Wales, Australia.
- Diana Popic
- Australian Centre for Health Engagement Evidence and Values, School of Health and Society, Faculty of Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, New South Wales, Australia.
- M Luke Marinovich
- The Daffodil Centre, The University of Sydney, a Joint Venture with Cancer Council NSW, Sydney, New South Wales, Australia.
- Lucy Carolan
- Australian Centre for Health Engagement Evidence and Values, School of Health and Society, Faculty of Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, New South Wales, Australia.
- Nehmat Houssami
- The Daffodil Centre, The University of Sydney, a Joint Venture with Cancer Council NSW, Sydney, New South Wales, Australia; School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia.
15
Patterson F, Kunar MA. The message matters: changes to binary Computer Aided Detection recommendations affect cancer detection in low prevalence search. Cogn Res Princ Implic 2024; 9:59. [PMID: 39218972 PMCID: PMC11366737 DOI: 10.1186/s41235-024-00576-4]
Abstract
Computer Aided Detection (CAD) has been used to help readers find cancers in mammograms. Although these automated systems have been shown to aid cancer detection when accurate, the presence of CAD also leads to an over-reliance effect, whereby miss errors and false alarms increase when the CAD system fails. Previous research investigated CAD systems that overlaid salient exogenous cues onto the image to highlight suspicious areas. These salient cues capture attention, which may exacerbate the over-reliance effect. Furthermore, overlaying CAD cues directly on the mammogram occludes sections of breast tissue, which may disrupt global statistics useful for cancer detection. In this study we investigated whether an over-reliance effect occurred with a binary CAD system which, instead of overlaying a CAD cue onto the mammogram, reported a message alongside the mammogram indicating the possible presence of a cancer. We manipulated the certainty of the message and whether it was presented only to indicate the presence of a cancer, or whether a message was displayed on every mammogram to state whether a cancer was present or absent. The results showed that although an over-reliance effect still occurred with binary CAD systems, miss errors were reduced when the CAD message was more definitive and presented only to alert readers to a possible cancer.
Affiliation(s)
- Melina A Kunar
- Department of Psychology, The University of Warwick, Coventry, CV4 7AL, UK
16
Baghdadi LR, Mobeirek AA, Alhudaithi DR, Albenmousa FA, Alhadlaq LS, Alaql MS, Alhamlan SA. Patients' Attitudes Toward the Use of Artificial Intelligence as a Diagnostic Tool in Radiology in Saudi Arabia: Cross-Sectional Study. JMIR Hum Factors 2024; 11:e53108. [PMID: 39110973 PMCID: PMC11339559 DOI: 10.2196/53108]
Abstract
BACKGROUND Artificial intelligence (AI) is widely used in various medical fields, including diagnostic radiology, as a tool for greater efficiency, precision, and accuracy. The integration of AI as a radiological diagnostic tool has the potential to mitigate delays in diagnosis, which could, in turn, impact patients' prognosis and treatment outcomes. The literature shows conflicting results regarding patients' attitudes to AI as a diagnostic tool. To the best of our knowledge, no similar study has been conducted in Saudi Arabia. OBJECTIVE The objectives of this study were to examine patients' attitudes toward the use of AI as a tool in diagnostic radiology at King Khalid University Hospital, Saudi Arabia. Additionally, we sought to explore potential associations between patients' attitudes and various sociodemographic factors. METHODS This descriptive-analytical cross-sectional study was conducted in a tertiary care hospital. Data were collected from patients scheduled for radiological imaging through a validated self-administered questionnaire. The main outcome was to measure patients' attitudes to the use of AI in radiology by calculating mean scores for 5 factors: distrust and accountability (factor 1), procedural knowledge (factor 2), personal interaction and communication (factor 3), efficiency (factor 4), and methods of providing information to patients (factor 5). Data were analyzed using the Student t test and one-way analysis of variance followed by post hoc tests, as well as multivariable analysis. RESULTS A total of 382 participants (n=273, 71.5% women and n=109, 28.5% men) completed the surveys and were included in the analysis. The mean age of the respondents was 39.51 (SD 13.26) years. Participants favored physicians over AI for procedural knowledge, personal interaction, and being informed. However, participants demonstrated a neutral attitude toward distrust and accountability and toward efficiency. Marital status was found to be associated with distrust and accountability, procedural knowledge, and personal interaction. Associations were also found between self-reported health status and being informed, and between the field of specialization and distrust and accountability. CONCLUSIONS Patients were keen to understand the work of AI in radiology but favored personal interaction with a radiologist. Patients were impartial toward AI replacing radiologists and toward the efficiency of AI, which should be a consideration in future policy development and integration. Future research involving multicenter studies in different regions of Saudi Arabia is required.
Affiliation(s)
- Leena R Baghdadi
- Department of Family and Community Medicine, College of Medicine, King Saud University, Riyadh, Saudi Arabia
- Arwa A Mobeirek
- College of Medicine, King Saud University, Riyadh, Saudi Arabia
- Leen S Alhadlaq
- College of Medicine, King Saud University, Riyadh, Saudi Arabia
- Maisa S Alaql
- College of Medicine, King Saud University, Riyadh, Saudi Arabia
17
Castner N, Arsiwala-Scheppach L, Mertens S, Krois J, Thaqi E, Kasneci E, Wahl S, Schwendicke F. Expert gaze as a usability indicator of medical AI decision support systems: a preliminary study. NPJ Digit Med 2024; 7:199. [PMID: 39068241 PMCID: PMC11283514 DOI: 10.1038/s41746-024-01192-8]
Abstract
Given the current state of medical artificial intelligence (AI) and perceptions towards it, collaborative systems are becoming the preferred choice for clinical workflows. This work addresses expert interaction with medical AI support systems to gain insight into how these systems can be better designed with the user in mind. As eye tracking metrics have been shown to be robust indicators of usability, we employ them to evaluate the usability of, and user interaction with, medical AI support systems. We use expert gaze to assess experts' interaction with an AI software for caries detection in bitewing x-ray images. We compared standard viewing of bitewing images without AI support versus viewing where AI support could be freely toggled on and off. We found that experts turned the AI on for roughly 25% of the total inspection task, and generally turned it on halfway through the course of the inspection. Gaze behavior showed that when supported by AI, more attention was dedicated to user interface elements related to the AI support, with more frequent transitions from the image itself to these elements. Considering that expert visual strategy is already optimized for fast and effective image inspection, such interruptions in attention can lead to increased time needed for the overall assessment. Gaze analysis provided valuable insights into an AI's usability for medical image inspection. Further analyses of these tools, and of how to delineate metrical measures of usability, should be developed.
Affiliation(s)
- Nora Castner
- Carl Zeiss Vision International GmbH, Tübingen, Germany.
- University of Tübingen, Tübingen, Germany.
- Sarah Mertens
- Charité - Universitätsmedizin, Oral Diagnostics, Digital Health and Services Research, Berlin, Germany
- Joachim Krois
- Charité - Universitätsmedizin, Oral Diagnostics, Digital Health and Services Research, Berlin, Germany
- Enkeleda Thaqi
- Technical University of Munich, Human-Centered Technologies for Learning, Munich, Germany
- Enkelejda Kasneci
- Technical University of Munich, Human-Centered Technologies for Learning, Munich, Germany
- Siegfried Wahl
- Carl Zeiss Vision International GmbH, Tübingen, Germany
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Falk Schwendicke
- Ludwig Maximilian University, Operative, Preventative and Pediatric Dentistry and Periodontology, Munich, Germany
18
Johansson JV, Engström E. 'Humans think outside the pixels' - Radiologists' perceptions of using artificial intelligence for breast cancer detection in mammography screening in a clinical setting. Health Informatics J 2024; 30:14604582241275020. [PMID: 39155239 DOI: 10.1177/14604582241275020]
Abstract
OBJECTIVE This study aimed to explore radiologists' views on using an artificial intelligence (AI) tool named ScreenTrustCAD (used with Philips equipment) as a diagnostic decision support tool in mammography screening during a clinical trial at Capio Sankt Göran Hospital, Sweden. METHODS We conducted semi-structured interviews with seven breast imaging radiologists and analysed them using inductive thematic content analysis. RESULTS We identified three main thematic categories: AI in society, reflecting views on AI's contribution to the healthcare system; AI-human interactions, addressing the radiologists' self-perceptions when using the AI and its potential challenges to their profession; and AI as a tool among others. The radiologists were generally positive towards AI, and they felt comfortable handling its sometimes-ambiguous outputs and erroneous evaluations. While they did not feel that it would undermine their profession, they preferred using it as a complementary reader rather than an independent one. CONCLUSION The results suggest that breast radiology could become a launch pad for AI in healthcare. We recommend that this exploratory work on subjective perceptions be complemented by quantitative assessments to generalize the findings.
Affiliation(s)
- Jennifer Viberg Johansson
- Department of Public Health and Caring Sciences, Centre for Research Ethics & Bioethics, Uppsala University, Uppsala, Sweden
- Emma Engström
- Institute for Futures Studies, Stockholm, Sweden; Department of Urban Planning and Environment, KTH Royal Institute of Technology, Stockholm, Sweden
19
Frost EK, Bosward R, Aquino YSJ, Braunack-Mayer A, Carter SM. Facilitating public involvement in research about healthcare AI: A scoping review of empirical methods. Int J Med Inform 2024; 186:105417. [PMID: 38564959 DOI: 10.1016/j.ijmedinf.2024.105417]
Abstract
OBJECTIVE With the recent increase in research into public views on healthcare artificial intelligence (HCAI), the objective of this review is to examine the methods of empirical studies on public views on HCAI. We map how studies provided participants with information about HCAI, and we examine the extent to which studies framed publics as active contributors to HCAI governance. MATERIALS AND METHODS We searched 5 academic databases and Google Advanced for empirical studies investigating public views on HCAI. We extracted information including study aims, research instruments, and recommendations. RESULTS Sixty-two studies were included. Most were quantitative (N = 42). Most (N = 47) reported providing participants with background information about HCAI. Despite this, studies often reported participants' lack of prior knowledge about HCAI as a limitation. Over three quarters (N = 48) of the studies made recommendations that envisaged public views being used to guide governance of AI. DISCUSSION Provision of background information is an important component of facilitating research with publics on HCAI. The high proportion of studies reporting participants' lack of knowledge about HCAI as a limitation reflects the need for more guidance on how information should be presented. A minority of studies adopted technocratic positions that construed publics as passive beneficiaries of AI, rather than as active stakeholders in HCAI design and implementation. CONCLUSION This review draws attention to how public roles in HCAI governance are constructed in empirical studies. To facilitate active participation, we recommend that research with publics on HCAI consider methodological designs that expose participants to diverse information sources.
Affiliation(s)
- Emma Kellie Frost
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Rebecca Bosward
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
20
Holen ÅS, Martiniussen MA, Bergan MB, Moshina N, Hovda T, Hofvind S. Women's attitudes and perspectives on the use of artificial intelligence in the assessment of screening mammograms. Eur J Radiol 2024; 175:111431. [PMID: 38520804 DOI: 10.1016/j.ejrad.2024.111431]
Abstract
PURPOSE To investigate attitudes and perspectives on the use of artificial intelligence (AI) in the assessment of screening mammograms among women invited to BreastScreen Norway. METHOD An anonymous survey was sent to all women invited to BreastScreen Norway during the study period, October 10, 2022, to December 25, 2022 (n = 84,543). Questions were answered on a 10-point Likert scale and as multiple choice, addressing knowledge of AI, willingness to participate in AI studies, information needs, confidence in AI results and AI-assisted reading strategies, and thoughts on the concerns and benefits of AI in mammography screening. Analyses were performed using χ2 tests and logistic regression. RESULTS General knowledge of AI was reported as extensive by 11.0% of the 8,355 respondents. Respondents were willing to participate in studies using AI either for decision support (64.0%) or triaging (54.9%). Being informed about the use of AI-assisted image assessment was considered important, and a reading strategy of AI in combination with one radiologist was preferred. Having extensive knowledge of AI was associated with willingness to participate in AI studies (decision support: odds ratio [OR]: 5.1, 95% confidence interval [CI]: 4.1-6.4; triaging: OR: 3.4, 95% CI: 2.8-4.0) and with trust in AI's independent assessment (OR: 6.8, 95% CI: 5.7-8.3). CONCLUSIONS Women invited to BreastScreen Norway had a positive attitude towards the use of AI in image assessment, given that human readers are still involved. Targeted information and increased public knowledge of AI could help achieve high participation in AI studies and successful implementation of AI in mammography screening.
Affiliation(s)
- Åsne Sørlien Holen
- Cancer Registry of Norway, Norwegian Institute of Public Health, Oslo, Norway.
- Marit Almenning Martiniussen
- Department of Radiology, Østfold Hospital Trust, Kalnes, Norway; University of Oslo, Institute of Clinical Medicine, Oslo, Norway.
- Marie Burns Bergan
- Cancer Registry of Norway, Norwegian Institute of Public Health, Oslo, Norway.
- Nataliia Moshina
- Cancer Registry of Norway, Norwegian Institute of Public Health, Oslo, Norway.
- Tone Hovda
- Department of Radiology, Vestre Viken Hospital Trust, Drammen, Norway.
- Solveig Hofvind
- Cancer Registry of Norway, Norwegian Institute of Public Health, Oslo, Norway; Department of Health and Care Sciences, UiT, The Arctic University of Norway, Tromsø, Norway.
21
Geny M, Andres E, Talha S, Geny B. Liability of Health Professionals Using Sensors, Telemedicine and Artificial Intelligence for Remote Healthcare. Sensors (Basel) 2024; 24:3491. [PMID: 38894282 PMCID: PMC11174849 DOI: 10.3390/s24113491]
Abstract
In the last few decades, there has been an ongoing transformation of our healthcare system, with larger use of sensors for remote care and artificial intelligence (AI) tools. In particular, sensors improved by new algorithms with learning capabilities have proven their value for better patient care. Sensors and AI systems are no longer only non-autonomous devices such as the ones used in radiology or surgical robots; there are novel tools with a certain degree of autonomy aiming to largely modulate the medical decision. Thus, there will be situations in which the doctor is the one making the decision and has the final say, and other cases in which the doctor might only apply the decision presented by the autonomous device. As these are two hugely different situations, they should not be treated the same way, and different liability rules should apply. Despite a real interest in the promise of sensors and AI in medicine, doctors and patients are reluctant to use them. One important reason is the lack of a clear definition of liability. Nobody wants to be at fault, or even prosecuted, because they followed the advice of an AI system, notably when it has not been perfectly adapted to a specific patient. Fears are present even with simple sensor and AI use, such as during telemedicine visits based on very useful, clinically pertinent sensors, with the risk of missing an important parameter; and, of course, when AI appears "intelligent", potentially replacing the doctors' judgment. This paper aims to provide an overview of the liability of the health professional in the context of the use of sensors and AI tools in remote healthcare, analyzing four regimes: the contract-based approach, the approach based on breach of the duty to inform, the fault-based approach, and the approach related to the good itself. We will also discuss future challenges and opportunities in the promising domain of sensor and AI use in medicine.
Affiliation(s)
- Marie Geny
- Joint Research Unit-UMR 7354, Law, Religion, Business and Society, University of Strasbourg, 67000 Strasbourg, France
- Emmanuel Andres
- Biomedicine Research Center of Strasbourg (CRBS), UR 3072, “Mitochondria, Oxidative Stress and Muscle Plasticity”, University of Strasbourg, 67000 Strasbourg, France
- Faculty of Medicine, University of Strasbourg, 67000 Strasbourg, France
- Department of Internal Medicine, University Hospital of Strasbourg, 67091 Strasbourg, France
- Samy Talha
- Biomedicine Research Center of Strasbourg (CRBS), UR 3072, “Mitochondria, Oxidative Stress and Muscle Plasticity”, University of Strasbourg, 67000 Strasbourg, France
- Faculty of Medicine, University of Strasbourg, 67000 Strasbourg, France
- Department of Physiology and Functional Explorations, University Hospital of Strasbourg, 67091 Strasbourg, France
- Bernard Geny
- Biomedicine Research Center of Strasbourg (CRBS), UR 3072, “Mitochondria, Oxidative Stress and Muscle Plasticity”, University of Strasbourg, 67000 Strasbourg, France
- Faculty of Medicine, University of Strasbourg, 67000 Strasbourg, France
- Department of Physiology and Functional Explorations, University Hospital of Strasbourg, 67091 Strasbourg, France
22
Newlands R, Bruhn H, Díaz MR, Lip G, Anderson LA, Ramsay C. A stakeholder analysis to prepare for real-world evaluation of integrating artificial intelligent algorithms into breast screening (PREP-AIR study): a qualitative study using the WHO guide. BMC Health Serv Res 2024; 24:569. [PMID: 38698386 PMCID: PMC11067265 DOI: 10.1186/s12913-024-10926-z]
Abstract
BACKGROUND The national breast screening programme in the United Kingdom is under pressure due to workforce shortages and having been paused during the COVID-19 pandemic. Artificial intelligence has the potential to transform how healthcare is delivered by improving care processes and patient outcomes. Research on the clinical and organisational benefits of artificial intelligence is still at an early stage, and numerous concerns have been raised around its implications, including patient safety, acceptance, and accountability for decisions. Reforming the breast screening programme to include artificial intelligence is a complex endeavour because numerous stakeholders influence it. Therefore, a stakeholder analysis was conducted to identify relevant stakeholders, explore their views on the proposed reform (i.e., integrating artificial intelligence algorithms into the Scottish National Breast Screening Service for breast cancer detection) and develop strategies for managing 'important' stakeholders. METHODS A qualitative study (i.e., focus groups and interviews, March-November 2021) was conducted using the stakeholder analysis guide provided by the World Health Organisation and involving three Scottish health boards: NHS Greater Glasgow & Clyde, NHS Grampian and NHS Lothian. The objectives were to: (A) identify possible stakeholders; (B) explore stakeholders' perspectives and describe their characteristics; (C) prioritise stakeholders in terms of importance; and (D) develop strategies to manage 'important' stakeholders. Seven stakeholder characteristics were assessed: knowledge of the targeted reform, position, interest, alliances, resources, power and leadership. RESULTS Thirty-two participants took part from 14 (out of 17 identified) sub-groups of stakeholders. While they were generally supportive of using artificial intelligence in breast screening programmes, some concerns were raised. Stakeholder knowledge, influence and interests in the reform varied. Key advantages mentioned included service efficiency, quicker results and reduced work pressure. Disadvantages included overdiagnosis or misdiagnosis of cancer, inequalities in detection and the self-learning capacity of the algorithms. Five strategies (with considerations suggested by stakeholders) were developed to maintain and improve the support of 'important' stakeholders. CONCLUSIONS Health services worldwide face similar workforce challenges in providing patient care. The findings of this study will help others learn from the Scottish experience and provide guidance for similar studies targeting healthcare reform. STUDY REGISTRATION researchregistry6579, date of registration: 16/02/2021.
Affiliation(s)
- Rumana Newlands
- Health Services Research Unit, University of Aberdeen, Aberdeen, UK.
- Hanne Bruhn
- Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Gerald Lip
- North East Scotland Breast Screening Programme, NHS Grampian, Aberdeen, UK
- Lesley A Anderson
- Centre for Health Data Science, Institute of Applied Health Sciences, University of Aberdeen, Aberdeen, UK
- Craig Ramsay
- Health Services Research Unit, University of Aberdeen, Aberdeen, UK
23
Shamir SB, Sasson AL, Margolies LR, Mendelson DS. New Frontiers in Breast Cancer Imaging: The Rise of AI. Bioengineering (Basel) 2024; 11:451. [PMID: 38790318 PMCID: PMC11117903 DOI: 10.3390/bioengineering11050451]
Abstract
Artificial intelligence (AI) has been implemented in multiple fields of medicine to assist in the diagnosis and treatment of patients. AI implementation in radiology, more specifically in breast imaging, has advanced considerably. Breast cancer is one of the most important causes of cancer mortality among women, and there has been increased attention towards creating more efficacious methods of breast cancer detection that utilize AI to improve radiologist accuracy and efficiency and meet the increasing demand of our patients. AI can be applied to imaging studies to improve image quality, increase interpretation accuracy, and improve time and cost efficiency. AI applied to mammography, ultrasound, and MRI allows for improved cancer detection and diagnosis while decreasing intra- and interobserver variability. The synergistic effect between a radiologist and AI has the potential to improve patient care in underserved populations, with the intention of providing quality and equitable care for all. Additionally, AI has allowed for improved risk stratification. AI application can also have treatment implications, by identifying the risk of ductal carcinoma in situ (DCIS) being upstaged to invasive carcinoma and by better predicting individualized patient response to neoadjuvant chemotherapy. AI has potential for advancement in pre-operative 3-dimensional models of the breast as well as improved viability of reconstructive grafts.
Affiliation(s)
- Stephanie B. Shamir
- Department of Diagnostic, Molecular and Interventional Radiology, The Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY 10029, USA
| | | | | | | |
Collapse
|
24
|
Pesapane F, Giambersio E, Capetti B, Monzani D, Grasso R, Nicosia L, Rotili A, Sorce A, Meneghetti L, Carriero S, Santicchia S, Carrafiello G, Pravettoni G, Cassano E. Patients' Perceptions and Attitudes to the Use of Artificial Intelligence in Breast Cancer Diagnosis: A Narrative Review. Life (Basel) 2024; 14:454. [PMID: 38672725 PMCID: PMC11051490 DOI: 10.3390/life14040454]
Abstract
Breast cancer remains the most prevalent cancer among women worldwide, necessitating advancements in diagnostic methods. The integration of artificial intelligence (AI) into mammography has shown promise in enhancing diagnostic accuracy. However, understanding patient perspectives, particularly considering the psychological impact of breast cancer diagnoses, is crucial. This narrative review synthesizes literature from 2000 to 2023 to examine breast cancer patients' attitudes towards AI in breast imaging, focusing on trust, acceptance, and demographic influences on these views. Methodologically, we employed a systematic literature search across databases such as PubMed, Embase, Medline, and Scopus, selecting studies that provided insights into patients' perceptions of AI in diagnostics. Our review included a sample of seven key studies after rigorous screening, reflecting varied patient trust and acceptance levels towards AI. Overall, we found a clear preference among patients for AI to augment rather than replace the diagnostic process, emphasizing the necessity of radiologists' expertise in conjunction with AI to enhance decision-making accuracy. This paper highlights the importance of aligning AI implementation in clinical settings with patient needs and expectations, emphasizing the need for human interaction in healthcare. Our findings advocate for a model where AI augments the diagnostic process, underlining the necessity for educational efforts to mitigate concerns and enhance patient trust in AI-enhanced diagnostics.
Affiliation(s)
- Filippo Pesapane
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Emilia Giambersio
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Benedetta Capetti
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology, IRCCS, 20141 Milan, Italy
- Dario Monzani
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology, IRCCS, 20141 Milan, Italy
- Department of Psychology, Educational Science and Human Movement (SPPEFF), University of Palermo, 90133 Palermo, Italy
- Roberto Grasso
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology, IRCCS, 20141 Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Luca Nicosia
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Anna Rotili
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Adriana Sorce
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Lorenza Meneghetti
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Serena Carriero
- Foundation IRCCS Cà Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy
- Sonia Santicchia
- Foundation IRCCS Cà Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy
- Gianpaolo Carrafiello
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Foundation IRCCS Cà Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy
- Gabriella Pravettoni
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology, IRCCS, 20141 Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Enrico Cassano
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy

25
Carmichael J, Costanza E, Blandford A, Struyven R, Keane PA, Balaskas K. Diagnostic decisions of specialist optometrists exposed to ambiguous deep-learning outputs. Sci Rep 2024; 14:6775. [PMID: 38514657 PMCID: PMC10958016 DOI: 10.1038/s41598-024-55410-0]
Abstract
Artificial intelligence (AI) has great potential in ophthalmology. We investigated how ambiguous outputs from an AI diagnostic support system (AI-DSS) affected diagnostic responses from optometrists when assessing cases of suspected retinal disease. Thirty optometrists (15 more experienced, 15 less) assessed 30 clinical cases. For ten, participants saw an optical coherence tomography (OCT) scan, basic clinical information and retinal photography ('no AI'). For another ten, they were also given AI-generated OCT-based probabilistic diagnoses ('AI diagnosis'); and for ten, both AI diagnosis and AI-generated OCT segmentations ('AI diagnosis + segmentation') were provided. Cases were matched across the three types of presentation and were selected to include 40% ambiguous and 20% incorrect AI outputs. Optometrist diagnostic agreement with the predefined reference standard was lowest for 'AI diagnosis + segmentation' (204/300, 68%) compared to 'AI diagnosis' (224/300, 75%, p = 0.010) and 'no AI' (242/300, 81%, p < 0.001). With segmentations, agreement with AI diagnoses consistent with the reference standard decreased (174/210 vs 199/210, p = 0.003), but participants trusted the AI more (p = 0.029). Practitioner experience did not affect diagnostic responses (p = 0.24). More experienced participants were more confident (p = 0.012) and trusted the AI less (p = 0.038). Our findings also highlight issues around reference standard definition.
Affiliation(s)
- Josie Carmichael
- University College London Interaction Centre (UCLIC), UCL, London, UK
- Institute of Ophthalmology, NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL, London, UK
- Enrico Costanza
- University College London Interaction Centre (UCLIC), UCL, London, UK
- Ann Blandford
- University College London Interaction Centre (UCLIC), UCL, London, UK
- Robbert Struyven
- Institute of Ophthalmology, NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL, London, UK
- Pearse A Keane
- Institute of Ophthalmology, NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL, London, UK
- Konstantinos Balaskas
- Institute of Ophthalmology, NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL, London, UK

26
Vo V, Chen G, Aquino YSJ, Carter SM, Do QN, Woode ME. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc Sci Med 2023; 338:116357. [PMID: 37949020 DOI: 10.1016/j.socscimed.2023.116357]
Abstract
INTRODUCTION Despite the proliferation of Artificial Intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically review the literature on attitudes towards the use of AI in healthcare from the perspectives of patients, the general public and health professionals to understand these issues from multiple viewpoints. METHODOLOGY A search for original research articles using qualitative, quantitative, and mixed methods published between 1 Jan 2001 and 24 Aug 2021 was conducted on six bibliographic databases. Data were extracted and classified into themes representing views on: (i) knowledge and familiarity of AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) the human-AI relationship. RESULTS The final search identified 7,490 records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public and health professionals generally had a positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and greater burnout to acquire AI-related skills. While patients had mixed opinions on whether healthcare workers would suffer job losses due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinical involvement in the development of AI was emphasised.
To help successfully implement AI in health care, most participants envisioned that an investment in training and education campaigns was necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients exhibited a willingness to share anonymised data for AI development, there remained concerns about sharing data with insurance or technology companies. One key domain under this theme was the question of who should be held accountable in the case of adverse events arising from using AI. CONCLUSIONS While overall positivity persists in attitudes and preferences toward AI use in healthcare, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues to look at the translation of legislation and guidelines into practice to ensure fairness, accountability, transparency, and ethics in AI.
Affiliation(s)
- Vinh Vo
- Centre for Health Economics, Monash University, Australia
- Gang Chen
- Centre for Health Economics, Monash University, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Quynh Nga Do
- Department of Economics, Monash University, Australia
- Maame Esi Woode
- Centre for Health Economics, Monash University, Australia; Monash Data Futures Research Institute, Australia

27
Birch J. Medical AI, inductive risk and the communication of uncertainty: the case of disorders of consciousness. J Med Ethics 2023:jme-2023-109424. [PMID: 37979975 DOI: 10.1136/jme-2023-109424]
Abstract
Some patients, following brain injury, do not outwardly respond to spoken commands, yet show patterns of brain activity that indicate responsiveness. This is 'cognitive-motor dissociation' (CMD). Recent research has used machine learning to diagnose CMD from electroencephalogram recordings. These techniques have high false discovery rates, raising a serious problem of inductive risk. It is no solution to communicate the false discovery rates directly to the patient's family, because this information may confuse, alarm and mislead. Instead, we need a procedure for generating case-specific probabilistic assessments that can be communicated clearly. This article constructs a possible procedure with three key elements: (1) A shift from categorical 'responding or not' assessments to degrees of evidence; (2) The use of patient-centred priors to convert degrees of evidence to probabilistic assessments; and (3) The use of standardised probability yardsticks to convey those assessments as clearly as possible.
Affiliation(s)
- Jonathan Birch
- Centre for Philosophy of Natural and Social Science, LSE, London, UK

28
Reading Turchioe M, Harkins S, Desai P, Kumar S, Kim J, Hermann A, Joly R, Zhang Y, Pathak J, Benda NC. Women's perspectives on the use of artificial intelligence (AI)-based technologies in mental healthcare. JAMIA Open 2023; 6:ooad048. [PMID: 37425486 PMCID: PMC10329494 DOI: 10.1093/jamiaopen/ooad048]
Abstract
This study aimed to evaluate women's attitudes towards artificial intelligence (AI)-based technologies used in mental health care. We conducted a cross-sectional, online survey of U.S. adults reporting female sex at birth, focused on bioethical considerations for AI-based technologies in mental healthcare, stratifying by previous pregnancy. Survey respondents (n = 258) were open to AI-based technologies in mental healthcare but concerned about medical harm and inappropriate data sharing. They held clinicians, developers, healthcare systems, and the government responsible for harm. Most reported it was "very important" for them to understand AI output. Respondents who had previously been pregnant were more likely than those who had not to report that being told AI played a small role in mental healthcare was "very important" (P = .03). We conclude that protections against harm, transparency around data use, preservation of the patient-clinician relationship, and patient comprehension of AI predictions may facilitate trust in AI-based technologies for mental healthcare among women.
Affiliation(s)
- Sarah Harkins
- Columbia University School of Nursing, New York, New York, USA
- Pooja Desai
- Department of Biomedical Informatics, Columbia University, New York, New York, USA
- Jessica Kim
- Department of Population Health Sciences, Weill Cornell Medicine, New York, New York, USA
- Alison Hermann
- Department of Psychiatry, Weill Cornell Medicine, New York, New York, USA
- Rochelle Joly
- Department of Obstetrics and Gynecology, Weill Cornell Medicine, New York, New York, USA
- Yiye Zhang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, New York, USA
- Jyotishman Pathak
- Department of Population Health Sciences, Weill Cornell Medicine, New York, New York, USA
- Department of Psychiatry, Weill Cornell Medicine, New York, New York, USA
- Natalie C Benda
- Columbia University School of Nursing, New York, New York, USA

29
Champendal M, Marmy L, Malamateniou C, Sá Dos Reis C. Artificial intelligence to support person-centred care in breast imaging - A scoping review. J Med Imaging Radiat Sci 2023; 54:511-544. [PMID: 37183076 DOI: 10.1016/j.jmir.2023.04.001]
Abstract
AIM To provide an overview of Artificial Intelligence (AI) developments and applications in breast imaging (BI), focused on providing person-centred care in the diagnosis and treatment of breast pathologies. METHODS The scoping review was conducted in accordance with the Joanna Briggs Institute methodology. The search was conducted on MEDLINE, Embase, CINAHL, Web of Science, IEEE Xplore and arXiv during July 2022 and included only studies published after 2016, in French and English. Combinations of keywords and Medical Subject Headings (MeSH) terms related to breast imaging and AI were used. No keywords or MeSH terms related to patients or the person-centred care (PCC) concept were included. Three independent reviewers screened all abstracts and titles, and all eligible full-text publications during a second stage. RESULTS 3417 results were identified by the search and 106 studies meeting all criteria were included. Six themes relating to AI-enabled PCC in BI were identified: individualised risk prediction/growth and prediction/false negative reduction (44.3%), treatment assessment (32.1%), tumour type prediction (11.3%), reduction of unnecessary biopsies (5.7%), patients' preferences (2.8%) and other issues (3.8%). The main BI modalities explored in the included studies were magnetic resonance imaging (MRI) (31.1%), mammography (27.4%) and ultrasound (23.6%). The studies were predominantly retrospective, and some variations (age range, data source, race, medical imaging) were present in the datasets used. CONCLUSIONS The AI tools for person-centred care are mainly designed for risk and cancer prediction and disease management to identify the most suitable treatment. However, further studies are needed for image acquisition optimisation for different patient groups, improvement and customisation of the patient experience, and for communicating to patients the options and pathways of disease management.
Affiliation(s)
- Mélanie Champendal
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, CH
- Laurent Marmy
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, CH
- Christina Malamateniou
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, CH; Department of Radiography, Division of Midwifery and Radiography, School of Health Sciences, University of London, London, UK
- Cláudia Sá Dos Reis
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, CH

30
Ozcan BB, Patel BK, Banerjee I, Dogan BE. Artificial Intelligence in Breast Imaging: Challenges of Integration Into Clinical Practice. J Breast Imaging 2023; 5:248-257. [PMID: 38416888 DOI: 10.1093/jbi/wbad007]
Abstract
Artificial intelligence (AI) in breast imaging is a rapidly developing field with promising results. Despite the large number of recent publications in this field, unanswered questions have led to limited implementation of AI into daily clinical practice for breast radiologists. This paper provides an overview of the key limitations of AI in breast imaging including, but not limited to, limited numbers of FDA-approved algorithms and annotated data sets with histologic ground truth; concerns surrounding data privacy, security, algorithm transparency, and bias; and ethical issues. Ultimately, the successful implementation of AI into clinical care will require thoughtful action to address these challenges, transparency, and sharing of AI implementation workflows, limitations, and performance metrics within the breast imaging community and other end-users.
Affiliation(s)
- B Bersu Ozcan
- The University of Texas Southwestern Medical Center, Department of Radiology, Dallas, TX, USA
- Imon Banerjee
- Mayo Clinic, Department of Radiology, Scottsdale, AZ, USA
- Basak E Dogan
- The University of Texas Southwestern Medical Center, Department of Radiology, Dallas, TX, USA

31
Ng AY, Glocker B, Oberije C, Fox G, Sharma N, James JJ, Ambrózay É, Nash J, Karpati E, Kerruish S, Kecskemethy PD. Artificial Intelligence as Supporting Reader in Breast Screening: A Novel Workflow to Preserve Quality and Reduce Workload. J Breast Imaging 2023; 5:267-276. [PMID: 38416889 DOI: 10.1093/jbi/wbad010]
Abstract
OBJECTIVE To evaluate the effectiveness of a new strategy for using artificial intelligence (AI) as supporting reader for the detection of breast cancer in mammography-based double reading screening practice. METHODS Large-scale multi-site, multi-vendor data were used to retrospectively evaluate a new paradigm of AI-supported reading. Here, the AI served as the second reader only if it agrees with the recall/no-recall decision of the first human reader. Otherwise, a second human reader made an assessment followed by the standard clinical workflow. The data included 280 594 cases from 180 542 female participants screened for breast cancer at seven screening sites in two countries and using equipment from four hardware vendors. The statistical analysis included non-inferiority and superiority testing of cancer screening performance and evaluation of the reduction in workload, measured as arbitration rate and number of cases requiring second human reading. RESULTS Artificial intelligence as a supporting reader was found to be superior or noninferior on all screening metrics compared with human double reading while reducing the number of cases requiring second human reading by up to 87% (245 395/280 594). Compared with AI as an independent reader, the number of cases referred to arbitration was reduced from 13% (35 199/280 594) to 2% (5056/280 594). CONCLUSION The simulation indicates that the proposed workflow retains screening performance of human double reading while substantially reducing the workload. Further research should study the impact on the second human reader because they would only assess cases in which the AI prediction and first human reader disagree.
Affiliation(s)
- Annie Y Ng
- Kheiron Medical Technologies, London, UK
- Ben Glocker
- Kheiron Medical Technologies, London, UK
- Imperial College London, Department of Computing, London, UK
- Nisha Sharma
- Leeds Teaching Hospital NHS Trust, Department of Radiology, Leeds, UK
- Jonathan J James
- Nottingham University Hospitals NHS Trust, Nottingham Breast Institute, Nottingham, UK
- Éva Ambrózay
- MaMMa Egészségügyi Zrt., Breast Diagnostic Department, Kecskemét, Hungary

32
Jenkinson GP, Houghton N, van Zalk N, Waller J, Bello F, Tzemanaki A. Acceptability of Automated Robotic Clinical Breast Examination: Survey Study. J Particip Med 2023; 15:e42704. [PMID: 37010907 PMCID: PMC10131668 DOI: 10.2196/42704]
Abstract
BACKGROUND In the United Kingdom, women aged 50 to 70 years are invited to undergo mammography. However, 10% of invasive breast cancers occur in women aged ≤45 years, representing an unmet need for young women. Identifying a suitable screening modality for this population is challenging; mammography is insufficiently sensitive, whereas alternative diagnostic methods are invasive or costly. Robotic clinical breast examination (R-CBE)-using soft robotic technology and machine learning for fully automated clinical breast examination-is a theoretically promising screening modality with early prototypes under development. Understanding the perspectives of potential users and partnering with patients in the design process from the outset is essential for ensuring the patient-centered design and implementation of this technology. OBJECTIVE This study investigated the attitudes and perspectives of women regarding the use of soft robotics and intelligent systems in breast cancer screening. It aimed to determine whether such technology is theoretically acceptable to potential users and identify aspects of the technology and implementation system that are priorities for patients, allowing these to be integrated into technology design. METHODS This study used a mixed methods design. We conducted a 30-minute web-based survey with 155 women in the United Kingdom. The survey comprised an overview of the proposed concept followed by 5 open-ended questions and 17 closed questions. Respondents were recruited through a web-based survey linked to the Cancer Research United Kingdom patient involvement opportunities web page and distributed through research networks' mailing lists. Qualitative data generated via the open-ended questions were analyzed using thematic analysis. Quantitative data were analyzed using 2-sample Kolmogorov-Smirnov tests, 1-tailed t tests, and Pearson coefficients. 
RESULTS Most respondents (143/155, 92.3%) indicated that they would definitely or probably use R-CBE, with 82.6% (128/155) willing to be examined for up to 15 minutes. The most popular location for R-CBE was at a primary care setting, whereas the most accepted method for receiving the results was an on-screen display (with an option to print information) immediately after the examination. Thematic analysis of free-text responses identified the following 7 themes: women perceive that R-CBE has the potential to address limitations in current screening services; R-CBE may facilitate increased user choice and autonomy; ethical motivations for supporting R-CBE development; accuracy (and users' perceptions of accuracy) is essential; results management with clear communication is a priority for users; device usability is important; and integration with health services is key. CONCLUSIONS There is a high potential for the acceptance of R-CBE in its target user group and a high concordance between user expectations and technological feasibility. Early patient participation in the design process allowed the authors to identify key development priorities for ensuring that this new technology meets the needs of users. Ongoing patient and public involvement at each development stage is essential.
Affiliation(s)
- George P Jenkinson
- Bristol Robotics Laboratory, Department of Mechanical Engineering, University of Bristol, Bristol, United Kingdom
- Natasha Houghton
- Centre for Engagement and Simulation Science, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Nejra van Zalk
- Dyson School of Design Engineering, Imperial College London, London, United Kingdom
- Jo Waller
- Cancer Prevention Group, School of Cancer & Pharmaceutical Sciences, King's College London, London, United Kingdom
- Fernando Bello
- Centre for Engagement and Simulation Science, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Antonia Tzemanaki
- Bristol Robotics Laboratory, Department of Mechanical Engineering, University of Bristol, Bristol, United Kingdom

33
Bahakeem BH, Alobaidi SF, Alzahrani AS, Alhasawi R, Alzahrani A, Alqahtani W, Alhashmi Alamer L, Bin Laswad BM, Al Shanbari N. The General Population's Perspectives on Implementation of Artificial Intelligence in Radiology in the Western Region of Saudi Arabia. Cureus 2023; 15:e37391. [PMID: 37182053 PMCID: PMC10171828 DOI: 10.7759/cureus.37391]
Abstract
Background Artificial intelligence (AI) is a broad spectrum of computer-executed operations that mimic the human intellect. It is expected to improve healthcare practice in general and radiology in particular by enhancing image acquisition, image analysis, and processing speed. Despite the rapid development of AI systems, successful application in radiology requires analysis of social factors such as the public's perspectives toward the technology. Objectives The current study aims to investigate the general population's perspectives on AI implementation in radiology in the Western region of Saudi Arabia. Methods A cross-sectional study was conducted between November 2022 and July 2023 utilizing a self-administered online survey distributed via social media platforms. A convenience sampling technique was used to recruit the study participants. After obtaining Institutional Review Board approval, data were collected from citizens and residents of the western region of Saudi Arabia aged 18 years or older. Results A total of 1,024 participants were included in the present study, with a mean age of 29.6 ± 11.3 years. Of them, 49.9% (511) were men, and 50.1% (513) were women. The comprehensive mean score of the first four domains among our participants was 3.93 out of 5.00. Higher mean scores indicate a more negative view of AI in radiology, except for the fifth domain. Respondents had less trust in AI utilization in radiology, as evidenced by their overall distrust and accountability domain mean score of 3.52 out of 5. The majority of respondents agreed that it is essential to understand every step of the diagnostic process, and the mean score for the procedural knowledge domain was 4.34 out of 5. The mean score for the personal interaction domain was 4.31 out of 5, indicating that the participants agreed on the value of direct communication between the patient and the radiologist for discussing test results and asking questions.
Our data show that people think AI is more effective than human doctors in making accurate diagnoses and decreasing patient wait times, with an overall mean score of the efficiency domain of 3.56 out of 5. Finally, the fifth domain, "being informed," had a mean score of 3.91 out of 5. Conclusion The application of AI in radiologic assessment and interpretation is generally viewed negatively. Even though people think AI is more efficient and accurate at diagnosing than humans, they still think that computers will never be able to match a specialist doctor's years of training.
Affiliation(s)
- Basem H Bahakeem
- Department of Medical Imaging, College of Medicine, Umm Al-Qura University, Makkah, SAU
- Sultan F Alobaidi
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
- Amjad S Alzahrani
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
- Roudin Alhasawi
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
- Abdulkarem Alzahrani
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
- Wed Alqahtani
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
- Lujain Alhashmi Alamer
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
- Bassam M Bin Laswad
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
- Nasser Al Shanbari
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU

34
Hemphill S, Jackson K, Bradley S, Bhartia B. The implementation of artificial intelligence in radiology: a narrative review of patient perspectives. Future Healthc J 2023; 10:63-68. [PMID: 37786489 PMCID: PMC10538685 DOI: 10.7861/fhj.2022-0097] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/03/2023]
Abstract
Aim To synthesise research on the views of the public and patients on the use of artificial intelligence (AI) in radiology investigations. Methods A literature review with narrative synthesis of qualitative and quantitative studies reporting the views of the public and patients on the use of AI in radiology. Results Only seven studies related to patient and public views were retrieved, suggesting that this is an underexplored area of research. Two broad themes, of confidence in the capabilities of AI, and the accountability and transparency of AI, were identified. Conclusions Both optimism and concerns were expressed by participants. Transparency in the implementation of AI, scientific validation, clear regulation and accountability were expected. Combined human and AI interpretation of imaging was strongly favoured over AI acting autonomously. The review highlights the limited engagement of the public in the adoption of AI in a radiology setting. Successful implementation of AI in this field will require demonstrating not only adequate accuracy of the technology, but also its acceptance by patients.
35
Malignant melanoma diagnosis applying a machine learning method based on the combination of nonlinear and texture features. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
36
Derevianko A, Pizzoli SFM, Pesapane F, Rotili A, Monzani D, Grasso R, Cassano E, Pravettoni G. The Use of Artificial Intelligence (AI) in the Radiology Field: What Is the State of Doctor-Patient Communication in Cancer Diagnosis? Cancers (Basel) 2023; 15:cancers15020470. [PMID: 36672417 PMCID: PMC9856827 DOI: 10.3390/cancers15020470] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 01/04/2023] [Accepted: 01/10/2023] [Indexed: 01/14/2023] Open
Abstract
BACKGROUND In the past decade, interest in applying Artificial Intelligence (AI) in radiology to improve diagnostic procedures has increased. AI has potential benefits spanning all steps of the imaging chain, from the prescription of diagnostic tests to the communication of test reports. The use of AI in radiology also poses challenges for doctor-patient communication at the time of diagnosis. This systematic review focuses on the patient's role and the interpersonal skills between patients and physicians when AI is implemented in cancer diagnosis communication. METHODS A systematic search was conducted on PubMed, Embase, Medline, Scopus, and PsycNet from 1990 to 2021. The search terms were: ("artificial intelligence" or "intelligence machine") and "communication", "radiology", and "oncology diagnosis". The PRISMA guidelines were followed. RESULTS 517 records were identified, and 5 papers met the inclusion criteria and were analyzed. Most of the articles emphasized the success of the technological support of AI in radiology at the expense of patient trust in AI and patient-centered communication in cancer disease. Practical implications and future guidelines were discussed according to the results. CONCLUSIONS AI has proven to be beneficial in helping clinicians with diagnosis. Future research may improve patients' trust through adequate information about the advantageous use of AI and an increase in medical compliance with adequate training on doctor-patient communication of the diagnosis.
Affiliation(s)
- Alexandra Derevianko: Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Silvia Francesca Maria Pizzoli (correspondence; Tel.: +39-0294372099): Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Filippo Pesapane, Anna Rotili: Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20139 Milan, Italy
- Dario Monzani: Department of Psychology, Educational Science and Human Movement, University of Palermo, 90128 Palermo, Italy
- Roberto Grasso: Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Enrico Cassano: Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20139 Milan, Italy
- Gabriella Pravettoni: Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy

37
Goldberg JE, Reig B, Lewin AA, Gao Y, Heacock L, Heller SL, Moy L. New Horizons: Artificial Intelligence for Digital Breast Tomosynthesis. Radiographics 2023; 43:e220060. [DOI: 10.1148/rg.220060] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- Julia E. Goldberg, Beatriu Reig, Alana A. Lewin, Yiming Gao, Laura Heacock, Samantha L. Heller, Linda Moy: Department of Radiology, NYU Langone Health, 550 1st Ave, New York, NY 10016

38
Carter SM, Carolan L, Saint James Aquino Y, Frazer H, Rogers WA, Hall J, Degeling C, Braunack-Mayer A, Houssami N. Australian women's judgements about using artificial intelligence to read mammograms in breast cancer screening. Digit Health 2023; 9:20552076231191057. [PMID: 37559826 PMCID: PMC10408316 DOI: 10.1177/20552076231191057] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Accepted: 07/13/2023] [Indexed: 08/11/2023] Open
Abstract
Objective Mammographic screening for breast cancer is an early use case for artificial intelligence (AI) in healthcare. This is an active area of research, mostly focused on the development and evaluation of individual algorithms. A growing normative literature argues that AI systems should reflect human values, but it is unclear what this requires in specific AI implementation scenarios. Our objective was to understand women's values regarding the use of AI to read mammograms in breast cancer screening. Methods We ran eight online discussion groups with a total of 50 women, focused on their expectations and normative judgements regarding the use of AI in breast screening. Results Although women were positive about the potential of breast screening AI, they argued strongly that humans must remain as central actors in breast screening systems and consistently expressed high expectations of the performance of breast screening AI. Women expected clear lines of responsibility for decision-making, to be able to contest decisions, and for AI to perform equally well for all programme participants. Women often imagined both that AI might replace radiographers and that AI implementation might allow more women to be screened: screening programmes will need to communicate carefully about these issues. Conclusions To meet women's expectations, screening programmes should delay implementation until there is strong evidence that the use of AI systems improves screening performance, should ensure that human expertise and responsibility remain central in screening programmes, and should avoid using AI in ways that exacerbate inequities.
Affiliation(s)
- Stacy M Carter, Lucy Carolan, Yves Saint James Aquino: Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
- Helen Frazer: St Vincent's Hospital BreastScreen, BreastScreen Victoria, Fitzroy, Victoria, Australia
- Wendy A Rogers: Philosophy Department and School of Medicine, Macquarie University, Sydney, NSW, Australia
- Julie Hall, Chris Degeling, Annette Braunack-Mayer: Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
- Nehmat Houssami: Daffodil Centre, University of Sydney, Joint Venture with Cancer Council NSW, Sydney, NSW, Australia; Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia

39
Pesapane F, Rotili A, Valconi E, Agazzi GM, Montesano M, Penco S, Nicosia L, Bozzini A, Meneghetti L, Latronico A, Pizzamiglio M, Rossero E, Gaeta A, Raimondi S, Pizzoli SFM, Grasso R, Carrafiello G, Pravettoni G, Cassano E. Women's perceptions and attitudes to the use of AI in breast cancer screening: a survey in a cancer referral centre. Br J Radiol 2023; 96:20220569. [PMID: 36314388 PMCID: PMC11864346 DOI: 10.1259/bjr.20220569] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 08/19/2022] [Accepted: 09/11/2022] [Indexed: 11/16/2022] Open
Abstract
OBJECTIVE Although breast cancer screening can benefit from Artificial Intelligence (AI), it is still unknown whether, to what extent, or under which conditions the use of AI will be accepted by the general population. The aim of our study is to evaluate what females who are eligible for breast cancer screening know about AI and how they perceive such innovation. METHODS We used a prospective survey consisting of an 11-item multiple-choice questionnaire, evaluating statistical associations with the chi-square test or Fisher's exact test. Multinomial logistic regression was performed on items with more than two response categories. Odds ratios (OR) with 95% CIs were computed to estimate the probability of a specific response according to patients' characteristics. RESULTS In the 800 analysed questionnaires, 51% of respondents reported having knowledge of AI. Of these, 88% expressed a positive opinion about its use in medicine. Non-Italian respondents were more likely than Italian respondents to believe they had a deep awareness of AI (OR = 1.91; 95% CI [1.10-3.33]). A higher education level was associated with better opinions on the use of AI in medicine (OR = 4.69; 95% CI [1.36-16.12]). According to 94% of respondents, radiologists should always produce their own report on mammograms, whilst 77% agreed that AI should be used as a second reader. Most respondents (52%) considered that both the software developer and the radiologist should be held accountable for AI errors. CONCLUSIONS Most of the females undergoing screening in our Institute approve of the introduction of AI, although only as a support to the radiologist and not as a substitute. Yet accountability in case of AI errors remains unresolved. ADVANCES IN KNOWLEDGE This survey may be considered a pilot study for the development of large-scale studies to understand females' demands and concerns about AI applications in breast cancer screening.
Affiliation(s)
- Filippo Pesapane, Anna Rotili: Breast Imaging Division, IEO European Institute of Oncology IRCCS, Milan, Italy
- Elena Valconi, Giorgio Maria Agazzi: Diagnostic and Interventional Radiology Unit, Department of Diagnostic and Therapeutic Advanced Technology, Azienda Socio Sanitaria Territoriale Santi Paolo and Carlo Hospital, Milan, Italy
- Marta Montesano, Silvia Penco, Luca Nicosia, Anna Bozzini, Lorenza Meneghetti, Antuono Latronico, Maria Pizzamiglio: Breast Imaging Division, IEO European Institute of Oncology IRCCS, Milan, Italy
- Eleonora Rossero: Laboratorio dei Diritti Fondamentali, Collegio Carlo Alberto, Torino ER, Turin, Italy
- Aurora Gaeta, Sara Raimondi: Department of Experimental Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
- Roberto Grasso: Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, Milan, Italy
- Gianpaolo Carrafiello: Department of Radiology and Department of Health Sciences, Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico and University of Milano, Milan, Italy
- Enrico Cassano: Breast Imaging Division, IEO European Institute of Oncology IRCCS, Milan, Italy

40
Bahl M. Artificial Intelligence in Clinical Practice: Implementation Considerations and Barriers. J Breast Imaging 2022; 4:632-639. [PMID: 36530476 PMCID: PMC9741727 DOI: 10.1093/jbi/wbac065] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2022] [Indexed: 09/06/2023]
Abstract
The rapid growth of artificial intelligence (AI) in radiology has led to Food and Drug Administration clearance of more than 20 AI algorithms for breast imaging. The steps involved in the clinical implementation of an AI product include identifying all stakeholders, selecting the appropriate product to purchase, evaluating it with a local data set, integrating it into the workflow, and monitoring its performance over time. Despite the potential benefits of improved quality and increased efficiency with AI, several barriers, such as high costs and liability concerns, may limit its widespread implementation. This article lists currently available AI products for breast imaging, describes the key elements of clinical implementation, and discusses barriers to clinical implementation.
Affiliation(s)
- Manisha Bahl: Massachusetts General Hospital, Department of Radiology, Boston, MA, USA

41
Nichol BAB, Hurlbert AC, Read JCA. Predicting attitudes towards screening for neurodegenerative diseases using OCT and artificial intelligence: Findings from a literature review. J Public Health Res 2022; 11:22799036221127627. [PMID: 36310821 PMCID: PMC9597051 DOI: 10.1177/22799036221127627] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Accepted: 09/02/2022] [Indexed: 11/25/2022] Open
Abstract
Recent developments in artificial intelligence (AI) and machine learning raise the possibility of screening and early diagnosis for neurodegenerative diseases, using 3D scans of the retina. The eventual value of such screening will depend not only on scientific metrics such as specificity and sensitivity but, critically, also on public attitudes and uptake. Differential screening rates for various screening programmes in England indicate that multiple factors influence uptake. In this narrative literature review, some of these potential factors are explored in relation to predicting uptake of an early screening tool for neurodegenerative diseases using AI. These include: awareness of the disease, perceived risk, social influence, the use of AI, previous screening experience, socioeconomic status, health literacy, uncontrollable mortality risk, and demographic factors. The review finds the strongest and most consistent predictors to be ethnicity, social influence, the use of AI, and previous screening experience. Furthermore, it is likely that factors also interact to predict the uptake of such a tool. However, further experimental work is needed both to validate these predictions and explore interactions between the significant predictors.
Affiliation(s)
- Beth AB Nichol: Department of Social Work, Education, and Community Wellbeing, Northumbria University, Newcastle upon Tyne, UK
- Anya C Hurlbert, Jenny CA Read: Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK

42
Hendrix N, Lowry KP, Elmore JG, Lotter W, Sorensen G, Hsu W, Liao GJ, Parsian S, Kolb S, Naeim A, Lee CI. Radiologist Preferences for Artificial Intelligence-Based Decision Support During Screening Mammography Interpretation. J Am Coll Radiol 2022; 19:1098-1110. [PMID: 35970474 PMCID: PMC9840464 DOI: 10.1016/j.jacr.2022.06.019] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 06/03/2022] [Accepted: 06/07/2022] [Indexed: 01/17/2023]
Abstract
BACKGROUND Artificial intelligence (AI) may improve cancer detection and risk prediction during mammography screening, but radiologists' preferences regarding its characteristics and implementation are unknown. PURPOSE To quantify how different attributes of AI-based cancer detection and risk prediction tools affect radiologists' intentions to use AI during screening mammography interpretation. MATERIALS AND METHODS Through qualitative interviews with radiologists, we identified five primary attributes for AI-based breast cancer detection and four for breast cancer risk prediction. We developed a discrete choice experiment based on these attributes and invited 150 US-based radiologists to participate. Each respondent made eight choices for each tool between three alternatives: two hypothetical AI-based tools versus screening without AI. We analyzed samplewide preferences using random parameters logit models and identified subgroups with latent class models. RESULTS Respondents (n = 66; 44% response rate) were from six diverse practice settings across eight states. Radiologists were more interested in AI for cancer detection when sensitivity and specificity were balanced (94% sensitivity with <25% of examinations marked) and AI markup appeared at the end of the hanging protocol after radiologists complete their independent review. For AI-based risk prediction, radiologists preferred AI models using both mammography images and clinical data. Overall, 46% to 60% intended to adopt any of the AI tools presented in the study; 26% to 33% approached AI enthusiastically but were deterred if the features did not align with their preferences. CONCLUSION Although most radiologists want to use AI-based decision support, short-term uptake may be maximized by implementing tools that meet the preferences of dissuadable users.
Affiliation(s)
- Nathaniel Hendrix: Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Kathryn P Lowry: Department of Radiology, University of Washington, Seattle Cancer Care Alliance, Seattle, Washington
- Joann G Elmore: Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, California
- William Lotter: Chief Technology Officer, DeepHealth Inc, RadNet AI Solutions, Cambridge, Massachusetts
- Gregory Sorensen: Chief Technology Officer, DeepHealth Inc, RadNet AI Solutions, Cambridge, Massachusetts
- William Hsu: Department of Radiological Sciences, Data Integration, Architecture, and Analytics Group, University of California, Los Angeles, California; American Medical Informatics Association: Member, Governance Committee; RSNA: Deputy Editor, Radiology: Artificial Intelligence
- Geraldine J Liao: Department of Radiology, Virginia Mason Medical Center, Seattle, Washington
- Sana Parsian: Department of Radiology, University of Washington, Seattle Cancer Care Alliance, Seattle, Washington; Department of Radiology, Kaiser Permanente Washington, Seattle, Washington
- Suzanne Kolb: Department of Radiology, University of Washington, Seattle Cancer Care Alliance, Seattle, Washington
- Arash Naeim: Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, California; Chief Medical Officer for Clinical Research, UCLA Health; Codirector: Clinical and Translational Science Institute and Center for SMART Health; Associate Director: Institute for Precision Health, Jonsson Comprehensive Cancer Center, Garrick Institute for Risk Sciences
- Christoph I Lee: Department of Radiology, University of Washington, Seattle Cancer Care Alliance, Seattle, Washington; Department of Health Services, School of Public Health, University of Washington, Seattle, Washington; Deputy Editor, JACR

43
Lamb LR, Lehman CD, Gastounioti A, Conant EF, Bahl M. Artificial Intelligence (AI) for Screening Mammography, From the AJR Special Series on AI Applications. AJR Am J Roentgenol 2022; 219:369-380. [PMID: 35018795 DOI: 10.2214/ajr.21.27071] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Artificial intelligence (AI) applications for screening mammography are being marketed for clinical use in the interpretative domains of lesion detection and diagnosis, triage, and breast density assessment and in the noninterpretive domains of breast cancer risk assessment, image quality control, image acquisition, and dose reduction. Evidence in support of these nascent applications, particularly for lesion detection and diagnosis, is largely based on multireader studies with cancer-enriched datasets rather than rigorous clinical evaluation aligned with the application's specific intended clinical use. This article reviews commercial AI algorithms for screening mammography that are currently available for clinical practice, their use, and evidence supporting their performance. Clinical implementation considerations, such as workflow integration, governance, and ethical issues, are also described. In addition, the future of AI for screening mammography is discussed, including the development of interpretive and noninterpretive AI applications and strategic priorities for research and development.
Affiliation(s)
- Leslie R Lamb, Constance D Lehman: Department of Radiology, Massachusetts General Hospital, 55 Fruit St, WAC 240, Boston, MA 02114
- Aimilia Gastounioti: Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA (present affiliation: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO)
- Emily F Conant: Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA
- Manisha Bahl: Department of Radiology, Massachusetts General Hospital, 55 Fruit St, WAC 240, Boston, MA 02114

44
Impact of artificial intelligence in breast cancer screening with mammography. Breast Cancer 2022; 29:967-977. [PMID: 35763243 PMCID: PMC9587927 DOI: 10.1007/s12282-022-01375-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Accepted: 05/29/2022] [Indexed: 11/21/2022]
Abstract
Objectives To demonstrate that radiologists, with the help of artificial intelligence (AI), are able to better classify screening mammograms into the correct breast imaging reporting and data system (BI-RADS) category, and as a secondary objective, to explore the impact of AI on cancer detection and mammogram interpretation time. Methods A multi-reader, multi-case study with cross-over design was performed, including 314 mammograms. Twelve radiologists interpreted the examinations in two sessions, with and without AI support, separated by a 4-week wash-out period. For each breast of each mammogram, they had to mark the most suspicious lesion (if any) and assign it a forced BI-RADS category and a level of suspicion or “continuous BI-RADS 100”.
Cohen’s kappa coefficient, evaluating inter-observer agreement for the BI-RADS category per breast, and the area under the receiver operating characteristic curve (AUC) were used as metrics and analyzed. Results On average, the quadratic kappa coefficient increased significantly when using AI for all readers [κ = 0.549, 95% CI (0.528–0.571) without AI and κ = 0.626, 95% CI (0.607–0.6455) with AI]. AUC was significantly improved when using AI (0.74 vs 0.77, p = 0.004). Reading time was not significantly affected for all readers (106 s without AI vs 102 s with AI; p = 0.754). Conclusions When using AI, radiologists were able to better assign mammograms the correct BI-RADS category without slowing down interpretation time.
45
Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Comput Human Behav 2022. [DOI: 10.1016/j.chb.2022.107296] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
46
Grimm LJ, Plichta JK, Hwang ES. More Than Incremental: Harnessing Machine Learning to Predict Breast Cancer Risk. J Clin Oncol 2022; 40:1713-1717. [PMID: 35245093 DOI: 10.1200/jco.21.02733] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Affiliation(s)
- Lars J Grimm: Department of Radiology, Duke University, Durham, NC

47
Yakar D, Ongena YP, Kwee TC, Haan M. Do People Favor Artificial Intelligence Over Physicians? A Survey Among the General Population and Their View on Artificial Intelligence in Medicine. Value Health 2022; 25:374-381. [PMID: 35227448 DOI: 10.1016/j.jval.2021.09.004] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Revised: 08/24/2021] [Accepted: 09/06/2021] [Indexed: 06/14/2023]
Abstract
OBJECTIVES To investigate the general population's view on artificial intelligence (AI) in medicine, with specific emphasis on 3 areas that have experienced major progress in AI research in the past few years, namely radiology, robotic surgery, and dermatology. METHODS For this prospective study, the April 2020 Online Longitudinal Internet Studies for the Social Sciences Panel Wave was used. Of the 3117 Longitudinal Internet Studies for the Social Sciences panel members contacted, 2411 completed the full questionnaire (77.4% response rate); after combining data from earlier waves, the final sample size was 1909. A total of 3 scales focusing on trust in the implementation of AI in radiology, robotic surgery, and dermatology were used. Repeated-measures analysis of variance and multivariate analysis of variance were used for comparison. RESULTS The overall means show that respondents have slightly more trust in AI in dermatology than in radiology and surgery. The means show that higher-educated males, employed or students, of Western background, and those not admitted to a hospital in the past 12 months have more trust in AI. Trust in AI in radiology, robotic surgery, and dermatology was positively associated with belief in the efficiency of AI, and these specific domains were negatively associated with distrust and accountability in AI in general. CONCLUSIONS Contrary to the overall optimistic views presented in the media, the general population is rather distrustful of AI in medicine. The level of trust depends on which medical area is under scrutiny. Certain demographic characteristics, and a generally positive view of AI and its efficiency, are significantly associated with higher levels of trust in AI.
Collapse
Affiliation(s)
- Derya Yakar
- Department of Radiology, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
- Yfke P Ongena
- Center of Language and Cognition, University of Groningen, Groningen, The Netherlands
- Thomas C Kwee
- Department of Radiology, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Marieke Haan
- Department of Sociology, University of Groningen, Groningen, The Netherlands
48
Abstract
Built-in decision thresholds for AI diagnostics are ethically problematic, as patients may differ in their attitudes about the risk of false-positive and false-negative results, which will require that clinicians assess patient values.
49
Ram S, Campbell T, Lourenco AP. Online or Offline: Does It Matter? A Review of Existing Interpretation Approaches and Their Effect on Screening Mammography Metrics, Patient Satisfaction, and Cost. J Breast Imaging 2022; 4:3-9. [PMID: 38422414 DOI: 10.1093/jbi/wbab086] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Indexed: 03/02/2024]
Abstract
The ideal practice routine for screening mammography would optimize performance metrics and minimize costs, while also maximizing patient satisfaction. The main approaches to screening mammography interpretation include batch offline, non-batch offline, interrupted online, and uninterrupted online reading, each of which has its own advantages and drawbacks. This article reviews the current literature on approaches to screening mammography interpretation, potential effects of newer technologies, and promising artificial intelligence resources that could improve workflow efficiency in the future.
Affiliation(s)
- Shruthi Ram
- Alpert Medical School of Brown University and Rhode Island Hospital, Department of Diagnostic Imaging, Providence, RI, USA
- Tyler Campbell
- Alpert Medical School of Brown University and Rhode Island Hospital, Department of Diagnostic Imaging, Providence, RI, USA
- Ana P Lourenco
- Alpert Medical School of Brown University and Rhode Island Hospital, Department of Diagnostic Imaging, Providence, RI, USA
50
Ploug T, Sundby A, Moeslund TB, Holm S. Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey. J Med Internet Res 2021; 23:e26611. [PMID: 34898454 PMCID: PMC8713089 DOI: 10.2196/26611] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Revised: 05/31/2021] [Accepted: 11/11/2021] [Indexed: 01/04/2023] Open
Abstract
BACKGROUND Certain types of artificial intelligence (AI), namely deep learning models, can outperform health care professionals in particular domains. Such models hold considerable promise for improved diagnostics, treatment, and prevention, as well as more cost-efficient health care. They are, however, opaque in the sense that their exact reasoning cannot be fully explicated. Different stakeholders have emphasized the importance of the transparency/explainability of AI decision making, yet transparency/explainability may come at the cost of performance. There is a need for a public policy regulating the use of AI in health care that balances the societal interest in high performance against the interest in transparency/explainability, and such a policy should consider the wider public's preferences regarding these features of AI. OBJECTIVE This study elicited the public's preferences for the performance and explainability of AI decision making in health care and determined whether these preferences depend on respondent characteristics, including trust in health and technology and fears and hopes regarding AI. METHODS We conducted a choice-based conjoint survey of public preferences for attributes of AI decision making in health care in a representative sample of the adult Danish population. Initial focus group interviews yielded 6 attributes that shaped respondents' views on the use of AI decision support in health care: (1) type of AI decision, (2) level of explanation, (3) performance/accuracy, (4) responsibility for the final decision, (5) possibility of discrimination, and (6) severity of the disease to which the AI is applied. In total, 100 unique choice sets were developed using a fractional factorial design. In a 12-task survey, respondents indicated their preferences for AI system use in hospitals across 3 different scenarios. RESULTS Of the 1678 potential respondents, 1027 (61.2%) participated.
Respondents considered the physician retaining final responsibility for treatment decisions the most important attribute, accounting for 46.8% of the total attribute weight, followed by explainability of the decision (27.3%) and whether the system had been tested for discrimination (14.8%). Other factors, such as gender, age, level of education, rural versus urban residence, trust in health and technology, and fears and hopes regarding AI, did not play a significant role in most cases. CONCLUSIONS The 3 factors most important to the public are, in descending order of importance, (1) that physicians remain ultimately responsible for diagnostics and treatment planning, (2) that the AI decision support is explainable, and (3) that the AI system has been tested for discrimination. Public policy on AI system use in health care should give priority to AI systems with these features and ensure that patients are provided with information about their use.
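The attribute weights reported in this abstract (46.8%, 27.3%, 14.8%) are relative-importance scores of the kind that choice-based conjoint analysis derives from estimated part-worth utilities. As an illustrative sketch only (the part-worth values below are hypothetical, not taken from the study), each attribute's importance is commonly computed as its utility range divided by the sum of ranges across all attributes:

```python
# Hypothetical part-worth utilities for three of the study's attributes.
# In a real conjoint analysis these would be estimated from choice data.
part_worths = {
    "final responsibility": {"physician": 1.2, "AI system": -1.2},
    "explainability": {"full explanation": 0.7, "no explanation": -0.7},
    "discrimination testing": {"tested": 0.4, "not tested": -0.4},
}

def importance(pw):
    """Relative importance: each attribute's utility range as a share
    (in %) of the summed ranges over all attributes."""
    ranges = {attr: max(levels.values()) - min(levels.values())
              for attr, levels in pw.items()}
    total = sum(ranges.values())
    return {attr: 100 * r / total for attr, r in ranges.items()}

for attr, pct in importance(part_worths).items():
    print(f"{attr}: {pct:.1f}%")
```

With these hypothetical utilities, "final responsibility" has the widest range and so dominates the importance scores, mirroring the ordering the study reports.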
Affiliation(s)
- Thomas Ploug
- Department of Communication and Psychology, Aalborg University, Copenhagen, Denmark
- Anna Sundby
- Department of Communication and Psychology, Aalborg University, Copenhagen, Denmark
- Thomas B Moeslund
- Visual Analysis and Perception Lab, Aalborg University, Aalborg, Denmark
- Søren Holm
- Centre for Social Ethics and Policy, University of Manchester, Manchester, United Kingdom