1
Wilkinson LS, Dunbar JK, Lip G. Clinical Integration of Artificial Intelligence for Breast Imaging. Radiol Clin North Am 2024; 62:703-716. [PMID: 38777544 DOI: 10.1016/j.rcl.2023.12.006]
Abstract
This article describes an approach to planning and implementing artificial intelligence products in a breast screening service. It highlights the importance of an in-depth understanding of the end-to-end workflow and effective project planning by a multidisciplinary team. It discusses the need for monitoring to ensure that performance is stable and meets expectations, as well as focusing on the potential for inadvertently generating inequality. New cross-discipline roles and expertise will be needed to enhance service delivery.
Affiliation(s)
- Louise S Wilkinson
- Oxford Breast Imaging Centre, Churchill Hospital, Old Road, Headington, Oxford OX3 7LE, UK.
- J Kevin Dunbar
- Regional Head of Screening Quality Assurance Service (SQAS) - South, NHS England, England, UK
- Gerald Lip
- North East Scotland Breast Screening Service, Aberdeen Royal Infirmary, Foresterhill Road, Aberdeen AB25 2XF, UK
2
Frost EK, Bosward R, Aquino YSJ, Braunack-Mayer A, Carter SM. Facilitating public involvement in research about healthcare AI: A scoping review of empirical methods. Int J Med Inform 2024; 186:105417. [PMID: 38564959 DOI: 10.1016/j.ijmedinf.2024.105417]
Abstract
OBJECTIVE With the recent increase in research into public views on healthcare artificial intelligence (HCAI), the objective of this review is to examine the methods of empirical studies on public views on HCAI. We map how studies provided participants with information about HCAI, and we examine the extent to which studies framed publics as active contributors to HCAI governance. MATERIALS AND METHODS We searched 5 academic databases and Google Advanced for empirical studies investigating public views on HCAI. We extracted information including study aims, research instruments, and recommendations. RESULTS Sixty-two studies were included. Most were quantitative (N = 42). Most (N = 47) reported providing participants with background information about HCAI. Despite this, studies often reported participants' lack of prior knowledge about HCAI as a limitation. Over three quarters (N = 48) of the studies made recommendations that envisaged public views being used to guide governance of AI. DISCUSSION Provision of background information is an important component of facilitating research with publics on HCAI. The high proportion of studies reporting participants' lack of knowledge about HCAI as a limitation reflects the need for more guidance on how information should be presented. A minority of studies adopted technocratic positions that construed publics as passive beneficiaries of AI, rather than as active stakeholders in HCAI design and implementation. CONCLUSION This review draws attention to how public roles in HCAI governance are constructed in empirical studies. To facilitate active participation, we recommend that research with publics on HCAI consider methodological designs that expose participants to diverse information sources.
Affiliation(s)
- Emma Kellie Frost
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Rebecca Bosward
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
3
Holen ÅS, Martiniussen MA, Bergan MB, Moshina N, Hovda T, Hofvind S. Women's attitudes and perspectives on the use of artificial intelligence in the assessment of screening mammograms. Eur J Radiol 2024; 175:111431. [PMID: 38520804 DOI: 10.1016/j.ejrad.2024.111431]
Abstract
PURPOSE To investigate attitudes and perspectives on the use of artificial intelligence (AI) in the assessment of screening mammograms among women invited to BreastScreen Norway. METHOD An anonymous survey was sent to all women invited to BreastScreen Norway during the study period, October 10, 2022, to December 25, 2022 (n = 84,543). Questions were answered on a 10-point Likert scale and as multiple-choice, addressing knowledge of AI, willingness to participate in AI studies, information needs, confidence in AI results and AI-assisted reading strategies, and thoughts on concerns and benefits of AI in mammography screening. Analyses were performed using χ2 and logistic regression tests. RESULTS General knowledge of AI was reported as extensive by 11.0% of the 8,355 respondents. Respondents were willing to participate in studies using AI either for decision support (64.0%) or triaging (54.9%). Being informed about use of AI-assisted image assessment was considered important, and a reading strategy of AI in combination with one radiologist was preferred. Having extensive knowledge of AI was associated with willingness to participate in AI studies (decision support: odds ratio [OR]: 5.1, 95% confidence interval [CI]: 4.1-6.4; triaging: OR: 3.4, 95% CI: 2.8-4.0) and trust in AI's independent assessment (OR: 6.8, 95% CI: 5.7-8.3). CONCLUSIONS Women invited to BreastScreen Norway had a positive attitude towards the use of AI in image assessment, given that human readers are still involved. Targeted information and increased public knowledge of AI could help achieve high participation in AI studies and successful implementation of AI in mammography screening.
Affiliation(s)
- Åsne Sørlien Holen
- Cancer Registry of Norway, Norwegian Institute of Public Health, Oslo, Norway.
- Marit Almenning Martiniussen
- Department of Radiology, Østfold Hospital Trust, Kalnes, Norway; University of Oslo, Institute of Clinical Medicine, Oslo, Norway.
- Marie Burns Bergan
- Cancer Registry of Norway, Norwegian Institute of Public Health, Oslo, Norway.
- Nataliia Moshina
- Cancer Registry of Norway, Norwegian Institute of Public Health, Oslo, Norway.
- Tone Hovda
- Department of Radiology, Vestre Viken Hospital Trust, Drammen, Norway.
- Solveig Hofvind
- Cancer Registry of Norway, Norwegian Institute of Public Health, Oslo, Norway; Department of Health and Care Sciences, UiT, The Arctic University of Norway, Tromsø, Norway.
4
Bruce C, Gatzoulis MA, Brida M. Digital technology and artificial intelligence for improving congenital heart disease care: alea iacta est. Eur Heart J 2024; 45:1386-1389. [PMID: 38327005 DOI: 10.1093/eurheartj/ehad898]
Affiliation(s)
- Charo Bruce
- Adult Congenital Heart Centre and National Centre for Pulmonary Hypertension, Royal Brompton and Harefield Hospitals, Guy's & St Thomas' NHS Trust, Sydney Street, London SW3 6NP, UK
- Michael A Gatzoulis
- Adult Congenital Heart Centre and National Centre for Pulmonary Hypertension, Royal Brompton and Harefield Hospitals, Guy's & St Thomas' NHS Trust, Sydney Street, London SW3 6NP, UK
- National Heart & Lung Institute, Imperial College, South Kensington Campus, London SW7 2AZ, UK
- Margarita Brida
- Adult Congenital Heart Centre and National Centre for Pulmonary Hypertension, Royal Brompton and Harefield Hospitals, Guy's & St Thomas' NHS Trust, Sydney Street, London SW3 6NP, UK
- National Heart & Lung Institute, Imperial College, South Kensington Campus, London SW7 2AZ, UK
- Medical Faculty University of Rijeka, Ul. Braće Branchetta 20/1, 51000 Rijeka, Croatia
5
Lin S, Ma Y, Jiang Y, Li W, Peng Y, Yu T, Xu Y, Zhu J, Lu L, Zou H. Service Quality and Residents' Preferences for Facilitated Self-Service Fundus Disease Screening: Cross-Sectional Study. J Med Internet Res 2024; 26:e45545. [PMID: 38630535 PMCID: PMC11063888 DOI: 10.2196/45545]
Abstract
BACKGROUND Fundus photography is the most important examination in eye disease screening. A facilitated self-service eye screening pattern based on the fully automatic fundus camera was developed in 2022 in Shanghai, China; it may help solve the problem of insufficient human resources in primary health care institutions. However, the service quality and residents' preference for this new pattern are unclear. OBJECTIVE This study aimed to compare the service quality and residents' preferences between facilitated self-service eye screening and traditional manual screening and to explore the relationships between the screening service's quality and residents' preferences. METHODS We conducted a cross-sectional study in Shanghai, China. Residents who underwent facilitated self-service fundus disease screening at one of the screening sites were assigned to the exposure group; those who were screened with a traditional fundus camera operated by an optometrist at an adjacent site comprised the control group. The primary outcome was the screening service quality, including effectiveness (image quality and screening efficiency), physiological discomfort, safety, convenience, and trustworthiness. The secondary outcome was the participants' preferences. Differences in service quality and the participants' preferences between the 2 groups were compared using chi-square tests separately. Subgroup analyses for exploring the relationships between the screening service's quality and residents' preference were conducted using generalized logit models. RESULTS A total of 358 residents enrolled; among them, 176 (49.16%) were included in the exposure group and the remaining 182 (50.84%) in the control group. Residents' basic characteristics were balanced between the 2 groups. 
There was no significant difference in service quality between the 2 groups (image quality pass rate: P=.79; average screening time: P=.57; no physiological discomfort rate: P=.92; safety rate: P=.78; convenience rate: P=.95; trustworthiness rate: P=.20). However, the proportion of participants who were willing to use the same technology for their next screening was significantly lower in the exposure group than in the control group (P<.001). Subgroup analyses suggest that distrust in the facilitated self-service eye screening might increase the probability of refusal to undergo screening (P=.02). CONCLUSIONS This study confirms that the facilitated self-service fundus disease screening pattern could achieve good service quality. However, it was difficult to reverse residents' preferences for manual screening in a short period, especially when the original manual service was already excellent. Therefore, the digital transformation of health care must proceed cautiously. We suggest that attention be paid to residents' individual needs. More efficient man-machine collaboration and personalized health management solutions based on large language models are both needed.
Affiliation(s)
- Senlin Lin
- Shanghai Eye Diseases Prevention & Treatment Center / Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Yingyan Ma
- Shanghai Eye Diseases Prevention & Treatment Center / Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Shanghai General Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yanwei Jiang
- Shanghai Hongkou Center for Disease Control and Prevention, Shanghai, China
- Wenwen Li
- School of Management, Fudan University, Shanghai, China
- Yajun Peng
- Shanghai Eye Diseases Prevention & Treatment Center / Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Tao Yu
- Shanghai Eye Diseases Prevention & Treatment Center / Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Yi Xu
- Shanghai Eye Diseases Prevention & Treatment Center / Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Jianfeng Zhu
- Shanghai Eye Diseases Prevention & Treatment Center / Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Lina Lu
- Shanghai Eye Diseases Prevention & Treatment Center / Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Haidong Zou
- Shanghai Eye Diseases Prevention & Treatment Center / Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Shanghai General Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
6
Hölgyesi Á, Zrubka Z, Gulácsi L, Baji P, Haidegger T, Kozlovszky M, Weszl M, Kovács L, Péntek M. Robot-assisted surgery and artificial intelligence-based tumour diagnostics: social preferences with a representative cross-sectional survey. BMC Med Inform Decis Mak 2024; 24:87. [PMID: 38553703 PMCID: PMC10981282 DOI: 10.1186/s12911-024-02470-x]
Abstract
BACKGROUND The aim of this study was to assess social preferences for two different advanced digital health technologies and investigate the contextual dependency of the preferences. METHODS A cross-sectional online survey was performed among the general population of Hungary aged 40 years and over. Participants were asked to imagine that they needed a total hip replacement surgery and to indicate whether they would prefer a traditional or a robot-assisted (RA) hip surgery. To better understand preferences for the chosen method, the willingness to pay (WTP) method was used. The same assessment was conducted for preferences between a radiologist's and AI-based image analysis in establishing the radiological diagnosis of a suspected tumour. Respondents' electronic health literacy was assessed with the eHEALS questionnaire. Descriptive methods were used to assess sample characteristics and differences between subgroups. Associations were investigated with correlation analysis and multiple linear regressions. RESULTS Altogether, 1400 individuals (53.7% female) with a mean age of 58.3 (SD = 11.1) years filled in the survey. RA hip surgery was chosen by 762 (54.4%) respondents, but only 470 (33.6%) chose AI-based medical image evaluation. Those who opted for the digital technology had significantly higher educational levels and electronic health literacy (eHEALS). The majority of respondents were willing to pay to secure their preferred surgical (surgeon 67.2%, robot-assisted: 68.8%) and image assessment (radiologist: 70.9%; AI: 77.4%) methods, reporting similar average amounts in the first (p = 0.677), and a significantly higher average amount for radiologist vs. AI in the second task (p = 0.001). The regression showed a significant association between WTP and income, and in the hip surgery task, it also revealed an association with the type of intervention chosen. 
CONCLUSIONS Individuals with higher education levels seem more accepting of advanced digital medical technologies. However, the greater openness for RA surgery than for AI image assessment highlights that social preferences may depend considerably on the medical situation and the type of advanced digital technology. WTP results suggest rather firm preferences in the great majority of cases. Determinants of preferences and real-world choices of affected patients should be further investigated in future studies.
Affiliation(s)
- Áron Hölgyesi
- Doctoral School, Semmelweis University, Budapest, Hungary.
- Health Economics Research Center, University Research and Innovation Center (EKIK), Óbuda University, Budapest, Hungary.
- Zsombor Zrubka
- Health Economics Research Center, University Research and Innovation Center (EKIK), Óbuda University, Budapest, Hungary
- László Gulácsi
- Health Economics Research Center, University Research and Innovation Center (EKIK), Óbuda University, Budapest, Hungary
- Petra Baji
- Musculoskeletal Research Unit, University of Bristol, Bristol, UK
- Tamás Haidegger
- Antal Bejczy Center for Intelligent Robotics, University Research and Innovation Center (EKIK), Óbuda University, Budapest, Hungary
- Austrian Center for Medical Innovation and Technology (ACMIT), Wiener Neustadt, Austria
- Miklós Kozlovszky
- BioTech Research Center, University Research and Innovation Center (EKIK), Óbuda University, Budapest, Hungary
- John von Neumann Faculty of Informatics, Óbuda University, Budapest, Hungary
- Miklós Weszl
- Department of Translational Medicine, Semmelweis University, Budapest, Hungary
- Levente Kovács
- Physiological Controls Research Center, University Research and Innovation Center (EKIK), Óbuda University, Budapest, Hungary
- Márta Péntek
- Health Economics Research Center, University Research and Innovation Center (EKIK), Óbuda University, Budapest, Hungary
7
Carmichael J, Costanza E, Blandford A, Struyven R, Keane PA, Balaskas K. Diagnostic decisions of specialist optometrists exposed to ambiguous deep-learning outputs. Sci Rep 2024; 14:6775. [PMID: 38514657 PMCID: PMC10958016 DOI: 10.1038/s41598-024-55410-0]
Abstract
Artificial intelligence (AI) has great potential in ophthalmology. We investigated how ambiguous outputs from an AI diagnostic support system (AI-DSS) affected diagnostic responses from optometrists when assessing cases of suspected retinal disease. Thirty optometrists (15 more experienced, 15 less) assessed 30 clinical cases. For ten, participants saw an optical coherence tomography (OCT) scan, basic clinical information and retinal photography ('no AI'). For another ten, they were also given AI-generated OCT-based probabilistic diagnoses ('AI diagnosis'); and for ten, both AI diagnosis and AI-generated OCT segmentations ('AI diagnosis + segmentation') were provided. Cases were matched across the three types of presentation and were selected to include 40% ambiguous and 20% incorrect AI outputs. Optometrist diagnostic agreement with the predefined reference standard was lowest for 'AI diagnosis + segmentation' (204/300, 68%) compared to 'AI diagnosis' (224/300, 75%, p = 0.010) and 'no AI' (242/300, 81%, p < 0.001). Agreement with AI diagnoses consistent with the reference standard decreased (174/210 vs 199/210, p = 0.003), but participants trusted the AI more (p = 0.029) with segmentations. Practitioner experience did not affect diagnostic responses (p = 0.24). More experienced participants were more confident (p = 0.012) and trusted the AI less (p = 0.038). Our findings also highlight issues around reference standard definition.
Affiliation(s)
- Josie Carmichael
- University College London Interaction Centre (UCLIC), UCL, London, UK.
- Institute of Ophthalmology, NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL, London, UK.
- Enrico Costanza
- University College London Interaction Centre (UCLIC), UCL, London, UK
- Ann Blandford
- University College London Interaction Centre (UCLIC), UCL, London, UK
- Robbert Struyven
- Institute of Ophthalmology, NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL, London, UK
- Pearse A Keane
- Institute of Ophthalmology, NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL, London, UK
- Konstantinos Balaskas
- Institute of Ophthalmology, NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL, London, UK
8
Kovoor JG, Bacchi S, Sharma P, Sharma S, Kumawat M, Stretton B, Gupta AK, Chan W, Abou-Hamden A, Maddern GJ. Artificial intelligence for surgical services in Australia and New Zealand: opportunities, challenges and recommendations. Med J Aust 2024; 220:234-237. [PMID: 38321813 DOI: 10.5694/mja2.52225]
Affiliation(s)
- Joshua G Kovoor
- University of Adelaide, Adelaide, SA
- Ballarat Base Hospital, Ballarat, VIC
- WengOnn Chan
- University of Adelaide, Adelaide, SA
- Queen Elizabeth Hospital, Adelaide, SA
- Amal Abou-Hamden
- University of Adelaide, Adelaide, SA
- Royal Adelaide Hospital, Adelaide, SA
- Guy J Maddern
- University of Adelaide, Adelaide, SA
- Queen Elizabeth Hospital, Adelaide, SA
9
Joyce DW, Kormilitzin A, Hamer-Hunt J, McKee KR, Tomasev N. Defining acceptable data collection and reuse standards for queer artificial intelligence research in mental health: protocol for the online PARQAIR-MH Delphi study. BMJ Open 2024; 14:e079105. [PMID: 38490661 PMCID: PMC10946350 DOI: 10.1136/bmjopen-2023-079105]
Abstract
INTRODUCTION For artificial intelligence (AI) to help improve mental healthcare, the design of data-driven technologies needs to be fair, safe, and inclusive. Participatory design can play a critical role in empowering marginalised communities to take an active role in constructing research agendas and outputs. Given the unmet needs of the LGBTQI+ (Lesbian, Gay, Bisexual, Transgender, Queer and Intersex) community in mental healthcare, there is a pressing need for participatory research to include a range of diverse queer perspectives on issues of data collection and use (in routine clinical care as well as for research) as well as AI design. Here we propose a protocol for a Delphi consensus process for the development of PARticipatory Queer AI Research for Mental Health (PARQAIR-MH) practices, aimed at informing digital health practices and policy. METHODS AND ANALYSIS The development of PARQAIR-MH comprises four stages. In stage 1, a review of recent literature and fact-finding consultation with stakeholder organisations will be conducted to define the terms of reference for stage 2, the Delphi process. Our Delphi process consists of three rounds, where the first two rounds will iterate and identify items to be included in the final Delphi survey for consensus ratings. Stage 3 consists of consensus meetings to review and aggregate the Delphi survey responses, leading to stage 4 where we will produce a reusable toolkit to facilitate participatory development of future bespoke LGBTQI+-adapted data collection, harmonisation, and use for data-driven AI applications specifically in mental healthcare settings. ETHICS AND DISSEMINATION PARQAIR-MH aims to deliver a toolkit that will help to ensure that the specific needs of LGBTQI+ communities are accounted for in mental health applications of data-driven technologies. The study is expected to run from June 2024 through January 2025, with the final outputs delivered in mid-2025.
Participants in the Delphi process will be recruited by snowball and opportunistic sampling via professional networks and social media (but not by direct approach to healthcare service users, patients, specific clinical services, or via clinicians' caseloads). Participants will not be required to share personal narratives and experiences of healthcare or treatment for any condition. Before agreeing to participate, people will be given information about the issues considered to be in-scope for the Delphi (eg, developing best practices and methods for collecting and harmonising sensitive characteristics data; developing guidelines for data use/reuse) alongside specific risks of unintended harm from participating that can be reasonably anticipated. Outputs will be made available in open-access peer-reviewed publications, blogs, social media, and on a dedicated project website for future reuse.
Affiliation(s)
- Dan W Joyce
- Department of Primary Care and Mental Health and the Civic Health Information Laboratory, University of Liverpool, Liverpool, UK
10
Evans RP, Bryant LD, Russell G, Absolom K. Trust and acceptability of data-driven clinical recommendations in everyday practice: A scoping review. Int J Med Inform 2024; 183:105342. [PMID: 38266426 DOI: 10.1016/j.ijmedinf.2024.105342]
Abstract
BACKGROUND Increasing attention is being given to the analysis of large health datasets to derive new clinical decision support systems (CDSS). However, few data-driven CDSS are being adopted into clinical practice. Trust in these tools is believed to be fundamental for acceptance and uptake, but to date little attention has been given to defining or evaluating trust in clinical settings. OBJECTIVES A scoping review was conducted to explore how and where acceptability and trustworthiness of data-driven CDSS have been assessed from the health professional's perspective. METHODS Medline, Embase, PsycInfo, Web of Science, Scopus, ACM Digital, IEEE Xplore and Google Scholar were searched in March 2022 using terms expanded from: "data-driven" AND "clinical decision support" AND "acceptability". Included studies focused on healthcare practitioner-facing data-driven CDSS, relating directly to clinical care. They included trust or a proxy as an outcome, or in the discussion. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) was followed in the reporting of this review. RESULTS 3291 papers were screened, with 85 primary research studies eligible for inclusion. Studies covered a diverse range of clinical specialisms and intended contexts, but hypothetical systems (24) outnumbered those in clinical use (18). Twenty-five studies measured trust, via a wide variety of quantitative, qualitative and mixed methods. A further 24 discussed themes of trust without it being explicitly evaluated, and from these, themes of transparency, explainability, and supporting evidence were identified as factors influencing healthcare practitioner trust in data-driven CDSS. CONCLUSION There is a growing body of research on data-driven CDSS, but few studies have explored stakeholder perceptions in depth, with limited focused research on trustworthiness.
Further research on healthcare practitioner acceptance, including requirements for transparency and explainability, should inform clinical implementation.
Affiliation(s)
- Ruth P Evans
- University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK.
- Gregor Russell
- Bradford District Care Trust, Bradford, New Mill, Victoria Rd, BD18 3LD, UK.
- Kate Absolom
- University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK.
11
Bharadwaj UU, Chin CT, Majumdar S. Practical Applications of Artificial Intelligence in Spine Imaging: A Review. Radiol Clin North Am 2024; 62:355-370. [PMID: 38272627 DOI: 10.1016/j.rcl.2023.10.005]
Abstract
Artificial intelligence (AI), a transformative technology with unprecedented potential in medical imaging, can be applied to various spinal pathologies. AI-based approaches may improve imaging efficiency, diagnostic accuracy, and interpretation, which is essential for positive patient outcomes. This review explores AI algorithms, techniques, and applications in spine imaging, highlighting diagnostic impact and challenges with future directions for integrating AI into spine imaging workflow.
Affiliation(s)
- Upasana Upadhyay Bharadwaj
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 1700 4th Street, Byers Hall, Suite 203, Room 203D, San Francisco, CA 94158, USA
- Cynthia T Chin
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Avenue, Box 0628, San Francisco, CA 94143, USA.
| | - Sharmila Majumdar
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 1700 4th Street, Byers Hall, Suite 203, Room 203D, San Francisco, CA 94158, USA
| |
Collapse
|
12
|
Stevenson E, Walsh C, Hibberd L. Can artificial intelligence replace biochemists? A study comparing interpretation of thyroid function test results by ChatGPT and Google Bard to practising biochemists. Ann Clin Biochem 2024; 61:143-149. [PMID: 37699796 DOI: 10.1177/00045632231203473]
Abstract
BACKGROUND Public awareness of artificial intelligence (AI) is increasing, and this novel technology is being used for a range of everyday tasks as well as more specialist clinical applications. Against a background of increasing waits for GP appointments alongside patient access to laboratory test results through the NHS app, this study aimed to assess the accuracy and safety of two AI tools, ChatGPT and Google Bard, in interpreting thyroid function test results when queries were posed as if from laboratory scientists or from patients. METHODS Fifteen fictional cases were presented to a team of clinicians and clinical scientists to produce a consensus opinion. The cases were then presented to ChatGPT and Google Bard as though from healthcare providers and from patients. The responses were categorized as correct, partially correct or incorrect compared with the consensus opinion, and the advice was assessed for safety to patients. RESULTS Of the 15 cases presented, ChatGPT and Google Bard correctly interpreted only 33.3% and 20.0% of cases, respectively. When queries were posed as a patient, 66.7% of ChatGPT responses were safe compared with 60.0% of Google Bard responses. Both AI tools were able to identify primary hypothyroidism and hyperthyroidism but failed to identify subclinical presentations, non-thyroidal illness or secondary hypothyroidism. CONCLUSIONS This study has demonstrated that AI tools do not currently have the capacity to generate consistently correct interpretation and safe advice for patients, and they should not be used as an alternative to a consultation with a qualified medical professional. Available AI in its current form cannot replace human clinical knowledge in this scenario.
Affiliation(s)
- Emma Stevenson
- Clinical Biochemistry, Gloucestershire Hospitals NHS Foundation Trust, Cheltenham, UK
- Chelsey Walsh
- Clinical Biochemistry, Gloucestershire Hospitals NHS Foundation Trust, Cheltenham, UK
- Luke Hibberd
- Clinical Biochemistry, Gloucestershire Hospitals NHS Foundation Trust, Cheltenham, UK

13
Wang Q, Lin Y, Ding C, Guan W, Zhang X, Jia J, Zhou W, Liu Z, Bai G. Multi-modality radiomics model predicts axillary lymph node metastasis of breast cancer using MRI and mammography. Eur Radiol 2024. [PMID: 38337068 DOI: 10.1007/s00330-024-10638-2]
Abstract
OBJECTIVES We aimed to develop a multi-modality model to predict axillary lymph node (ALN) metastasis in breast cancer by combining clinical predictors with radiomic features from magnetic resonance imaging (MRI) and mammography (MMG). Such a model could potentially spare patients without ALN metastasis unnecessary axillary surgery, thereby minimizing surgery-related complications. METHODS We retrospectively enrolled 485 breast cancer patients from two hospitals and extracted radiomics features from tumor and lymph node regions on MRI and MMG images. After feature selection, three random forest models were built from the retained features. Significant clinical factors were then integrated with these radiomics models to construct a multi-modality model. The multi-modality model was compared with radiologists' diagnoses on axillary ultrasound and MRI, and was also used to assist radiologists in making a secondary diagnosis on MRI. RESULTS The multi-modality model showed superior performance, with AUCs of 0.964 in the training cohort, 0.916 in the internal validation cohort, and 0.892 in the external validation cohort. It surpassed the single-modality models and the radiologists' ALN diagnoses on MRI and axillary ultrasound in all validation cohorts. Additionally, the multi-modality model improved radiologists' MRI-based ALN diagnosis, increasing average accuracy from 70.70% to 78.16% for radiologist A and from 75.42% to 81.38% for radiologist B. CONCLUSION The multi-modality model can accurately predict ALN metastasis of breast cancer. Moreover, the artificial intelligence (AI) model also helped the radiologists improve their diagnostic performance on MRI. CLINICAL RELEVANCE STATEMENT The multi-modality model based on both MRI and mammography images allows preoperative prediction of axillary lymph node metastasis in breast cancer patients. With the assistance of the model, the diagnostic efficacy of radiologists can be further improved.
KEY POINTS
• We developed a novel multi-modality model that combines MRI and mammography radiomics with clinical factors to accurately predict axillary lymph node (ALN) metastasis, which has not been previously reported.
• Our multi-modality model outperformed both the radiologists' ALN diagnosis based on MRI and axillary ultrasound and the single-modality radiomics models based on MRI or mammography.
• The multi-modality model can serve as a potential decision support tool to improve radiologists' ALN diagnosis on MRI.
Affiliation(s)
- Qian Wang
- Department of Radiology, The Affiliated Huaian Clinical College of Xuzhou Medical University, Huaian, Jiangsu, China
- Yingyu Lin
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, 58th, The Second Zhongshan Road, Guangzhou, Guangdong, China
- Cong Ding
- Department of Radiology, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Wenting Guan
- Department of Radiology, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Xiaoling Zhang
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, 58th, The Second Zhongshan Road, Guangzhou, Guangdong, China
- Jianye Jia
- Department of Radiology, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Wei Zhou
- Department of Radiology, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Ziyan Liu
- Department of Radiology, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Genji Bai
- Department of Radiology, The Affiliated Huaian Clinical College of Xuzhou Medical University, Huaian, Jiangsu, China.
- Department of Radiology, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China.

14
Giddings R, Joseph A, Callender T, Janes SM, van der Schaar M, Sheringham J, Navani N. Factors influencing clinician and patient interaction with machine learning-based risk prediction models: a systematic review. Lancet Digit Health 2024; 6:e131-e144. [PMID: 38278615 DOI: 10.1016/s2589-7500(23)00241-8]
Abstract
Machine learning (ML)-based risk prediction models hold the potential to support the health-care setting in several ways; however, use of such models is scarce. We aimed to review health-care professional (HCP) and patient perceptions of ML risk prediction models in the published literature, to inform future risk prediction model development. Following database and citation searches, we identified 41 articles suitable for inclusion. Article quality varied, with qualitative studies performing strongest. Overall, perceptions of ML risk prediction models were positive: HCPs and patients considered that the models have the potential to add benefit in the health-care setting. However, reservations remain, for example concerns regarding data quality for model development and fears of unintended consequences following ML model use. We identified that public views regarding these models might be more negative than those of HCPs, and that concerns (eg, extra demands on workload) were not always borne out in practice. Conclusions are tempered by the low number of patient and public studies, the absence of participant ethnic diversity, and variation in article quality. We identified gaps in knowledge (particularly views from under-represented groups) and in optimum methods for model explanation and alerts, which require future research.
Affiliation(s)
- Rebecca Giddings
- Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK.
- Anabel Joseph
- Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Thomas Callender
- Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Sam M Janes
- Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Mihaela van der Schaar
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK; The Alan Turing Institute, London, UK
- Jessica Sheringham
- Department of Applied Health Research, University College London, London, UK
- Neal Navani
- Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK

15
Mackenzie SC, Sainsbury CAR, Wake DJ. Diabetes and artificial intelligence beyond the closed loop: a review of the landscape, promise and challenges. Diabetologia 2024; 67:223-235. [PMID: 37979006 PMCID: PMC10789841 DOI: 10.1007/s00125-023-06038-8]
Abstract
The discourse amongst diabetes specialists and academics regarding technology and artificial intelligence (AI) typically centres around the 10% of people with diabetes who have type 1 diabetes, focusing on glucose sensors, insulin pumps and, increasingly, closed-loop systems. This focus is reflected in conference topics, strategy documents, technology appraisals and funding streams. What is often overlooked is the wider application of data and AI, as demonstrated through published literature and emerging marketplace products, that offers promising avenues for enhanced clinical care, health-service efficiency and cost-effectiveness. This review provides an overview of AI techniques and explores the use and potential of AI and data-driven systems in a broad context, covering all diabetes types, encompassing: (1) patient education and self-management; (2) clinical decision support systems and predictive analytics, including diagnostic support, treatment and screening advice, and complication prediction; and (3) the use of multimodal data, such as imaging or genetic data. The review provides a perspective on how data- and AI-driven systems could transform diabetes care in the coming years and how they could be integrated into daily clinical practice. We discuss evidence for benefits and potential harms, and consider existing barriers to scalable adoption, including challenges related to data availability and exchange, health inequality, clinician hesitancy and regulation. Stakeholders, including clinicians, academics, commissioners, policymakers and those with lived experience, must proactively collaborate to realise the potential benefits that AI-supported diabetes care could bring, whilst mitigating risk and navigating the challenges along the way.
Affiliation(s)
- Scott C Mackenzie
- Population Health and Genomics, School of Medicine, University of Dundee, Dundee, UK
- Chris A R Sainsbury
- Institute for Applied Health Research, University of Birmingham, Birmingham, UK
- Deborah J Wake
- Usher Institute, The University of Edinburgh, Edinburgh, UK.
- Edinburgh Centre for Endocrinology and Diabetes, NHS Lothian, Edinburgh, UK.

16
Gunathilaka NJ, Gooden TE, Cooper J, Flanagan S, Marshall T, Haroon S, D'Elia A, Crowe F, Jackson T, Nirantharakumar K, Greenfield S. Perceptions on artificial intelligence-based decision-making for coexisting multiple long-term health conditions: protocol for a qualitative study with patients and healthcare professionals. BMJ Open 2024; 14:e077156. [PMID: 38307535 PMCID: PMC10836375 DOI: 10.1136/bmjopen-2023-077156]
Abstract
INTRODUCTION The coexistence of multiple health conditions is common among older people, a population that is increasing globally. The potential for polypharmacy, adverse events, drug interactions and the development of additional health conditions complicates prescribing decisions for these patients. Artificial intelligence (AI)-generated decision-making tools may help guide clinical decisions in the context of multiple health conditions by determining which of the multiple medication options is best. This study aims to explore the perceptions of healthcare professionals (HCPs) and patients on the use of AI in the management of multiple health conditions. METHODS AND ANALYSIS A qualitative study will be conducted using semistructured interviews. Adults (≥18 years) with multiple health conditions living in the West Midlands of England, and HCPs with experience in caring for patients with multiple health conditions, will be eligible and purposively sampled. Patients will be identified from Clinical Practice Research Datalink (CPRD) Aurum; CPRD will contact general practitioners, who will in turn send a letter to patients inviting them to take part. Eligible HCPs will be recruited through British HCP bodies and known contacts. Up to 30 patients and 30 HCPs will be recruited, until data saturation is achieved. Interviews will be in person or virtual, audio recorded and transcribed verbatim. The topic guide is designed to explore participants' attitudes towards AI-informed clinical decision-making to augment clinician-directed decision-making, the perceived advantages and disadvantages of both methods, and attitudes towards risk management. Case vignettes comprising a common decision pathway for patients with multiple health conditions will be presented during each interview to invite participants' opinions on how their experiences compare. Data will be analysed thematically using the Framework Method. ETHICS AND DISSEMINATION This study has been approved by the National Health Service Research Ethics Committee (Reference: 22/SC/0210). Written informed consent or verbal consent will be obtained prior to each interview. The findings from this study will be disseminated through peer-reviewed publications, conferences and lay summaries.
Affiliation(s)
- Tiffany E Gooden
- Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Jennifer Cooper
- Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Sarah Flanagan
- Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Tom Marshall
- Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Shamil Haroon
- Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Alexander D'Elia
- Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Francesca Crowe
- Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Thomas Jackson
- Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Sheila Greenfield
- Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK

17
Soh ZD, Tan M, Nongpiur ME, Xu BY, Friedman D, Zhang X, Leung C, Liu Y, Koh V, Aung T, Cheng CY. Assessment of angle closure disease in the age of artificial intelligence: A review. Prog Retin Eye Res 2024; 98:101227. [PMID: 37926242 DOI: 10.1016/j.preteyeres.2023.101227]
Abstract
Primary angle closure glaucoma is a visually debilitating disease that is under-detected worldwide. Many of the challenges in managing primary angle closure disease (PACD) are related to the lack of convenient and precise tools for clinic-based disease assessment and monitoring. Artificial intelligence (AI)-assisted tools to detect and assess PACD have proliferated in recent years, with encouraging results. Machine learning (ML) algorithms that utilize clinical data have been developed to categorize angle closure eyes by disease mechanism. Other ML algorithms that utilize image data have demonstrated good performance in detecting angle closure. Nonetheless, deep learning (DL) algorithms trained directly on image data generally outperformed traditional ML algorithms in detecting PACD, were able to accurately differentiate angle status (open, narrow, closed), and automated the measurement of quantitative parameters. However, more work is required to expand the capabilities of these AI algorithms and to support deployment into real-world practice settings. This includes the need for real-world evaluation, establishing the use case for different algorithms, and evaluating the feasibility of deployment while considering other clinical, economic, social, and policy-related factors.
Affiliation(s)
- Zhi Da Soh
- Singapore Eye Research Institute, Singapore National Eye Centre, 20 College Road, 169856, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, 21 Lower Kent Ridge Road, 119077, Singapore.
- Mingrui Tan
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*Star), 1 Fusionopolis Way, 138632, Singapore.
- Monisha Esther Nongpiur
- Singapore Eye Research Institute, Singapore National Eye Centre, 20 College Road, 169856, Singapore; Ophthalmology & Visual Sciences Academic Clinical Programme, Academic Medicine, Duke-NUS Medical School, 8 College Road, 169857, Singapore.
- Benjamin Yixing Xu
- Roski Eye Institute, Keck School of Medicine, University of Southern California, 1450 San Pablo St #4400, Los Angeles, CA, 90033, USA.
- David Friedman
- Department of Ophthalmology, Harvard Medical School, 25 Shattuck Street, Boston, MA, 02115, USA; Massachusetts Eye and Ear, Mass General Brigham, 243 Charles Street, Boston, MA, 02114, USA.
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat Sen University, No. 54 Xianlie South Road, Yuexiu District, Guangzhou, China.
- Christopher Leung
- Department of Ophthalmology, School of Clinical Medicine, The University of Hong Kong, Cyberport 4, 100 Cyberport Road, Hong Kong; Department of Ophthalmology, Queen Mary Hospital, 102 Pok Fu Lam Road, Hong Kong.
- Yong Liu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*Star), 1 Fusionopolis Way, 138632, Singapore.
- Victor Koh
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, 21 Lower Kent Ridge Road, 119077, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, 1E Kent Ridge Road, NUHS Tower Block, Level 7, 119228, Singapore.
- Tin Aung
- Singapore Eye Research Institute, Singapore National Eye Centre, 20 College Road, 169856, Singapore; Ophthalmology & Visual Sciences Academic Clinical Programme, Academic Medicine, Duke-NUS Medical School, 8 College Road, 169857, Singapore.
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, 20 College Road, 169856, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, 21 Lower Kent Ridge Road, 119077, Singapore; Ophthalmology & Visual Sciences Academic Clinical Programme, Academic Medicine, Duke-NUS Medical School, 8 College Road, 169857, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, 1E Kent Ridge Road, NUHS Tower Block, Level 7, 119228, Singapore.

18
Stewart J, Lu J, Goudie A, Arendts G, Meka SA, Freeman S, Walker K, Sprivulis P, Sanfilippo F, Bennamoun M, Dwivedi G. Applications of natural language processing at emergency department triage: A narrative review. PLoS One 2023; 18:e0279953. [PMID: 38096321 PMCID: PMC10721204 DOI: 10.1371/journal.pone.0279953]
Abstract
INTRODUCTION Natural language processing (NLP) uses various computational methods to analyse and understand human language, and has been applied to data acquired at Emergency Department (ED) triage to predict various outcomes. The objectives of this scoping review are to evaluate how NLP has been applied to data acquired at ED triage, to assess whether NLP-based models outperform humans or current risk stratification techniques when predicting outcomes, and to assess whether incorporating free text improves the predictive performance of models compared with those that use only structured data. METHODS All English-language peer-reviewed research that applied an NLP technique to free text obtained at ED triage was eligible for inclusion. We excluded studies focusing solely on disease surveillance and studies that used information obtained after triage. We searched the electronic databases MEDLINE, Embase, Cochrane Database of Systematic Reviews, Web of Science, and Scopus for medical subject headings and text keywords related to NLP and triage. Databases were last searched on 01/01/2022. Risk of bias in studies was assessed using the Prediction model Risk of Bias Assessment Tool (PROBAST). Due to the high level of heterogeneity between studies and the high risk of bias, a meta-analysis was not conducted; instead, a narrative synthesis is provided. RESULTS In total, 3730 studies were screened and 20 studies were included. Population size varied greatly between studies, ranging from 598 triage notes to 1.8 million patients. The most common outcomes assessed were prediction of triage score, prediction of admission, and prediction of critical illness. NLP models achieved high accuracy in predicting need for admission, triage score, and critical illness, and in mapping free-text chief complaints to structured fields. Incorporating both structured data and free-text data improved results compared with models that used only structured data. However, the majority of studies (80%) were assessed to have a high risk of bias, and only one study reported the deployment of an NLP model into clinical practice. CONCLUSION Unstructured free-text triage notes have been used by NLP models to predict clinically relevant outcomes. However, the majority of studies have a high risk of bias, most research is retrospective, and there are few examples of implementation into clinical practice. Future work is needed to prospectively assess whether applying NLP to data acquired at ED triage improves ED outcomes compared with usual clinical practice.
Affiliation(s)
- Jonathon Stewart
- School of Medicine, The University of Western Australia, Crawley, Western Australia, Australia
- Harry Perkins Institute of Medical Research, Murdoch, Western Australia, Australia
- Department of Emergency Medicine, Fiona Stanley Hospital, Murdoch, Western Australia, Australia
- Juan Lu
- School of Medicine, The University of Western Australia, Crawley, Western Australia, Australia
- Harry Perkins Institute of Medical Research, Murdoch, Western Australia, Australia
- Department of Computer Science and Software Engineering, The University of Western Australia, Crawley, Western Australia, Australia
- Adrian Goudie
- Department of Emergency Medicine, Fiona Stanley Hospital, Murdoch, Western Australia, Australia
- Glenn Arendts
- School of Medicine, The University of Western Australia, Crawley, Western Australia, Australia
- Department of Emergency Medicine, Fiona Stanley Hospital, Murdoch, Western Australia, Australia
- Shiv Akarsh Meka
- HIVE & Data and Digital Innovation, Royal Perth Hospital, Perth, Western Australia, Australia
- Sam Freeman
- Department of Emergency Medicine, St Vincent’s Hospital Melbourne, Melbourne, Victoria, Australia
- SensiLab, Monash University, Melbourne, Victoria, Australia
- Katie Walker
- School of Clinical Sciences at Monash Health, Monash University, Melbourne, Victoria, Australia
- Peter Sprivulis
- Western Australia Department of Health, East Perth, Western Australia, Australia
- Frank Sanfilippo
- School of Population and Global Health, University of Western Australia, Crawley, Western Australia, Australia
- Mohammed Bennamoun
- Department of Computer Science and Software Engineering, The University of Western Australia, Crawley, Western Australia, Australia
- Girish Dwivedi
- School of Medicine, The University of Western Australia, Crawley, Western Australia, Australia
- Harry Perkins Institute of Medical Research, Murdoch, Western Australia, Australia
- Department of Cardiology, Fiona Stanley Hospital, Murdoch, Western Australia, Australia

19
Meade SM, Salas-Vega S, Nagy MR, Sundar SJ, Steinmetz MP, Benzel EC, Habboub G. A Pilot Remote Curriculum to Enhance Resident and Medical Student Understanding of Machine Learning in Healthcare. World Neurosurg 2023; 180:e142-e148. [PMID: 37696433 DOI: 10.1016/j.wneu.2023.09.012]
Abstract
BACKGROUND Despite the expanding role of machine learning (ML) in health care and patient expectations for clinicians to understand ML-based tools, few for-credit curricula exist specifically for neurosurgical trainees to learn basic principles and implications of ML for medical research and clinical practice. We implemented a novel, remotely delivered curriculum designed to develop literacy in ML for neurosurgical trainees. METHODS A 4-week pilot medical elective was designed specifically for trainees to build literacy in basic ML concepts. Qualitative feedback from interested and enrolled students was collected to assess students' and trainees' reactions, learning, and future application of course content. RESULTS Despite 15 interested learners, only 3 medical students and 1 neurosurgical resident completed the course. Enrollment included students and trainees from 3 different institutions. All learners who completed the course found the lectures relevant to their future practice as clinicians and researchers and reported improved confidence in applying and understanding published literature applying ML techniques in health care. Barriers to ample enrollment and retention (e.g., balancing clinical responsibilities) were identified. CONCLUSIONS This pilot elective demonstrated the interest, value, and feasibility of a remote elective to establish ML literacy; however, feedback to increase accessibility and flexibility of the course encouraged our team to implement changes. Future elective iterations will have a semiannual, 2-week format, splitting lectures more clearly between theory (the method and its value) and application (coding instructions) and will make lectures open-source prerequisites to allow tailoring of student learning to their planned application of these methods in their practice and research.
Affiliation(s)
- Seth M Meade
- Department of Neurosurgery, Cleveland Clinic Lerner College of Medicine, Cleveland, Ohio, USA; Case Western School of Medicine, Case Western Reserve University, Cleveland, Ohio, USA; Department of Neurosurgery, Neurologic Institute, Center for Spine Health, Cleveland Clinic Foundation, Cleveland, Ohio, USA.
- Sebastian Salas-Vega
- Case Western School of Medicine, Case Western Reserve University, Cleveland, Ohio, USA; Department of Neurosurgery, Neurologic Institute, Center for Spine Health, Cleveland Clinic Foundation, Cleveland, Ohio, USA; Department of Neurosurgery, Inova Health System, Falls Church, Virginia, USA
- Matthew R Nagy
- Department of Neurosurgery, Cleveland Clinic Lerner College of Medicine, Cleveland, Ohio, USA; Case Western School of Medicine, Case Western Reserve University, Cleveland, Ohio, USA
- Swetha J Sundar
- Department of Neurosurgery, Cleveland Clinic Lerner College of Medicine, Cleveland, Ohio, USA; Department of Neurosurgery, Neurologic Institute, Center for Spine Health, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Michael P Steinmetz
- Department of Neurosurgery, Cleveland Clinic Lerner College of Medicine, Cleveland, Ohio, USA; Department of Neurosurgery, Neurologic Institute, Center for Spine Health, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Edward C Benzel
- Department of Neurosurgery, Cleveland Clinic Lerner College of Medicine, Cleveland, Ohio, USA; Department of Neurosurgery, Neurologic Institute, Center for Spine Health, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Ghaith Habboub
- Department of Neurosurgery, Cleveland Clinic Lerner College of Medicine, Cleveland, Ohio, USA; Department of Neurosurgery, Neurologic Institute, Center for Spine Health, Cleveland Clinic Foundation, Cleveland, Ohio, USA

20
Vo V, Chen G, Aquino YSJ, Carter SM, Do QN, Woode ME. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc Sci Med 2023; 338:116357. [PMID: 37949020 DOI: 10.1016/j.socscimed.2023.116357]
Abstract
INTRODUCTION Despite the proliferation of Artificial Intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically reviewed the literature on attitudes towards the use of AI in healthcare from the perspectives of patients, the general public and health professionals, to understand these issues from multiple viewpoints. METHODOLOGY A search for original research articles using qualitative, quantitative, and mixed methods published between 1 Jan 2001 and 24 Aug 2021 was conducted on six bibliographic databases. Data were extracted and classified into themes representing views on: (i) knowledge and familiarity of AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) the human-AI relationship. RESULTS The final search identified 7,490 different records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public and health professionals had a generally positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and greater burnout from the need to acquire AI-related skills. While patients had mixed opinions on whether healthcare workers would suffer job losses due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinician involvement in the development of AI was emphasised. To help successfully implement AI in healthcare, most participants envisioned that investment in training and education campaigns was necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients exhibited a willingness to share anonymised data for AI development, concerns remained about sharing data with insurance or technology companies. One key question under this theme was who should be held accountable in the case of adverse events arising from the use of AI. CONCLUSIONS While attitudes and preferences toward AI use in healthcare remain broadly positive, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues and to look at the translation of legislation and guidelines into practice, to ensure fairness, accountability, transparency, and ethics in AI.
Collapse
Affiliation(s)
- Vinh Vo
- Centre for Health Economics, Monash University, Australia.
| | - Gang Chen
- Centre for Health Economics, Monash University, Australia
| | - Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
| | - Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
| | - Quynh Nga Do
- Department of Economics, Monash University, Australia
| | - Maame Esi Woode
- Centre for Health Economics, Monash University, Australia; Monash Data Futures Research Institute, Australia
| |
Collapse
|
21
|
Willis K, Chaudhry UAR, Chandrasekaran L, Wahlich C, Olvera-Barrios A, Chambers R, Bolter L, Anderson J, Barman SA, Fajtl J, Welikala R, Egan C, Tufail A, Owen CG, Rudnicka A. What are the perceptions and concerns of people living with diabetes and National Health Service staff around the potential implementation of AI-assisted screening for diabetic eye disease? Development and validation of a survey for use in a secondary care screening setting. BMJ Open 2023; 13:e075558. [PMID: 37968006 PMCID: PMC10660949 DOI: 10.1136/bmjopen-2023-075558] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Accepted: 09/05/2023] [Indexed: 11/17/2023] Open
Abstract
INTRODUCTION The English National Health Service (NHS) Diabetic Eye Screening Programme (DESP) performs around 2.3 million eye screening appointments annually, generating approximately 13 million retinal images that are graded by humans for the presence or severity of diabetic retinopathy. Previous research has shown that automated retinal image analysis systems, including artificial intelligence (AI), can identify images with no disease from those with diabetic retinopathy as safely and effectively as human graders, and could significantly reduce the workload for human graders. Some algorithms can also determine the level of severity of the retinopathy with similar performance to humans. There is a need to examine perceptions and concerns surrounding AI-assisted eye screening among people living with diabetes and NHS staff, if AI were to be introduced into the DESP, to identify factors that may influence acceptance of this technology. METHODS AND ANALYSIS People living with diabetes and staff from the North East London (NEL) NHS DESP were invited to participate in two respective focus groups to codesign two online surveys exploring their perceptions and concerns around the potential introduction of AI-assisted screening. Focus group participants were representative of the local population in terms of age and ethnicity. Participants' feedback was taken into consideration to update the surveys, which were circulated for further feedback. Surveys will be piloted at the NEL DESP, followed by semistructured interviews to assess accessibility and usability and to validate the surveys. Validated surveys will be distributed by other NHS DESP sites, and also via patient groups on social media, relevant charities and the British Association of Retinal Screeners. Post-survey evaluative interviews will be undertaken among those who consent to participate in further research. 
ETHICS AND DISSEMINATION Ethical approval has been obtained by the NHS Research Ethics Committee (IRAS ID: 316631). Survey results will be shared and discussed with focus groups to facilitate preparation of findings for publication and to inform codesign of outreach activities to address concerns and perceptions identified.
Collapse
Affiliation(s)
- Kathryn Willis
- Population Health Research Institute, St George's University of London, London, UK
| | - Umar A R Chaudhry
- Population Health Research Institute, St George's University of London, London, UK
| | - Charlotte Wahlich
- Population Health Research Institute, St George's University of London, London, UK
| | - Abraham Olvera-Barrios
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Ryan Chambers
- Diabetes and Endocrinology, Homerton Healthcare NHS Foundation Trust, London, UK
| | - Louis Bolter
- Diabetes and Endocrinology, Homerton Healthcare NHS Foundation Trust, London, UK
| | - John Anderson
- Diabetes and Endocrinology, Homerton Healthcare NHS Foundation Trust, London, UK
| | - S A Barman
- School of Computer Science and Mathematics, Kingston University London, London, UK
| | - Jiri Fajtl
- School of Computer Science and Mathematics, Kingston University London, London, UK
| | - Roshan Welikala
- School of Computer Science and Mathematics, Kingston University London, London, UK
| | - Catherine Egan
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Adnan Tufail
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Christopher G Owen
- Population Health Research Institute, St George's University of London, London, UK
| | - Alicja Rudnicka
- Population Health Research Institute, St George's University of London, London, UK
| |
Collapse
|
22
|
Rojahn J, Palu A, Skiena S, Jones JJ. American public opinion on artificial intelligence in healthcare. PLoS One 2023; 18:e0294028. [PMID: 37943752 PMCID: PMC10635466 DOI: 10.1371/journal.pone.0294028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2023] [Accepted: 10/15/2023] [Indexed: 11/12/2023] Open
Abstract
Billions of dollars are being invested into developing medical artificial intelligence (AI) systems, and yet public opinion of AI in the medical field seems to be mixed. Although high expectations for the future of medical AI do exist in the American public, anxiety and uncertainty about what it can do and how it works are widespread. Continuing evaluation of public opinion on AI in healthcare is necessary to ensure alignment between patient attitudes and the technologies adopted. We conducted a representative-sample survey (total N = 203) to measure the trust of the American public towards medical AI. Primarily, we contrasted preferences for AI and human professionals as medical decision-makers. Additionally, we measured expectations for the impact and use of medical AI in the future. We present four noteworthy results: (1) The general public strongly prefers that human medical professionals make medical decisions, while at the same time believing they are more likely than AI to make culturally biased decisions. (2) The general public is more comfortable with a human reading their medical records than an AI, both now and "100 years from now." (3) The general public is nearly evenly split between those who would trust their own doctor to use AI and those who would not. (4) Respondents expect AI will improve medical treatment, but more so in the distant future than immediately.
Collapse
Affiliation(s)
- Jessica Rojahn
- Department of Sociology, Stony Brook University, Stony Brook, New York, United States of America
| | - Andrea Palu
- Department of Sociology, Stony Brook University, Stony Brook, New York, United States of America
| | - Steven Skiena
- Department of Computer Science, Stony Brook University, Stony Brook, New York, United States of America
| | - Jason J. Jones
- Department of Sociology, Stony Brook University, Stony Brook, New York, United States of America
- Institute for Advanced Computational Science, Stony Brook University, Stony Brook, New York, United States of America
| |
Collapse
|
23
|
Wang B, Asan O, Mansouri M. Perspectives of Patients With Chronic Diseases on Future Acceptance of AI-Based Home Care Systems: Cross-Sectional Web-Based Survey Study. JMIR Hum Factors 2023; 10:e49788. [PMID: 37930780 PMCID: PMC10660233 DOI: 10.2196/49788] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Revised: 08/18/2023] [Accepted: 10/05/2023] [Indexed: 11/07/2023] Open
Abstract
BACKGROUND Artificial intelligence (AI)-based home care systems and devices are being gradually integrated into health care delivery to benefit patients with chronic diseases. However, existing research mainly focuses on the technical and clinical aspects of AI application, with insufficient investigation of patients' motivation and intention to adopt such systems. OBJECTIVE This study aimed to examine the factors that affect the motivation of patients with chronic diseases to adopt AI-based home care systems and to provide empirical evidence for the proposed research hypotheses. METHODS We conducted a cross-sectional web-based survey with 222 patients with chronic diseases based on a hypothetical scenario. RESULTS The results indicated that patients have an overall positive perception of AI-based home care systems. Their attitudes toward the technology, perceived usefulness, and comfortability were found to be significant factors encouraging adoption, with a clear understanding of accountability being a particularly influential factor in shaping patients' attitudes and their motivation to use these systems. However, privacy concerns persist as an indirect factor, affecting perceived usefulness and comfortability and hence influencing patients' attitudes. CONCLUSIONS This study is one of the first to examine the motivation of patients with chronic diseases to adopt AI-based home care systems, offering practical insights for policy makers, care or technology providers, and patients. This understanding can facilitate effective policy formulation, product design, and informed patient decision-making, potentially improving the overall health status of patients with chronic diseases.
Collapse
Affiliation(s)
- Bijun Wang
- Department of Business Analytics and Data Science, Florida Polytechnic University, Lakeland, FL, United States
| | - Onur Asan
- School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, United States
| | - Mo Mansouri
- School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, United States
| |
Collapse
|
24
|
Hasan SU, Siddiqui MAR. Diagnostic accuracy of smartphone-based artificial intelligence systems for detecting diabetic retinopathy: A systematic review and meta-analysis. Diabetes Res Clin Pract 2023; 205:110943. [PMID: 37805002 DOI: 10.1016/j.diabres.2023.110943] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Revised: 07/28/2023] [Accepted: 10/05/2023] [Indexed: 10/09/2023]
Abstract
AIMS Diabetic retinopathy (DR) is a major cause of blindness globally; early detection is critical to prevent vision loss. Traditional screening methods that rely on human experts are, however, costly and time-consuming. The purpose of this systematic review is to assess the diagnostic accuracy of smartphone-based artificial intelligence (AI) systems for DR detection. METHODS A literature review was conducted on MEDLINE, Embase, Scopus, CINAHL Plus, and Cochrane from inception to December 2022. We included diagnostic test accuracy studies evaluating the use of smartphone-based AI algorithms for DR screening in patients with diabetes, with an expert human grader as the reference standard. A random-effects model was used to pool sensitivity and specificity. Any DR (ADR) and referable DR (RDR) were analyzed separately. RESULTS Out of 968 identified articles, six diagnostic test accuracy studies met our inclusion criteria, comprising 3,931 patients. Four of these studies used the Medios AI algorithm. The pooled sensitivity and specificity were 88% and 91.5%, respectively, for the diagnosis of ADR, and 98.2% and 81.2%, respectively, for the diagnosis of RDR. The overall risk of bias across the studies was low. CONCLUSIONS Smartphone-based AI algorithms show high diagnostic accuracy for detecting DR. However, more high-quality comparative studies are needed to evaluate their effectiveness in real-world clinical settings.
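The random-effects pooling of sensitivity and specificity described in this abstract is commonly performed on the logit scale with a DerSimonian-Laird estimate of between-study variance. The sketch below is only an illustration of that general technique, not the authors' actual analysis; the per-study proportions and sample sizes are made up.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pool_random_effects(props, ns):
    """DerSimonian-Laird random-effects pooling of proportions on the logit scale."""
    ys = [logit(p) for p in props]
    # Within-study variance of a logit-transformed proportion: 1 / (n * p * (1 - p))
    vs = [1 / (n * p * (1 - p)) for p, n in zip(props, ns)]
    ws = [1 / v for v in vs]                       # inverse-variance (fixed-effect) weights
    y_fixed = sum(w * y for w, y in zip(ws, ys)) / sum(ws)
    # DerSimonian-Laird estimate of between-study variance tau^2
    q = sum(w * (y - y_fixed) ** 2 for w, y in zip(ws, ys))
    c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c)
    # Random-effects weights incorporate tau^2
    ws_re = [1 / (v + tau2) for v in vs]
    y_re = sum(w * y for w, y in zip(ws_re, ys)) / sum(ws_re)
    return inv_logit(y_re)

# Hypothetical per-study sensitivities and sample sizes (illustrative only)
sens = [0.90, 0.85, 0.92, 0.87]
ns = [400, 250, 600, 300]
print(round(pool_random_effects(sens, ns), 3))
```

Pooling on the logit scale keeps the estimate inside (0, 1) and stabilises the variance for proportions near the boundaries; the pooled value always lies between the smallest and largest study estimates.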
Collapse
Affiliation(s)
- S Umar Hasan
- Department of Ophthalmology and Visual Sciences, Aga Khan University Hospital, National Stadium Road, Karachi, Pakistan
| | - M A Rehman Siddiqui
- Department of Ophthalmology and Visual Sciences, Aga Khan University Hospital, National Stadium Road, Karachi, Pakistan.
| |
Collapse
|
25
|
Ajuwon BI, Awotundun ON, Richardson A, Roper K, Sheel M, Rahman N, Salako A, Lidbury BA. Machine learning prediction models for clinical management of blood-borne viral infections: a systematic review of current applications and future impact. Int J Med Inform 2023; 179:105244. [PMID: 37820561 DOI: 10.1016/j.ijmedinf.2023.105244] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Revised: 09/08/2023] [Accepted: 10/03/2023] [Indexed: 10/13/2023]
Abstract
BACKGROUND Machine learning (ML) prediction models to support the clinical management of blood-borne viral infections are becoming increasingly abundant in the medical literature, with a number of competing models being developed for the same outcome or target population. However, evidence on the quality of these ML prediction models is limited. OBJECTIVE This study aimed to evaluate the development and quality of reporting of ML prediction models that could facilitate timely clinical management of blood-borne viral infections. METHODS We conducted a narrative evidence synthesis following the synthesis without meta-analysis guidelines. We searched PubMed and the Cochrane Central Register of Controlled Trials for all studies applying ML models to predict clinical outcomes associated with hepatitis B virus (HBV), human immunodeficiency virus (HIV), or hepatitis C virus (HCV). RESULTS We found 33 unique ML prediction models aiming to support clinical decision making. Overall, 12 (36.4%) focused on HBV, 10 (30.3%) on HCV, 10 (30.3%) on HIV and two (6.1%) on co-infection. Among these, six (18.2%) addressed the diagnosis of infection, 16 (48.5%) the prognosis of infection, eight (24.2%) the prediction of treatment response, two (6.1%) progression through a cascade of care, and one (3.0%) the choice of antiretroviral therapy (ART). Nineteen prediction models (57.6%) were developed using data from high-income countries. Evaluation of prediction models was limited to measures of performance. Detailed information on software code accessibility was often missing. Independent validation on new datasets and/or in other institutions was rarely done. CONCLUSION Promising approaches for ML prediction models in blood-borne viral infections were identified, but the lack of robust validation, limited interpretability/explainability, and poor quality of reporting hampered their clinical relevance. 
Our findings highlight important considerations that can inform standard reporting guidelines for ML prediction models in the future (e.g., TRIPOD-AI), and provide critical data to inform robust evaluation of the models.
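The review notes that evaluation of the included models was limited to measures of performance. One such measure, the c-statistic (area under the ROC curve), can be computed directly from its pairwise-concordance definition. The labels and scores below are invented for illustration and are not drawn from any of the reviewed models.

```python
def auc(labels, scores):
    """C-statistic: probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Compare every positive-negative pair of predicted scores
    pairs = [(p > n) + 0.5 * (p == n) for p in pos for n in neg]
    return sum(pairs) / len(pairs)

labels = [1, 0, 1, 1, 0, 0]
scores = [0.9, 0.2, 0.7, 0.6, 0.4, 0.3]
print(auc(labels, scores))  # 1.0: every positive outscores every negative
```

Discrimination measures like this say nothing about calibration or clinical utility, which is part of why the review calls for evaluation beyond performance metrics alone.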
Collapse
Affiliation(s)
- Busayo I Ajuwon
- National Centre for Epidemiology and Population Health, ANU College of Health and Medicine, The Australian National University, Acton, Australian Capital Territory, Australia; Department of Biosciences and Biotechnology, Faculty of Pure and Applied Sciences, Kwara State University, Malete, Nigeria.
| | - Oluwatosin N Awotundun
- Department of Epidemiology, Biostatistics and Occupational Health, Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
| | - Alice Richardson
- Statistical Support Network, The Australian National University, Acton, ACT, Australia
| | - Katrina Roper
- National Centre for Epidemiology and Population Health, ANU College of Health and Medicine, The Australian National University, Acton, Australian Capital Territory, Australia
| | - Meru Sheel
- Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, New South Wales, Australia
| | - Nurudeen Rahman
- Department of Medical Parasitology and Infection Biology, Swiss Tropical and Public Health Institute, Basel, Switzerland
| | - Abideen Salako
- Department of Clinical Sciences, Nigerian Institute of Medical Research, Yaba, Lagos State, Nigeria
| | - Brett A Lidbury
- National Centre for Epidemiology and Population Health, ANU College of Health and Medicine, The Australian National University, Acton, Australian Capital Territory, Australia
| |
Collapse
|
26
|
Pelayo C, Hoang J, Mora Pinzón M, Lock LJ, Fowlkes C, Stevens CL, Jacobson NA, Channa R, Liu Y. Perspectives of Latinx Patients with Diabetes on Teleophthalmology, Artificial Intelligence-Based Image Interpretation, and Virtual Care: A Qualitative Study. TELEMEDICINE REPORTS 2023; 4:317-326. [PMID: 37908628 PMCID: PMC10615055 DOI: 10.1089/tmr.2023.0045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 09/28/2023] [Indexed: 11/02/2023]
Abstract
Background Latinx populations in the United States bear a disproportionate burden of diabetic eye disease. Teleophthalmology with and without artificial intelligence (AI)-based image interpretation are validated methods for diabetic eye screening, but limited literature exists on patient perspectives. This study aimed at understanding the perspectives of Latinx patients with diabetes on teleophthalmology, AI-based image interpretation, and general virtual care to prevent avoidable blindness in this population. Methods We conducted semi-structured, individual interviews with 20 Latinx patients with diabetes at an urban, federally qualified health center in Madison, WI. Interviews were transcribed verbatim, professionally translated from Spanish to English, and analyzed using both inductive open coding and deductive coding. Results Most participants had no prior experience with teleophthalmology but did have experience with virtual care. Participants expressed a preference for teleophthalmology compared with traditional in-person dilated eye exams but were willing to obtain whichever method of screening was recommended by their primary care clinician. They also strongly preferred having human physician oversight in image review compared with having images interpreted solely using AI. Many participants preferred in-person clinic visits to virtual health care due to the ability to have a more thorough physical exam, as well as for improved non-verbal communication with their clinician. Discussion Leveraging primary care providers' recommendations, human oversight of AI-based image interpretation, and improving communication may enhance acceptance and utilization of teleophthalmology, AI, and virtual care by Latinx patients. Conclusions Understanding Latinx patient perspectives may contribute toward the development of more effective telemedicine interventions to enhance health equity in Latinx communities.
Collapse
Affiliation(s)
- Christian Pelayo
- Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Johnson Hoang
- Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Maria Mora Pinzón
- Division of Geriatrics and Gerontology, Department of Medicine, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Loren J. Lock
- Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Christiana Fowlkes
- Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Chloe L. Stevens
- Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Nora A. Jacobson
- Institute for Clinical and Translational Research, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- School of Nursing, Madison, Wisconsin, USA
| | - Roomasa Channa
- Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Yao Liu
- Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| |
Collapse
|
27
|
Manning F, Mahmoud A, Meertens R. Understanding patient views and acceptability of predictive software in osteoporosis identification. Radiography (Lond) 2023; 29:1046-1053. [PMID: 37734275 DOI: 10.1016/j.radi.2023.08.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Revised: 08/21/2023] [Accepted: 08/28/2023] [Indexed: 09/23/2023]
Abstract
INTRODUCTION Research into patient and public views on predictive software and its use in healthcare is relatively new. This study aimed to understand older adults' acceptability of an opportunistic bone density assessment for osteoporosis diagnosis (IBEX BH), their views on its integration into healthcare, and their views on predictive software and AI in healthcare more broadly. METHODS Focus groups were conducted with participants aged over 50 years, based in South West England. Data were analysed using thematic analysis, informed by the theoretical framework of acceptability. RESULTS Two focus groups were undertaken with a total of 14 participants. Overall, participants were generally positive about the IBEX BH software and about predictive software in general, stating 'it sounds like a brilliant idea'. Although participants did not understand the intricacies of the software, they did not feel they needed to. Concerns about IBEX BH focused more on the clinical implications of the software (e.g. further scans or medications), with participants expressing less trust in results if they indicated medication. Questions were also raised about how, and by whom, the results of this software would be received. Individual choice was evident in these discussions; however, most participants indicated a preference for spoken communication: 'But I would expect that these results would be given by a human to another human.' CONCLUSIONS Focus group participants were generally accepting of the use of predictive software in healthcare. IMPLICATIONS FOR PRACTICE Thought and care need to be taken when integrating predictive software into practice. A focus on empowering patients and providing information on processes and results is key.
Collapse
Affiliation(s)
- F Manning
- Department of Health and Care Professions, University of Exeter Medical School, University of Exeter, Exeter, UK.
| | - A Mahmoud
- Department of Health and Community Sciences, University of Exeter Medical School, University of Exeter, Exeter, UK.
| | - R Meertens
- Department of Health and Care Professions, University of Exeter Medical School, University of Exeter, Exeter, UK.
| |
Collapse
|
28
|
Stam WT, Ingwersen EW, Ali M, Spijkerman JT, Kazemier G, Bruns ERJ, Daams F. Machine learning models in clinical practice for the prediction of postoperative complications after major abdominal surgery. Surg Today 2023; 53:1209-1215. [PMID: 36840764 PMCID: PMC10520164 DOI: 10.1007/s00595-023-02662-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Accepted: 02/07/2023] [Indexed: 02/26/2023]
Abstract
Complications after surgery have a major impact on short- and long-term outcomes, and decades of technological advancement have not yet led to the eradication of their risk. The accurate prediction of complications, recently enhanced by the development of machine learning algorithms, has the potential to completely reshape surgical patient management. In this paper, we reflect on multiple issues facing the implementation of machine learning, from the development to the actual implementation of machine learning models in daily clinical practice, providing suggestions on the use of machine learning models for predicting postoperative complications after major abdominal surgery.
Collapse
Affiliation(s)
- Wessel T Stam
- Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, The Netherlands
- AGEM Amsterdam Gastroenterology, Endocrinology and Metabolism, Amsterdam, The Netherlands
| | - Erik W Ingwersen
- Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, The Netherlands
- AGEM Amsterdam Gastroenterology, Endocrinology and Metabolism, Amsterdam, The Netherlands
| | - Mahsoem Ali
- Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, The Netherlands
| | - Jorik T Spijkerman
- Independent Consultant in Computational Intelligence, Amsterdam, The Netherlands
| | - Geert Kazemier
- Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, The Netherlands
| | - Emma R J Bruns
- Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, The Netherlands
| | - Freek Daams
- Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands.
- Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, The Netherlands.
| |
Collapse
|
29
|
Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, Aldairem A, Alrashed M, Bin Saleh K, Badreldin HA, Al Yami MS, Al Harbi S, Albekairy AM. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC MEDICAL EDUCATION 2023; 23:689. [PMID: 37740191 PMCID: PMC10517477 DOI: 10.1186/s12909-023-04698-z] [Citation(s) in RCA: 59] [Impact Index Per Article: 59.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Accepted: 09/19/2023] [Indexed: 09/24/2023]
Abstract
INTRODUCTION Healthcare systems are complex and challenging for all stakeholders, but artificial intelligence (AI) has transformed various fields, including healthcare, with the potential to improve patient care and quality of life. Rapid AI advancements can revolutionize healthcare by integrating it into clinical practice. Reporting AI's role in clinical practice is crucial for successful implementation by equipping healthcare providers with essential knowledge and tools. RESEARCH SIGNIFICANCE This review article provides a comprehensive and up-to-date overview of the current state of AI in clinical practice, including its potential applications in disease diagnosis, treatment recommendations, and patient engagement. It also discusses the associated challenges, covering ethical and legal considerations and the need for human expertise. By doing so, it enhances understanding of AI's significance in healthcare and supports healthcare organizations in effectively adopting AI technologies. MATERIALS AND METHODS The current investigation analyzed the use of AI in the healthcare system with a comprehensive review of relevant indexed literature, such as PubMed/Medline, Scopus, and EMBASE, with no time constraints but limited to articles published in English. The focused question explores the impact of applying AI in healthcare settings and the potential outcomes of this application. RESULTS Integrating AI into healthcare holds excellent potential for improving disease diagnosis, treatment selection, and clinical laboratory testing. AI tools can leverage large datasets and identify patterns to surpass human performance in several healthcare aspects. AI offers increased accuracy, reduced costs, and time savings while minimizing human errors. 
It can revolutionize personalized medicine, optimize medication dosages, enhance population health management, establish guidelines, provide virtual health assistants, support mental health care, improve patient education, and influence patient-physician trust. CONCLUSION AI can be used to diagnose diseases, develop personalized treatment plans, and assist clinicians with decision-making. Rather than simply automating tasks, AI is about developing technologies that can enhance patient care across healthcare settings. However, challenges related to data privacy, bias, and the need for human expertise must be addressed for the responsible and effective implementation of AI in healthcare.
Collapse
Affiliation(s)
- Shuroug A Alowais
- Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Prince Mutib Ibn Abdullah Ibn Abdulaziz Rd, Riyadh, 14611, Saudi Arabia.
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia.
- Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia.
| | - Sahar S Alghamdi
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Department of Pharmaceutical Sciences, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
| | - Nada Alsuhebany
- Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Prince Mutib Ibn Abdullah Ibn Abdulaziz Rd, Riyadh, 14611, Saudi Arabia
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
| | - Tariq Alqahtani
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Department of Pharmaceutical Sciences, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
| | - Abdulrahman I Alshaya
- Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Prince Mutib Ibn Abdullah Ibn Abdulaziz Rd, Riyadh, 14611, Saudi Arabia
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
| | - Sumaya N Almohareb
- Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Prince Mutib Ibn Abdullah Ibn Abdulaziz Rd, Riyadh, 14611, Saudi Arabia
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
| | - Atheer Aldairem
- Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Prince Mutib Ibn Abdullah Ibn Abdulaziz Rd, Riyadh, 14611, Saudi Arabia
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
| | - Mohammed Alrashed
- Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Prince Mutib Ibn Abdullah Ibn Abdulaziz Rd, Riyadh, 14611, Saudi Arabia
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
| | - Khalid Bin Saleh
- Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Prince Mutib Ibn Abdullah Ibn Abdulaziz Rd, Riyadh, 14611, Saudi Arabia
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
| | - Hisham A Badreldin
- Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Prince Mutib Ibn Abdullah Ibn Abdulaziz Rd, Riyadh, 14611, Saudi Arabia
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
| | - Majed S Al Yami
- Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Prince Mutib Ibn Abdullah Ibn Abdulaziz Rd, Riyadh, 14611, Saudi Arabia
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
| | - Shmeylan Al Harbi
- Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Prince Mutib Ibn Abdullah Ibn Abdulaziz Rd, Riyadh, 14611, Saudi Arabia
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
| | - Abdulkareem M Albekairy
- Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Prince Mutib Ibn Abdullah Ibn Abdulaziz Rd, Riyadh, 14611, Saudi Arabia
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
| |
Collapse
|
30
|
Gould DJ, Dowsey MM, Glanville-Hearst M, Spelman T, Bailey JA, Choong PFM, Bunzli S. Patients' Views on AI for Risk Prediction in Shared Decision-Making for Knee Replacement Surgery: Qualitative Interview Study. J Med Internet Res 2023; 25:e43632. [PMID: 37721797 PMCID: PMC10546266 DOI: 10.2196/43632] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 05/04/2023] [Accepted: 08/21/2023] [Indexed: 09/19/2023] Open
Abstract
BACKGROUND The use of artificial intelligence (AI) in decision-making around knee replacement surgery is increasing, and this technology holds promise to improve the prediction of patient outcomes. Ambiguity surrounds the definition of AI, and there are mixed views on its application in clinical settings. OBJECTIVE In this study, we aimed to explore the understanding and attitudes of patients who underwent knee replacement surgery regarding AI in the context of risk prediction for shared clinical decision-making. METHODS This qualitative study involved patients who underwent knee replacement surgery at a tertiary referral center for joint replacement surgery. The participants were selected based on their age and sex. Semistructured interviews explored the participants' understanding of AI and their opinions on its use in shared clinical decision-making. Data collection and reflexive thematic analyses were conducted concurrently. Recruitment continued until thematic saturation was achieved. RESULTS Thematic saturation was achieved with 19 interviews and confirmed with 1 additional interview, resulting in 20 participants being interviewed (female participants: n=11, 55%; male participants: n=9, 45%; median age: 66 years). A total of 11 (55%) participants had a substantial postoperative complication. Three themes captured the participants' understanding of AI and their perceptions of its use in shared clinical decision-making. The theme Expectations captured the participants' views of themselves as individuals with the right to self-determination as they sought therapeutic solutions tailored to their circumstances, needs, and desires, including whether to use AI at all. The theme Empowerment highlighted the potential of AI to enable patients to develop realistic expectations and equip them with personalized risk information to discuss in shared decision-making conversations with the surgeon. 
The theme Partnership captured the importance of symbiosis between AI and clinicians because AI has varied levels of interpretability and understanding of human emotions and empathy. CONCLUSIONS Patients who underwent knee replacement surgery in this study had varied levels of familiarity with AI and diverse conceptualizations of its definitions and capabilities. Educating patients about AI through nontechnical explanations and illustrative scenarios could help inform their decision to use it for risk prediction in the shared decision-making process with their surgeon. These findings could be used in the process of developing a questionnaire to ascertain the views of patients undergoing knee replacement surgery on the acceptability of AI in shared clinical decision-making. Future work could investigate the accuracy of this patient group's understanding of AI, beyond their familiarity with it, and how this influences their acceptance of its use. Surgeons may play a key role in finding a place for AI in the clinical setting as the uptake of this technology in health care continues to grow.
Collapse
Affiliation(s)
- Daniel J Gould
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
| | - Michelle M Dowsey
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Department of Orthopaedics, St Vincent's Hospital Melbourne, Melbourne, Australia
| | | | - Tim Spelman
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
| | - James A Bailey
- School of Computing and Information Systems, University of Melbourne, Melbourne, Australia
| | - Peter F M Choong
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Department of Orthopaedics, St Vincent's Hospital Melbourne, Melbourne, Australia
| | - Samantha Bunzli
- School of Health Sciences and Social Work, Griffith University, Brisbane, Australia
| |
Collapse
|
31
|
Katirai A, Yamamoto BA, Kogetsu A, Kato K. Perspectives on artificial intelligence in healthcare from a Patient and Public Involvement Panel in Japan: an exploratory study. Front Digit Health 2023; 5:1229308. [PMID: 37781456 PMCID: PMC10533983 DOI: 10.3389/fdgth.2023.1229308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2023] [Accepted: 08/28/2023] [Indexed: 10/03/2023] Open
Abstract
Patients and members of the public are the end users of healthcare, but little is known about their views on the use of artificial intelligence (AI) in healthcare, particularly in the Japanese context. This paper reports on an exploratory two-part workshop conducted with members of a Patient and Public Involvement Panel in Japan, which was designed to identify their expectations and concerns about the use of AI in healthcare broadly. 55 expectations and 52 concerns were elicited from workshop participants, who were then asked to cluster and title these expectations and concerns. Thematic content analysis was used to identify 12 major themes from this data. Participants had notable expectations around improved hospital administration, improved quality of care and patient experience, and positive changes in roles and relationships, and reductions in costs and disparities. These were counterbalanced by concerns about problematic changes to healthcare and a potential loss of autonomy, as well as risks around accountability and data management, and the possible emergence of new disparities. The findings reflect participants' expectations for AI as a possible solution for long-standing issues in healthcare, though their overall balanced view of AI mirrors findings reported in other contexts. Thus, this paper offers initial, novel insights into perspectives on AI in healthcare from the Japanese context. Moreover, the findings are used to argue for the importance of involving patient and public stakeholders in deliberation on AI in healthcare.
Collapse
Affiliation(s)
- Amelia Katirai
- Research Center on Ethical, Legal, and Social Issues, Osaka University, Suita, Japan
| | | | - Atsushi Kogetsu
- Department of Biomedical Ethics and Public Policy, Graduate School of Medicine, Osaka University, Suita, Japan
| | - Kazuto Kato
- Department of Biomedical Ethics and Public Policy, Graduate School of Medicine, Osaka University, Suita, Japan
| |
Collapse
|
32
|
Thiébaut R, Hejblum B, Mougin F, Tzourio C, Richert L. ChatGPT and beyond with artificial intelligence (AI) in health: Lessons to be learned. Joint Bone Spine 2023; 90:105607. [PMID: 37414138 DOI: 10.1016/j.jbspin.2023.105607] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Revised: 06/16/2023] [Accepted: 06/23/2023] [Indexed: 07/08/2023]
Affiliation(s)
- Rodolphe Thiébaut
- Bordeaux Population Health, université Bordeaux, Inserm, U1219, 33000 Bordeaux cedex, France; INRIA, SISTM, 33000 Bordeaux cedex, France; Medical Information Department, CHU de Bordeaux, 33000 Bordeaux cedex, France.
| | - Boris Hejblum
- Bordeaux Population Health, université Bordeaux, Inserm, U1219, 33000 Bordeaux cedex, France; INRIA, SISTM, 33000 Bordeaux cedex, France
| | - Fleur Mougin
- Bordeaux Population Health, université Bordeaux, Inserm, U1219, 33000 Bordeaux cedex, France
| | - Christophe Tzourio
- Bordeaux Population Health, université Bordeaux, Inserm, U1219, 33000 Bordeaux cedex, France; Medical Information Department, CHU de Bordeaux, 33000 Bordeaux cedex, France
| | - Laura Richert
- Bordeaux Population Health, université Bordeaux, Inserm, U1219, 33000 Bordeaux cedex, France; INRIA, SISTM, 33000 Bordeaux cedex, France; Medical Information Department, CHU de Bordeaux, 33000 Bordeaux cedex, France
| |
Collapse
|
33
|
Felsky D, Cannitelli A, Pipitone J. Whole Person Modeling: a transdisciplinary approach to mental health research. DISCOVER MENTAL HEALTH 2023; 3:16. [PMID: 37638348 PMCID: PMC10449734 DOI: 10.1007/s44192-023-00041-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/02/2023] [Accepted: 08/10/2023] [Indexed: 08/29/2023]
Abstract
The growing global burden of mental illness has prompted calls for innovative research strategies. Theoretical models of mental health include complex contributions of biological, psychosocial, experiential, and other environmental influences. Accordingly, neuropsychiatric research has self-organized into largely isolated disciplines working to decode each individual contribution. However, research directly modeling objective biological measurements in combination with cognitive, psychological, demographic, or other environmental measurements is only now beginning to proliferate. This review aims (1) to describe the landscape of modern mental health research and current movement towards integrative study, (2) to provide a concrete framework for quantitative integrative research, which we call Whole Person Modeling, (3) to explore existing and emerging techniques and methods used in Whole Person Modeling, and (4) to discuss our observations about the scarcity, potential value, and untested aspects of highly transdisciplinary research in general. Whole Person Modeling studies have the potential to provide a better understanding of multilevel phenomena, deliver more accurate diagnostic and prognostic tests to aid in clinical decision making, and test long-standing theoretical models of mental illness. Some current barriers to progress include challenges with interdisciplinary communication and collaboration, systemic cultural barriers to transdisciplinary career paths, technical challenges in model specification, bias, and data harmonization, and gaps in transdisciplinary educational programs. We hope to ease anxiety in the field surrounding the often mysterious and intimidating world of transdisciplinary, data-driven mental health research and provide a useful orientation for students or highly specialized researchers who are new to this area.
Collapse
Affiliation(s)
- Daniel Felsky
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, 250 College Street, Toronto, ON M5T 1R8 Canada
- Department of Psychiatry, Faculty of Medicine, University of Toronto, Toronto, ON Canada
- Division of Biostatistics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON Canada
- Rotman Research Institute, Baycrest Hospital, Toronto, ON Canada
- Faculty of Medicine, McMaster University, Hamilton, ON Canada
| | - Alyssa Cannitelli
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, 250 College Street, Toronto, ON M5T 1R8 Canada
- Faculty of Medicine, McMaster University, Hamilton, ON Canada
| | - Jon Pipitone
- Department of Psychiatry, Queen’s University, Kingston, ON Canada
| |
Collapse
|
34
|
Teodorowski P, Gleason K, Gregory JJ, Martin M, Punjabi R, Steer S, Savasir S, Vema P, Murray K, Ward H, Chapko D. Participatory evaluation of the process of co-producing resources for the public on data science and artificial intelligence. RESEARCH INVOLVEMENT AND ENGAGEMENT 2023; 9:67. [PMID: 37580823 PMCID: PMC10426152 DOI: 10.1186/s40900-023-00480-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Accepted: 07/31/2023] [Indexed: 08/16/2023]
Abstract
BACKGROUND The growth of data science and artificial intelligence offers novel healthcare applications and research possibilities. Patients should be able to make informed choices about using healthcare. Therefore, they must be provided with lay information about new technology. A team consisting of academic researchers, health professionals, and public contributors collaboratively co-designed and co-developed the new resource offering that information. In this paper, we evaluate this novel approach to co-production. METHODS We used participatory evaluation to understand the co-production process. This consisted of creative approaches and reflexivity over three stages. Firstly, everyone had an opportunity to participate in three online training sessions. The first one focused on the aims of evaluation, the second on photovoice (that included practical training on using photos as metaphors), and the third on being reflective (recognising one's biases and perspectives during analysis). During the second stage, using photovoice, everyone took photos that symbolised their experiences of being involved in the project. This included a session with a professional photographer. At the last stage, we met in person and, using data collected from photovoice, built the mandala as a representation of a joint experience of the project. This stage was supported by professional artists who summarised the mandala in the illustration. RESULTS The mandala is the artistic presentation of the findings from the evaluation. It is a shared journey between everyone involved. We divided it into six related layers. 
Starting from the inside, the layers represent the following experiences: (1) public contributors had space to build confidence in a new topic, (2) relationships between individuals and within the project, (3) working remotely during the COVID-19 pandemic, (4) the motivations that led people to become involved in this particular piece of work, (5) the requirement that co-production be inclusive and accessible to everyone, and (6) expectations around data science and artificial intelligence that researchers should meet to establish public support. CONCLUSIONS The participatory evaluation suggests that co-production around data science and artificial intelligence can be a meaningful process that is co-owned by everyone involved.
Collapse
Affiliation(s)
| | - Kelly Gleason
- Imperial Cancer Research UK Lead Nurse, Department of Surgery and Cancer, Imperial College London, London, UK
| | - Jonathan J Gregory
- Computational Oncology Group, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London, UK
| | - Martha Martin
- School of Primary Care and Public Health, Imperial College London, London, UK
| | | | | | | | | | - Kabelo Murray
- School of Public Health, Imperial College London, London, UK
- NIHR Applied Research Collaboration Northwest London, Imperial College London, London, UK
| | - Helen Ward
- School of Public Health, Imperial College London, London, UK
- NIHR Applied Research Collaboration Northwest London, Imperial College London, London, UK
- National Institute for Health Research Imperial Biomedical Research Centre, London, UK
| | - Dorota Chapko
- School of Public Health, Imperial College London, London, UK
- NIHR Applied Research Collaboration Northwest London, Imperial College London, London, UK
| |
Collapse
|
35
|
Jarab AS, Al-Qerem W, Alzoubi KH, Obeidat H, Abu Heshmeh S, Mukattash TL, Naser YA, Al-Azayzih A. Artificial intelligence in pharmacy practice: Attitude and willingness of the community pharmacists and the barriers for its implementation. Saudi Pharm J 2023; 31:101700. [PMID: 37555012 PMCID: PMC10404546 DOI: 10.1016/j.jsps.2023.101700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Accepted: 07/04/2023] [Indexed: 08/10/2023] Open
Abstract
BACKGROUND Artificial intelligence (AI) is the capacity of machines to perform tasks that ordinarily require human intelligence. AI can be utilized in various pharmaceutical applications with less time and cost. OBJECTIVES To evaluate community pharmacists' willingness and attitudes towards the adoption of AI technology in pharmacy settings, and the barriers that hinder AI implementation. METHODS This cross-sectional study was conducted among community pharmacists in Jordan using an online questionnaire. In addition to socio-demographics, the survey assessed pharmacists' willingness, attitudes, and barriers to AI adoption in pharmacy. Binary logistic regression was conducted to identify the variables independently associated with willingness and attitude towards AI implementation. RESULTS The present study enrolled 401 pharmacist participants. The median age was 30 (29-33) years. Most of the pharmacists were females (66.6%), held a bachelor's degree in pharmacy (56.1%), had a low income (54.6%), and had one to five years of experience (35.9%). The pharmacists showed good willingness and attitude towards AI implementation in pharmacy (n = 401). The most common barriers to AI were lack of AI-related software and hardware (79.2%), the need for human supervision (76.4%), and the high running cost of AI (74.6%). Longer weekly working hours (attitude: OR = 1.072, 95% CI 1.040-1.104, P < 0.001; willingness: OR = 1.069, 95% CI 1.039-1.009, P = 0.011) and higher knowledge of AI applications (attitude: OR = 1.697, 95% CI 1.327-2.170; willingness: OR = 1.790, 95% CI 1.396-2.297; P < 0.001 for both) were significantly associated with better willingness and attitude towards AI, whereas greater years of experience (OR = 20.859, 95% CI 5.241-83.017, P < 0.001) were associated with higher willingness. In contrast, pharmacists with high income (OR = 0.382, 95% CI 0.183-0.795, P = 0.010), and those with <10 visitors (OR = 0.172, 95% CI 0.035-0.838, P = 0.029) or 31-50 visitors daily (OR = 0.392, 95% CI 0.162-0.944, P = 0.037), had less willingness to adopt AI. CONCLUSIONS Despite the pharmacists' positive willingness and attitudes toward AI, several barriers were identified, highlighting the importance of providing educational and training programs to improve pharmacists' knowledge of AI, as well as ensuring adequate funding support to overcome the issue of AI's high operating costs.
Collapse
Affiliation(s)
- Anan S. Jarab
- Department of Clinical Pharmacy, Faculty of Pharmacy, Jordan University of Science and Technology. P.O. Box 3030. Irbid 22110, Jordan
- College of Pharmacy, AL Ain University, Abu Dhabi, United Arab Emirates
| | - Walid Al-Qerem
- Department of Pharmacy, Faculty of Pharmacy, Al-Zaytoonah University of Jordan. P.O. Box 130, Amman 11733, Jordan
| | - Karem H Alzoubi
- Department of Pharmacy Practice and Pharmacotherapeutics, College of Pharmacy, University of Sharjah, Sharjah, UAE
- Faculty of Pharmacy, Jordan University of Science and Technology, Irbid, Jordan
| | - Haneen Obeidat
- Department of Clinical Pharmacy, Faculty of Pharmacy, Jordan University of Science and Technology. P.O. Box 3030. Irbid 22110, Jordan
| | - Shrouq Abu Heshmeh
- Department of Clinical Pharmacy, Faculty of Pharmacy, Jordan University of Science and Technology. P.O. Box 3030. Irbid 22110, Jordan
| | - Tareq L. Mukattash
- Department of Clinical Pharmacy, Faculty of Pharmacy, Jordan University of Science and Technology. P.O. Box 3030. Irbid 22110, Jordan
| | - Yara A. Naser
- School of Pharmacy, Queen’s University Belfast, Medical Biology Centre, 97 Lisburn Road, Belfast BT9 7BL, Northern Ireland, UK
| | - Ahmad Al-Azayzih
- Department of Clinical Pharmacy, Faculty of Pharmacy, Jordan University of Science and Technology. P.O. Box 3030. Irbid 22110, Jordan
| |
Collapse
|
36
|
Bauer IL. Robots in travel clinics: building on tourism's use of technology and robots for infection control during a pandemic. Trop Dis Travel Med Vaccines 2023; 9:10. [PMID: 37525269 PMCID: PMC10391865 DOI: 10.1186/s40794-023-00197-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Accepted: 06/14/2023] [Indexed: 08/02/2023] Open
Abstract
The arrival of COVID-19 impacted every aspect of life around the world. The virus, whose spread was facilitated overwhelmingly by people's close contact at home and by travelling, devastated the tourism, hospitality, and transportation industry. Economic survival depended largely on demonstrating to authorities and potential travellers the strict adherence to infection control measures. Fortunately, long before the pandemic, the industry had already employed digital technology, artificial intelligence, and service robots, not to keep the world safe, but to bridge staff shortages, save costs, reduce waiting times, streamline administration, complete unattractive, tedious, or physical tasks, or serve as marketing gimmicks. With COVID-19, offering social distancing and touchless service was an easy step, achieved by quickly extending what was already there. The question arose: could travellers' acceptance of technology and robots for infection control be useful in travel medicine? COVID-19 fostered the rapid and increased acceptance of touchless technology relating to all things travel. The public's expectations regarding hygiene, health and safety, and risk of infection have changed and may stay with us long after the pandemic, as 'the new normal', or until a new one approaches. This insight, combined with the current experience with robots in health and medicine, is useful in exploring how robots could assist travel medicine practice. However, several aspects need to be considered in terms of the type of robot, the tasks required, and the public's positive or negative attitudes towards robots, so as to avoid known pitfalls. To meet the crucial infection control measures of social distancing and touch avoidance, the use of robots in travel medicine may not only be readily accepted but expected, and implications for management, practice, and research need to be considered.
Collapse
Affiliation(s)
- Irmgard L Bauer
- College of Healthcare Sciences, Academy - Tropical Health and Medicine, James Cook University, Townsville, QLD, 4811, Australia.
| |
Collapse
|
37
|
Grassini S. Development and validation of the AI attitude scale (AIAS-4): a brief measure of general attitude toward artificial intelligence. Front Psychol 2023; 14:1191628. [PMID: 37554139 PMCID: PMC10406504 DOI: 10.3389/fpsyg.2023.1191628] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2023] [Accepted: 06/16/2023] [Indexed: 08/10/2023] Open
Abstract
The rapid advancement of artificial intelligence (AI) has generated an increasing demand for tools that can assess public attitudes toward AI. This study proposes the development and the validation of the AI Attitude Scale (AIAS), a concise self-report instrument designed to evaluate public perceptions of AI technology. The first version of the AIAS proposed in the present manuscript comprises five items, including one reverse-scored item, which aims to gauge individuals' beliefs about AI's influence on their lives, careers, and humanity overall. The scale is designed to capture attitudes toward AI, focusing on the perceived utility and potential impact of technology on society and humanity. The psychometric properties of the scale were investigated using diverse samples in two separate studies. An exploratory factor analysis was initially conducted on a preliminary 5-item version of the scale. This exploratory validation study revealed the need to divide the scale into two factors. While the results demonstrated satisfactory internal consistency for the overall scale and its correlation with related psychometric measures, separate analyses for each factor showed robust internal consistency for Factor 1 but insufficient internal consistency for Factor 2. As a result, a second version of the scale was developed and validated, omitting the item that displayed weak correlation with the remaining items in the questionnaire. The refined final 1-factor, 4-item AIAS demonstrated superior overall internal consistency compared to the initial 5-item scale and the proposed factors. Further confirmatory factor analyses, performed on a different sample of participants, confirmed that the 1-factor model (4-item) of the AIAS exhibited an adequate fit to the data, providing additional evidence for the scale's structural validity and generalizability across diverse populations. In conclusion, the analyses reported in this article suggest that the developed and validated 4-item AIAS can be a valuable instrument for researchers and professionals working on AI development who seek to understand and study users' general attitudes toward AI.
Collapse
Affiliation(s)
- Simone Grassini
- Department of Psychosocial Science, University of Bergen, Bergen, Norway
- Cognitive and Behavioral Neuroscience Lab, University of Stavanger, Stavanger, Norway
| |
Collapse
|
38
|
Nov O, Singh N, Mann D. Putting ChatGPT's Medical Advice to the (Turing) Test: Survey Study. JMIR MEDICAL EDUCATION 2023; 9:e46939. [PMID: 37428540 PMCID: PMC10366957 DOI: 10.2196/46939] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Revised: 05/26/2023] [Accepted: 06/14/2023] [Indexed: 07/11/2023]
Abstract
BACKGROUND Chatbots are being piloted to draft responses to patient questions, but patients' ability to distinguish between provider and chatbot responses and patients' trust in chatbots' functions are not well established. OBJECTIVE This study aimed to assess the feasibility of using ChatGPT (Chat Generative Pre-trained Transformer) or a similar artificial intelligence-based chatbot for patient-provider communication. METHODS A survey study was conducted in January 2023. Ten representative, nonadministrative patient-provider interactions were extracted from the electronic health record. Patients' questions were entered into ChatGPT with a request for the chatbot to respond using approximately the same word count as the human provider's response. In the survey, each patient question was followed by a provider- or ChatGPT-generated response. Participants were informed that 5 responses were provider generated and 5 were chatbot generated. Participants were asked, and financially incentivized, to correctly identify the response source. Participants were also asked about their trust in chatbots' functions in patient-provider communication, using a Likert scale from 1-5. RESULTS A US-representative sample of 430 study participants aged 18 and older were recruited on Prolific, a crowdsourcing platform for academic studies. In all, 426 participants filled out the full survey. After removing participants who spent less than 3 minutes on the survey, 392 respondents remained. Overall, 53.3% (209/392) of respondents analyzed were women, and the average age was 47.1 (range 18-91) years. The correct classification of responses ranged from 49% (192/392) to 85.7% (336/392) for different questions. On average, chatbot responses were identified correctly in 65.5% (1284/1960) of the cases, and human provider responses were identified correctly in 65.1% (1276/1960) of the cases. On average, responses regarding patients' trust in chatbots' functions were weakly positive (mean Likert score 3.4 out of 5), with lower trust as the health-related complexity of the task in the questions increased. CONCLUSIONS ChatGPT responses to patient questions were weakly distinguishable from provider responses. Laypeople appear to trust the use of chatbots to answer lower-risk health questions. It is important to continue studying patient-chatbot interaction as chatbots move from administrative to more clinical roles in health care.
Collapse
Affiliation(s)
- Oded Nov
- Department of Technology Management, Tandon School of Engineering, New York University, New York, NY, United States
| | - Nina Singh
- Department of Population Health, Grossman School of Medicine, New York University, New York, NY, United States
| | - Devin Mann
- Department of Population Health, Grossman School of Medicine, New York University, New York, NY, United States
- Medical Center Information Technology, Langone Health, New York University, New York, NY, United States
| |
Collapse
|
39
|
Wehkamp K, Krawczak M, Schreiber S. The Quality and Utility of Artificial Intelligence in Patient Care. DEUTSCHES ARZTEBLATT INTERNATIONAL 2023; 120:463-469. [PMID: 37218054 PMCID: PMC10487679 DOI: 10.3238/arztebl.m2023.0124] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 11/30/2022] [Accepted: 05/08/2023] [Indexed: 05/24/2023]
Abstract
BACKGROUND Artificial intelligence (AI) is increasingly being used in patient care. In the future, physicians will need to understand not only the basic functioning of AI applications, but also their quality, utility, and risks. METHODS This article is based on a selective review of the literature on the principles, quality, limitations, and benefits of AI applications in patient care, along with examples of individual applications. RESULTS The number of AI applications in patient care is rising, with more than 500 approvals in the United States to date. Their quality and utility are based on a number of interdependent factors, including the real-life setting, the type and amount of data collected, the choice of variables used by the application, the algorithms used, and the goal and implementation of each application. Bias (which may be hidden) and errors can arise at all these levels. Any evaluation of the quality and utility of an AI application must, therefore, be conducted according to the scientific principles of evidence-based medicine, a requirement that is often hampered by a lack of transparency. CONCLUSION AI has the potential to improve patient care while meeting the challenge of dealing with an ever-increasing surfeit of information and data in medicine with limited human resources. The limitations and risks of AI applications require critical and responsible consideration. This can best be achieved through a combination of scientific.
Collapse
Affiliation(s)
- Kai Wehkamp
- Department of Internal Medicine I, University Medical Center Schleswig-Holstein, Campus Lübeck, Kiel, Germany
- Department for Medical Management, MSH Medical School Hamburg, Hamburg, Germany
| | - Michael Krawczak
- Institute of Medical Informatics and Statistics, Christian-Albrechts-University of Kiel, University Medical Center Schleswig-Holstein Campus Kiel, Germany
| | - Stefan Schreiber
- Department of Internal Medicine I, University Medical Center Schleswig-Holstein, Campus Lübeck, Kiel, Germany
- Institute of Clinical Molecular Biology, Christian-Albrechts-University of Kiel, University Medical Center Schleswig-Holstein Campus Kiel, Germany
| |
Collapse
|
40
|
Retson TA, Eghtedari M. Expanding Horizons: The Realities of CAD, the Promise of Artificial Intelligence, and Machine Learning's Role in Breast Imaging beyond Screening Mammography. Diagnostics (Basel) 2023; 13:2133. [PMID: 37443526 DOI: 10.3390/diagnostics13132133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Revised: 06/06/2023] [Accepted: 06/12/2023] [Indexed: 07/15/2023] Open
Abstract
Artificial intelligence (AI) applications in mammography have gained significant popular attention; however, AI has the potential to revolutionize other aspects of breast imaging beyond simple lesion detection. AI has the potential to enhance risk assessment by combining conventional factors with imaging and improve lesion detection through a comparison with prior studies and considerations of symmetry. It also holds promise in ultrasound analysis and automated whole breast ultrasound, areas marked by unique challenges. AI's potential utility also extends to administrative tasks such as MQSA compliance, scheduling, and protocoling, which can reduce the radiologists' workload. However, adoption in breast imaging faces limitations in terms of data quality and standardization, generalizability, benchmarking performance, and integration into clinical workflows. Developing methods for radiologists to interpret AI decisions, and understanding patient perspectives to build trust in AI results, will be key future endeavors, with the ultimate aim of fostering more efficient radiology practices and better patient care.
Collapse
Affiliation(s)
- Tara A Retson
- Department of Radiology, University of California, San Diego, CA 92093, USA
| | - Mohammad Eghtedari
- Department of Radiology, University of California, San Diego, CA 92093, USA
| |
Collapse
|
41
|
Sauerbrei A, Kerasidou A, Lucivero F, Hallowell N. The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Med Inform Decis Mak 2023; 23:73. [PMID: 37081503 PMCID: PMC10116477 DOI: 10.1186/s12911-023-02162-y] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Accepted: 03/29/2023] [Indexed: 04/22/2023] Open
Abstract
Artificial intelligence (AI) is often cited as a possible solution to current issues faced by healthcare systems, including freeing up doctors' time and facilitating person-centred doctor-patient relationships. However, given the novelty of artificial intelligence tools, there is very little concrete evidence on their impact on the doctor-patient relationship or on how to ensure that they are implemented in a way that is beneficial for person-centred care. Given the importance of empathy and compassion in the practice of person-centred care, we conducted a literature review to explore how AI impacts these two values. Besides empathy and compassion, shared decision-making and trust relationships emerged as key values in the reviewed papers. We identified two concrete ways to help ensure that the use of AI tools has a positive impact on person-centred doctor-patient relationships: (1) using AI tools in an assistive role and (2) adapting medical education. The study suggests that we need to take intentional steps to ensure that the deployment of AI tools in healthcare has a positive impact on person-centred doctor-patient relationships. We argue that the proposed solutions are contingent upon clarifying the values underlying future healthcare systems.
Collapse
Affiliation(s)
- Aurelia Sauerbrei
- Ethox Centre, Nuffield Department of Population Health, University of Oxford, Big Data Institute, Old Road Campus, Oxford, OX3 7LF, UK.
| | - Angeliki Kerasidou
- Ethox Centre, Nuffield Department of Population Health, University of Oxford, Big Data Institute, Old Road Campus, Oxford, OX3 7LF, UK
| | - Federica Lucivero
- Ethox Centre, Nuffield Department of Population Health, University of Oxford, Big Data Institute, Old Road Campus, Oxford, OX3 7LF, UK
| | - Nina Hallowell
- Ethox Centre, Nuffield Department of Population Health, University of Oxford, Big Data Institute, Old Road Campus, Oxford, OX3 7LF, UK
| |
Collapse
|
42
|
Brauner P, Hick A, Philipsen R, Ziefle M. What does the public think about artificial intelligence?—A criticality map to understand bias in the public perception of AI. FRONTIERS IN COMPUTER SCIENCE 2023. [DOI: 10.3389/fcomp.2023.1113903] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/18/2023] Open
Abstract
INTRODUCTION Artificial Intelligence (AI) has become ubiquitous in medicine, business, manufacturing, and transportation, and is entering our personal lives. Public perceptions of AI are often shaped either by admiration for its benefits and possibilities, or by uncertainties, potential threats, and fears about this opaque technology that is often perceived as mysterious. Understanding the public perception of AI, as well as its requirements and attributions, is essential for responsible research and innovation and enables aligning the development and governance of future AI systems with individual and societal needs. METHODS To contribute to this understanding, we asked 122 participants in Germany how they perceived 38 statements about artificial intelligence in different contexts (personal, economic, industrial, social, cultural, health). We assessed their personal evaluation and the perceived likelihood of these aspects becoming reality. RESULTS We visualized the responses in a criticality map that allows the identification of issues that require particular attention from research and policy-making. The results show that the perceived evaluation and the perceived expectations differ considerably between the domains. The aspect perceived as most critical is the fear of cybersecurity threats, which is seen as highly likely and least liked. DISCUSSION The diversity of users influenced the evaluation: people with lower trust rated the impact of AI as more positive but less likely. Compared to people with higher trust, they consider certain features and consequences of AI to be more desirable, but they think the impact of AI will be smaller. We conclude that AI is still a "black box" for many. Neither the opportunities nor the risks can yet be adequately assessed, which can lead to biased and irrational control beliefs in the public perception of AI. The article concludes with guidelines for promoting AI literacy to facilitate informed decision-making.
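A criticality map of the kind described here pairs, for each statement, a mean evaluation (how much it is liked) with a mean perceived likelihood, so that aspects that are strongly disliked yet judged likely stand out. A hypothetical sketch of one possible scoring, with entirely invented ratings and an invented `criticality` function (the study's actual data, scale anchors, and scoring are not given here):

```python
# Hypothetical illustration only: each AI statement gets a mean evaluation
# (disliked -1 .. +1 liked) and a mean perceived likelihood (0 .. 1).
# All values below are invented for illustration, not taken from the study.
statements = {
    "cybersecurity threats":  (-0.8, 0.9),
    "AI assists diagnosis":   (0.6, 0.7),
    "AI curates culture":     (-0.4, 0.3),
}

def criticality(evaluation, likelihood):
    """Higher when an aspect is both negatively evaluated and seen as likely."""
    return max(0.0, -evaluation) * likelihood

ranked = sorted(statements, key=lambda s: criticality(*statements[s]), reverse=True)
print(ranked[0])  # with these invented values: "cybersecurity threats"
```

This mirrors the abstract's finding in miniature: the disliked-but-likely aspect dominates the map's critical region.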
Collapse
|
43
|
Bezrukova K, Griffith TL, Spell C, Rice V, Yang HE. Artificial Intelligence and Groups: Effects of Attitudes and Discretion on Collaboration. GROUP & ORGANIZATION MANAGEMENT 2023. [DOI: 10.1177/10596011231160574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/06/2023]
Abstract
We theorize about human-team collaboration with AI by drawing upon research in groups and teams, social psychology, information systems, engineering, and beyond. Based on our review, we focus on two main issues in the teams and AI arena. The first is whether the team generally views AI positively or negatively. The second is whether the decision to use AI is left up to the team members (voluntary use of AI) or mandated by top management or other policy-setters in the organization. These two aspects guide our creation of a team-level conceptual framework modeling how AI introduced as a mandated addition to the team can have asymmetric effects on collaboration level depending on the team’s attitudes about AI. When AI is viewed positively by the team, the effect of mandatory use suppresses collaboration in the team. But when a team has negative attitudes toward AI, mandatory use elevates team collaboration. Our model emphasizes the need for managing team attitudes and discretion strategies and promoting new research directions regarding AI’s implications for teamwork.
Collapse
Affiliation(s)
| | | | - Chester Spell
- Rutgers University School of Business, Camden NJ, USA
| | | | | |
Collapse
|
44
|
Pearce FJ, Cruz Rivera S, Liu X, Manna E, Denniston AK, Calvert MJ. The role of patient-reported outcome measures in trials of artificial intelligence health technologies: a systematic evaluation of ClinicalTrials.gov records (1997-2022). Lancet Digit Health 2023; 5:e160-e167. [PMID: 36828608 DOI: 10.1016/s2589-7500(22)00249-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2022] [Revised: 09/29/2022] [Accepted: 12/07/2022] [Indexed: 02/24/2023]
Abstract
The extent to which patient-reported outcome measures (PROMs) are used in clinical trials for artificial intelligence (AI) technologies is unknown. In this systematic evaluation, we aim to establish how PROMs are being used to assess AI health technologies. We searched ClinicalTrials.gov for interventional trials registered from inception to Sept 20, 2022, and included trials that tested an AI health technology. We excluded observational studies, patient registries, and expanded access reports. We extracted data regarding the form, function, and intended use population of the AI health technology, in addition to the PROMs used and whether PROMs were incorporated as an input or output in the AI model. The search identified 2958 trials, of which 627 were included in the analysis. 152 (24%) of the included trials used one or more PROM, visual analogue scale, patient-reported experience measure, or usability measure as a trial endpoint. The type of AI health technologies used by these trials included AI-enabled smart devices, clinical decision support systems, and chatbots. The number of clinical trials of AI health technologies registered on ClinicalTrials.gov and the proportion of trials that used PROMs increased from registry inception to 2022. The most common clinical areas AI health technologies were designed for were digestive system health for non-PROM trials and musculoskeletal health (followed by mental and behavioural health) for PROM trials, with PROMs commonly used in clinical areas for which assessment of health-related quality of life and symptom burden is particularly important. Additionally, AI-enabled smart devices were the most common applications tested in trials that used at least one PROM. 24 trials tested AI models that captured PROM data as an input for the AI model. PROM use in clinical trials of AI health technologies falls behind PROM use in all clinical trials. 
A limitation of this systematic evaluation was that some trial records lacked detail regarding the PROMs used or the type of AI health technology tested, which might have contributed to inaccuracies in the synthesised data. Overall, the use of PROMs in the function and assessment of AI health technologies is not only possible, but is a powerful way of showing that, even in the most technologically advanced health-care systems, patients' perspectives remain central.
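The headline figure in this evaluation, that roughly a quarter of included trials used a patient-reported measure, is a simple proportion of the reported counts. A minimal check using only numbers from the abstract:

```python
# Counts reported in the abstract.
included_trials = 627  # trials included in the analysis
prom_trials = 152      # trials using >= 1 PROM, VAS, PREM, or usability measure

share = 100 * prom_trials / included_trials
print(f"{share:.0f}%")  # -> 24%, matching the abstract's "152 (24%)"
```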
Collapse
Affiliation(s)
| | - Samantha Cruz Rivera
- Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK; Data-Enabled Medical Technologies and Devices Hub, University of Birmingham, Birmingham, UK.
| | - Xiaoxuan Liu
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
| | - Elaine Manna
- Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK
| | - Alastair K Denniston
- Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK; Data-Enabled Medical Technologies and Devices Hub, University of Birmingham, Birmingham, UK; Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Health Data Research UK, London, UK; National Institute for Health and Care Research Biomedical Research Centre for Ophthalmology, Moorfields Hospital London NHS Foundation Trust and Institute of Ophthalmology, University College London, London, UK
| | - Melanie J Calvert
- Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK; Data-Enabled Medical Technologies and Devices Hub, University of Birmingham, Birmingham, UK; National Institute for Health and Care Research Applied Research Collaboration West Midlands, University of Birmingham, Birmingham, UK; National Institute for Health and Care Research Birmingham Biomedical Research Centre, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; National Institute for Health and Care Research Surgical Reconstruction and Microbiology Centre, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Health Data Research UK, London, UK; National Institute for Health and Care Research Biomedical Research Centre for Ophthalmology, Moorfields Hospital London NHS Foundation Trust and Institute of Ophthalmology, University College London, London, UK; National Institute for Health and Care Research Birmingham-Oxford Blood and Transplant Research Unit in Precision Transplant and Cellular Therapeutics, Birmingham, UK
| |
Collapse
|
45
|
Macri R, Roberts SL. The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making. Curr Oncol 2023; 30:2178-2186. [PMID: 36826129 PMCID: PMC9955933 DOI: 10.3390/curroncol30020168] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Revised: 01/28/2023] [Accepted: 02/01/2023] [Indexed: 02/12/2023] Open
Abstract
Clinical applications of artificial intelligence (AI) in healthcare, including in the field of oncology, have the potential to advance diagnosis and treatment. The literature suggests that patient values should be considered in decision making when using AI in clinical care; however, there is a lack of practical guidance for clinicians on how to approach these conversations and incorporate patient values into clinical decision making. We provide a practical, values-based guide for clinicians to assist in critical reflection and the incorporation of patient values into shared decision making when deciding to use AI in clinical care. Values that are relevant to patients, identified in the literature, include trust, privacy and confidentiality, non-maleficence, safety, accountability, beneficence, autonomy, transparency, compassion, equity, justice, and fairness. The guide offers questions for clinicians to consider when adopting the potential use of AI in their practice; explores illness understanding between the patient and clinician; encourages open dialogue of patient values; reviews all clinically appropriate options; and makes a shared decision of what option best meets the patient's values. The guide can be used for diverse clinical applications of AI.
Collapse
Affiliation(s)
- Rosanna Macri
- Department of Bioethics, Sinai Health, Toronto, ON M5G 1X5, Canada
- Joint Centre for Bioethics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON M5T 1P8, Canada
- Department of Radiation Oncology, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5T 1P5, Canada
| | - Shannon L. Roberts
- Project-Specific Bioethics Research Volunteer Student, Hennick Bridgepoint Hospital, Sinai Health, Toronto, ON M4M 2B5, Canada
| |
Collapse
|
46
|
Towards precision medicine based on a continuous deep learning optimization and ensemble approach. NPJ Digit Med 2023; 6:18. [PMID: 36737644 PMCID: PMC9898519 DOI: 10.1038/s41746-023-00759-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Accepted: 01/17/2023] [Indexed: 02/05/2023] Open
Abstract
We developed a continuous learning system (CLS) based on deep learning with an optimization and ensemble approach, and conducted a prospective study simulated from retrospective data, using ultrasound images of breast masses for precise diagnosis. We extracted 629 breast masses and 2235 images from 561 cases in the institution to train the model in six stages to diagnose benign and malignant tumors, pathological types, and diseases. We randomly selected 180 out of 3098 cases from two external institutions. The CLS was tested with seven independent datasets and compared with 21 physicians; the system's diagnostic ability exceeded that of 20 physicians by training stage six. The optimal integrated method we developed is expected to accurately diagnose breast masses. This method can also be extended to the intelligent diagnosis of masses in other organs. Overall, our findings have potential value in further promoting the application of AI diagnosis in precision medicine.
Collapse
|
47
|
Chen Y, Hosin AA, George MJ, Asselbergs FW, Shah AD. Digital technology and patient and public involvement (PPI) in routine care and clinical research-A pilot study. PLoS One 2023; 18:e0278260. [PMID: 36735724 PMCID: PMC9897511 DOI: 10.1371/journal.pone.0278260] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Accepted: 11/13/2022] [Indexed: 02/04/2023] Open
Abstract
BACKGROUND Patient and public involvement (PPI) has a growing impact on the design of clinical care and research studies. Formal PPI events, including views related to using digital tools, remain underreported. This study aimed to assess the feasibility of hosting a hybrid PPI event to gather views on the use of digital tools in clinical care and research. METHODS A PPI focus day was held following local procedures and published recommendations related to advertisement, communication, and delivery. Two exemplar projects were used as the basis for discussions, and qualitative and quantitative data were collected. RESULTS 32 individuals expressed interest in the PPI day and 9 were selected to attend; 3 participated in person and 6 via an online video-calling platform. Selected written and verbal feedback was collected on two digitally themed projects and on the event itself. Overall quality and interactivity were both rated 4/5 by those who attended in person, and 4.5/5 and 4.8/5, respectively, by those who attended remotely. CONCLUSIONS A hybrid PPI event is feasible and offers a flexible format to capture the views of patients. Overall enthusiasm for digital tools amongst patients in routine care and clinical research is high, though further work and standardised, systematic reporting of PPI events are required.
Collapse
Affiliation(s)
- Yang Chen
- Institute of Health Informatics, Faculty of Population Health Sciences, University College London, London, United Kingdom
- Clinical Research Informatics Unit, University College London Hospitals, London, United Kingdom
| | - Ali A. Hosin
- Clinical Pharmacology Department, University College London Hospitals, London, United Kingdom
| | - Marc J. George
- Clinical Pharmacology Department, University College London Hospitals, London, United Kingdom
| | - Folkert W. Asselbergs
- Institute of Health Informatics, Faculty of Population Health Sciences, University College London, London, United Kingdom
- Clinical Research Informatics Unit, University College London Hospitals, London, United Kingdom
- Division Heart & Lungs, Department of Cardiology, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Institute of Cardiovascular Science, Faculty of Population Health Sciences, University College London, London, United Kingdom
| | - Anoop D. Shah
- Institute of Health Informatics, Faculty of Population Health Sciences, University College London, London, United Kingdom
- Clinical Research Informatics Unit, University College London Hospitals, London, United Kingdom
- Clinical Pharmacology Department, University College London Hospitals, London, United Kingdom
| |
Collapse
|
48
|
Hogg HDJ, Al-Zubaidy M, Talks J, Denniston AK, Kelly CJ, Malawana J, Papoutsi C, Teare MD, Keane PA, Beyer FR, Maniatopoulos G. Stakeholder Perspectives of Clinical Artificial Intelligence Implementation: Systematic Review of Qualitative Evidence. J Med Internet Res 2023; 25:e39742. [PMID: 36626192 PMCID: PMC9875023 DOI: 10.2196/39742] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Revised: 09/28/2022] [Accepted: 11/30/2022] [Indexed: 12/03/2022] Open
Abstract
BACKGROUND The rhetoric surrounding clinical artificial intelligence (AI) often exaggerates its effect on real-world care. Limited understanding of the factors that influence its implementation can perpetuate this. OBJECTIVE In this qualitative systematic review, we aimed to identify key stakeholders, consolidate their perspectives on clinical AI implementation, and characterize the evidence gaps that future qualitative research should target. METHODS Ovid-MEDLINE, EBSCO-CINAHL, ACM Digital Library, Science Citation Index-Web of Science, and Scopus were searched for primary qualitative studies on individuals' perspectives on any application of clinical AI worldwide (January 2014-April 2021). The definition of clinical AI includes both rule-based and machine learning-enabled or non-rule-based decision support tools. The language of the reports was not an exclusion criterion. Two independent reviewers performed title, abstract, and full-text screening with a third arbiter of disagreement. Two reviewers assigned the Joanna Briggs Institute 10-point checklist for qualitative research scores for each study. A single reviewer extracted free-text data relevant to clinical AI implementation, noting the stakeholders contributing to each excerpt. The best-fit framework synthesis used the Nonadoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework. To validate the data and improve accessibility, coauthors representing each emergent stakeholder group codeveloped summaries of the factors most relevant to their respective groups. RESULTS The initial search yielded 4437 deduplicated articles, with 111 (2.5%) eligible for inclusion (median Joanna Briggs Institute 10-point checklist for qualitative research score, 8/10). 
Five distinct stakeholder groups emerged from the data: health care professionals (HCPs); patients, carers, and other members of the public; developers; health care managers and leaders; and regulators or policy makers, contributing 1204 (70%), 196 (11.4%), 133 (7.7%), 129 (7.5%), and 59 (3.4%) of 1721 eligible excerpts, respectively. All stakeholder groups independently identified a breadth of implementation factors, with each producing data that were mapped to between 17 and 24 of the 27 adapted Nonadoption, Abandonment, Scale-up, Spread, and Sustainability subdomains. Most of the factors that stakeholders found influential in the implementation of rule-based clinical AI also applied to non-rule-based clinical AI, with the exception of intellectual property, regulation, and sociocultural attitudes. CONCLUSIONS Clinical AI implementation is influenced by many interdependent factors, which are in turn influenced by at least 5 distinct stakeholder groups. This implies that effective research and practice of clinical AI implementation should consider multiple stakeholder perspectives. The current underrepresentation of perspectives from stakeholders other than HCPs in the literature may limit the anticipation and management of the factors that influence successful clinical AI implementation. Future research should not only widen the representation of tools and contexts in qualitative research but also specifically investigate the perspectives of all stakeholder groups and emerging aspects of non-rule-based clinical AI implementation. TRIAL REGISTRATION PROSPERO (International Prospective Register of Systematic Reviews) CRD42021256005; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=256005. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) RR2-10.2196/33145.
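The per-group shares reported in this review can be reproduced from the excerpt counts given in the abstract; a minimal Python check (counts taken verbatim from the abstract):

```python
# Eligible excerpt counts per stakeholder group, as reported in the abstract.
excerpts = {
    "health care professionals": 1204,
    "patients, carers, and other members of the public": 196,
    "developers": 133,
    "health care managers and leaders": 129,
    "regulators or policy makers": 59,
}

total = sum(excerpts.values())
assert total == 1721  # matches the reported total of eligible excerpts

for group, n in excerpts.items():
    print(f"{group}: {100 * n / total:.1f}%")  # 70.0, 11.4, 7.7, 7.5, 3.4
```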
Collapse
Affiliation(s)
- Henry David Jeffry Hogg
- Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | - Mohaimen Al-Zubaidy
- Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, United Kingdom
| | - James Talks
- Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, United Kingdom
| | - Alastair K Denniston
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
| | | | - Johann Malawana
- The Healthcare Leadership Academy, London, United Kingdom
- The Institute of Leadership and Management, Birmingham, United Kingdom
| | - Chrysanthi Papoutsi
- Nuffield Department of Primary Healthcare Sciences, Oxford University, Oxford, United Kingdom
| | - Marion Dawn Teare
- Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
| | - Pearse A Keane
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Institute of Ophthalmology, University College London, London, United Kingdom
| | - Fiona R Beyer
- Evidence Synthesis Group, Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
| | - Gregory Maniatopoulos
- Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Faculty of Business and Law, Northumbria University, Newcastle upon Tyne, United Kingdom
| |
Collapse
|
49
|
Gonsard A, AbouTaam R, Prévost B, Roy C, Hadchouel A, Nathan N, Taytard J, Pirojoc A, Delacourt C, Wanin S, Drummond D. Children's views on artificial intelligence and digital twins for the daily management of their asthma: a mixed-method study. Eur J Pediatr 2023; 182:877-888. [PMID: 36512148 PMCID: PMC9745267 DOI: 10.1007/s00431-022-04754-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/15/2022] [Revised: 11/30/2022] [Accepted: 12/05/2022] [Indexed: 12/15/2022]
Abstract
New technologies enable the creation of digital twin systems (DTS) combining continuous data collection from children's home and artificial intelligence (AI)-based recommendations to adapt their care in real time. The objective was to assess whether children and adolescents with asthma would be ready to use such DTS. A mixed-method study was conducted with 104 asthma patients aged 8 to 17 years. The potential advantages and disadvantages associated with AI and the use of DTS were collected in semi-structured interviews. Children were then asked whether they would agree to use a DTS for the daily management of their asthma. The strength of their decision was assessed as well as the factors determining their choice. The main advantages of DTS identified by children were the possibility to be (i) supported in managing their asthma (ii) from home and (iii) in real time. Technical issues and the risk of loss of humanity were the main drawbacks reported. Half of the children (56%) were willing to use a DTS for the daily management of their asthma if it was as effective as current care, and up to 93% if it was more effective. Those with the best computer skills were more likely to choose the DTS, while those who placed a high value on the physician-patient relationship were less likely to do so. Conclusions: The majority of children were ready to use a DTS for the management of their asthma, particularly if it was more effective than current care. The results of this study support the development of DTS for childhood asthma and the evaluation of their effectiveness in clinical trials. What is Known: • New technologies enable the creation of digital twin systems (DTS) for children with asthma. • Acceptance of these DTSs by children with asthma is unknown. What is New: • Half of the children (56%) were willing to use a DTS for the daily management of their asthma if it was as effective as current care, and up to 93% if it was more effective. 
• Children identified the ability to be supported from home and in real time as the main benefits of DTS.
Collapse
Affiliation(s)
- Apolline Gonsard
- Department of Pediatric Pulmonology and Allergology, University Hospital Necker-Enfants Malades, AP-HP, 149 Rue de Sèvres, 75015 Paris, France
| | - Rola AbouTaam
- Department of Pediatric Pulmonology and Allergology, University Hospital Necker-Enfants Malades, AP-HP, 149 Rue de Sèvres, 75015 Paris, France
| | - Blandine Prévost
- Department of Pediatric Pulmonology, University Hospital Armand Trousseau, AP-HP, Paris, France
| | - Charlotte Roy
- Department of Pediatric Pulmonology and Allergology, University Hospital Necker-Enfants Malades, AP-HP, 149 Rue de Sèvres, 75015 Paris, France
| | - Alice Hadchouel
- Department of Pediatric Pulmonology and Allergology, University Hospital Necker-Enfants Malades, AP-HP, 149 Rue de Sèvres, 75015 Paris, France
- Université Paris Cité, Paris, France
| | - Nadia Nathan
- Department of Pediatric Pulmonology, University Hospital Armand Trousseau, AP-HP, Paris, France
| | - Jessica Taytard
- Department of Pediatric Pulmonology, University Hospital Armand Trousseau, AP-HP, Paris, France
- UMRS1158 Neurophysiologie Respiratoire Expérimentale Et Clinique, Sorbonne Université, INSERM, Paris, France
| | | | - Christophe Delacourt
- Department of Pediatric Pulmonology and Allergology, University Hospital Necker-Enfants Malades, AP-HP, 149 Rue de Sèvres, 75015 Paris, France
- Université Paris Cité, Paris, France
| | - Stéphanie Wanin
- Department of Pediatric Allergology, University Hospital Armand Trousseau, AP-HP, Paris, France
| | - David Drummond
- Department of Pediatric Pulmonology and Allergology, University Hospital Necker-Enfants Malades, AP-HP, 149 Rue de Sèvres, 75015 Paris, France
- Université Paris Cité, Paris, France
- Inserm UMR 1138, Centre de Recherche Des Cordeliers, HeKA Team, 75006 Paris, France
| |
Collapse
|
50
|
Partnering with children and youth to advance artificial intelligence in healthcare. Pediatr Res 2023; 93:284-286. [PMID: 35681090 DOI: 10.1038/s41390-022-02139-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 04/29/2022] [Indexed: 11/08/2022]
|