1. Ni Y, Jia F. A Scoping Review of AI-Driven Digital Interventions in Mental Health Care: Mapping Applications Across Screening, Support, Monitoring, Prevention, and Clinical Education. Healthcare (Basel) 2025; 13:1205. PMID: 40428041; PMCID: PMC12110772; DOI: 10.3390/healthcare13101205.
Abstract
BACKGROUND/OBJECTIVES: Artificial intelligence (AI)-enabled digital interventions are increasingly used to expand access to mental health care. This PRISMA-ScR scoping review maps how AI technologies support mental health care across five phases: pre-treatment (screening), treatment (therapeutic support), post-treatment (monitoring), clinical education, and population-level prevention.
METHODS: We synthesized findings from 36 empirical studies published through January 2024 that implemented AI-driven digital tools, including large language models (LLMs), machine learning (ML) models, and conversational agents. Use cases include referral triage, remote patient monitoring, empathic communication enhancement, and AI-assisted psychotherapy delivered via chatbots and voice agents.
RESULTS: Across the 36 included studies, the most common AI modalities included chatbots, natural language processing tools, machine learning and deep learning models, and large language model-based agents. These technologies were predominantly used for support, monitoring, and self-management purposes rather than as standalone treatments. Reported benefits included reduced wait times, increased engagement, and improved symptom tracking. However, recurring challenges such as algorithmic bias, data privacy risks, and workflow integration barriers highlight the need for ethical design and human oversight.
CONCLUSION: By introducing a four-pillar framework, this review offers a comprehensive overview of current applications and future directions in AI-augmented mental health care. It aims to guide researchers, clinicians, and policymakers in developing safe, effective, and equitable digital mental health interventions.
Affiliation(s)
- Yang Ni: School of International and Public Affairs, Columbia University, New York, NY 10027, USA
- Fanli Jia: Department of Psychology, Seton Hall University, South Orange, NJ 07079, USA

2. McConnon AD, Nash AJ, Roberts JR, Juni SZ, Derenbecker A, Shanahan P, Waters AJ. Incorporating AI Into Military Behavioral Health: A Narrative Review. Mil Med 2025:usaf162. PMID: 40327321; DOI: 10.1093/milmed/usaf162.
Abstract
INTRODUCTION: Concerns regarding suicide rates and declining mental health among service members highlight the need for impactful approaches to address the behavioral health needs of U.S. military populations and to improve force readiness. Research in civilian populations has revealed that artificial intelligence and machine learning (AI/ML) hold promise for advancing behavioral health care in six domains: Education and Training, Screening and Assessment, Diagnosis, Treatment, Prognosis, and Clinical Documentation and Administrative Tasks.
MATERIALS AND METHODS: We conducted a narrative review of research conducted in U.S. military populations, published between 2019 and 2024, that involved AI/ML in behavioral health. Studies were extracted from Embase, PubMed, PsycInfo, and the Defense Technical Information Center. Nine studies were considered appropriate for the review.
RESULTS: Compared to research in civilian populations, there has been much less research in U.S. military populations on the use of AI/ML in behavioral health. The selected studies using ML have shown promise for screening and assessment, such as predicting negative mental health outcomes in military populations. ML has also been applied to diagnosis as well as prognosis, with initial positive results. More research is needed to validate the results of the studies reviewed.
CONCLUSIONS: There is potential for AI/ML to be applied more extensively to military behavioral health, including education/training, treatment, and clinical documentation/administrative tasks. The article describes challenges for further integration of AI into military behavioral health, considering the perspectives of service members, providers, and system-level infrastructure.
Affiliation(s)
- Ann D McConnon, Airyn J Nash, John Ray Roberts, Shmuel Z Juni, Ashley Derenbecker, Patrice Shanahan, and Andrew J Waters: Department of Medical and Clinical Psychology, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, United States

3. Rahsepar Meadi M, Sillekens T, Metselaar S, van Balkom A, Bernstein J, Batelaan N. Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review. JMIR Ment Health 2025; 12:e60432. PMID: 39983102; PMCID: PMC11890142; DOI: 10.2196/60432.
Abstract
BACKGROUND: Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns.
OBJECTIVE: We aimed to provide a comprehensive overview of ethical considerations surrounding CAI as a therapist for individuals with mental health issues.
METHODS: We conducted a systematic search across PubMed, Embase, APA PsycINFO, Web of Science, Scopus, the Philosopher's Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist for individuals with mental health issues. We added additional articles through snowball searching. We included articles in English or Dutch. All types of articles were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on the expected considerations and revised and complemented during the charting process. The ethical challenges were divided into themes. When a concern occurred in more than 2 articles, we identified it as a distinct theme.
RESULTS: We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%) followed by commentaries (n=17, 16.8%). The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles); the most common topics within this theme were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including topics such as the effects of "black box" algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including themes such as health inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns for health care workers' jobs (n=16, 15.8%). Other themes were discussed in 9.9% (n=10) of the identified articles.
CONCLUSIONS: Our scoping review has comprehensively covered ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders' perspectives are insufficiently represented, this study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison to human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for responsible CAI use in mental health care.
Affiliation(s)
- Mehrdad Rahsepar Meadi: Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Department of Ethics, Law, & Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Tomas Sillekens: GGZ Centraal Mental Health Care, Amersfoort, The Netherlands
- Suzanne Metselaar: Department of Ethics, Law, & Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Anton van Balkom: Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Justin Bernstein: Department of Philosophy, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Neeltje Batelaan: Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands

4. Doan T, Sullivan B, Koerber J, Hickok K, Soares N. Perceptions of Machine Learning among Therapists Practicing Applied Behavior Analysis: A National Survey. Behav Anal Pract 2024; 17:1147-1159. PMID: 39790908; PMCID: PMC11707160; DOI: 10.1007/s40617-024-00936-y.
Abstract
Collecting data and logging behaviors of clients who have autism spectrum disorder (ASD) during applied behavior analysis (ABA) therapy sessions can be challenging in real time, especially when the behaviors require a rapid response, like self-injury or aggression. Little information is available about the automation of data collection in ABA therapy, such as through machine learning (ML). Our survey of ABA therapists nationally revealed mixed levels of familiarity with ML and generally neutral responses to statements endorsing the benefits of ML. Higher certification levels and more years of experience with ABA were associated with decreased confidence in ML's ability to accurately identify behaviors during ABA sessions, whereas previous familiarity with ML was associated with confidence in ML, comfort with using ML, and trust that ML technology can keep client data secure. Understanding the perceptions of ABA therapists can guide future endeavors to incorporate ML for automated behavior logging into ABA practice.
Key points:
- Applied behavior analysis (ABA) therapists perceive some value in utilizing machine learning (ML) in data collection during ABA sessions, but the majority of therapists are not familiar with the concept of ML.
- In our survey, ABA therapists with greater familiarity with ML were more likely to be comfortable using ML in their practice.
- Surveyed ABA therapists with higher certification levels and more experience with ABA were less likely to be confident in ML's ability to identify behaviors accurately.
- Awareness of ABA therapists' perspectives about ML, especially regarding privacy and security, and partnership with computer scientists can further the development of ML technology to augment data collection during ABA therapy.
- Educating ABA therapists about the potential of ML, especially the potential to reduce the burden of behavior logging while simultaneously intervening for aggressive and self-injurious behaviors, will be necessary for successful implementation of ML in ABA therapy settings.
Supplementary information: The online version contains supplementary material available at 10.1007/s40617-024-00936-y.
Affiliation(s)
- Tam Doan: Western Michigan University Homer Stryker M.D. School of Medicine, Kalamazoo, MI, USA
- Brittany Sullivan: Western Michigan University Homer Stryker M.D. School of Medicine, Kalamazoo, MI, USA
- Jeana Koerber: Great Lakes Center for Autism Treatment and Research, Portage, MI, USA
- Kirsten Hickok: Department of Biomedical Informatics, Western Michigan University Homer Stryker M.D. School of Medicine, Kalamazoo, MI, USA
- Neelkamal Soares: Department of Pediatrics, University of Michigan Medical School, Ann Arbor, MI, USA

5. Li W, Shi HY, Chen XL, Lan JZ, Rehman AU, Ge MW, Shen LT, Hu FH, Jia YJ, Li XM, Chen HL. Application of artificial intelligence in medical education: A meta-ethnographic synthesis. Med Teach 2024:1-14. PMID: 39480998; DOI: 10.1080/0142159X.2024.2418936.
Abstract
The advancement of artificial intelligence (AI) has had a profound impact on medical education. Understanding the advantages and issues of AI in medical education, providing guidance for educators, and overcoming challenges in the implementation process are therefore particularly important. The objective of this study is to explore the current state of AI applications in medical education. A systematic search was conducted across databases such as PsycINFO, CINAHL, Scopus, PubMed, and Web of Science to identify relevant studies. The Critical Appraisal Skills Programme (CASP) was employed for the quality assessment of these studies, followed by thematic synthesis to analyze the themes from the included research. Ultimately, 21 studies were identified, establishing four themes: (1) Shaping the Future: Current Trends in AI within Medical Education; (2) Advancing Medical Instruction: The Transformative Power of AI; (3) Navigating the Ethical Landscape of AI in Medical Education; and (4) Fostering Synergy: Integrating Artificial Intelligence in Medical Curriculum. Artificial intelligence's role in medical education, while not yet extensive, is impactful and promising. Despite challenges, including ethical concerns over privacy, responsibility, and humanistic care, future efforts should focus on integrating AI through targeted courses to improve educational quality.
Affiliation(s)
- Wei Li: School of Nursing and Rehabilitation, Nantong University, Nantong, Jiangsu, China
- Hai-Yan Shi: Nantong University Affiliated Rugao Hospital, Rugao People's Hospital, Nantong, Jiangsu, China
- Xiao-Ling Chen: Department of Respiratory Medicine, Dongtai People's Hospital, Yancheng, Jiangsu, China
- Jian-Zeng Lan: School of Nursing and Rehabilitation, Nantong University, Nantong, Jiangsu, China
- Attiq-Ur Rehman: School of Nursing and Rehabilitation, Nantong University, Nantong, Jiangsu, China; Gulfreen Nursing College, Avicenna Hospital Bedian, Lahore, Pakistan
- Meng-Wei Ge: School of Nursing and Rehabilitation, Nantong University, Nantong, Jiangsu, China
- Lu-Ting Shen: School of Nursing and Rehabilitation, Nantong University, Nantong, Jiangsu, China
- Fei-Hong Hu: School of Nursing and Rehabilitation, Nantong University, Nantong, Jiangsu, China
- Yi-Jie Jia: School of Nursing and Rehabilitation, Nantong University, Nantong, Jiangsu, China
- Xiao-Min Li: Nantong First People's Hospital, The Second Affiliated Hospital of Nantong University, Nantong, Jiangsu, China
- Hong-Lin Chen: School of Nursing and Rehabilitation, Nantong University, Nantong, Jiangsu, China

6. An Q, Yang J, Xu X, Zhang Y, Zhang H. Decoding AI ethics from Users' lens in education: A systematic review. Heliyon 2024; 10:e39357. PMID: 39640624; PMCID: PMC11620203; DOI: 10.1016/j.heliyon.2024.e39357.
Abstract
In recent years, artificial intelligence (AI) has witnessed remarkable expansion, greatly benefiting the education sector. Nonetheless, this advancement brings forth several ethical dilemmas. Existing research on these ethical concerns within the educational context is notably scarce, particularly from a user's standpoint. This research systematically reviewed 17 empirical articles published between January 2018 and June 2023, sourced from peer-reviewed journals and conferences, to outline existing ethical frameworks in Artificial Intelligence in Education (AIED), identify related concerns from users' perspectives, and construct an ethics guideline for AIED. The findings revealed that certain ethical aspects, including the ethics of learning analytics and the ethics of algorithms in AIED, are often neglected in existing ethical frameworks, principles, and standards for AIED. Based on the gap between existing ethical frameworks and the ethical concerns raised from users' perspectives, the research proposes a more inclusive and considered ethics guideline for AIED. The study also provides actionable recommendations for multiple stakeholders, emphasizing the need for guidelines that address user-centered concerns. In addition, how this ethics guideline for AIED could be developed is discussed, along with potential avenues for future research.
Affiliation(s)
- Qin An: Higher Education Development Center, University of Otago, Otago, New Zealand
- Jingmei Yang: School of International Business, Chengdu International Studies University, Sichuan, China
- Xiaoshu Xu: School of Foreign Studies, Wenzhou University, Wenzhou, China
- Yunfeng Zhang: Faculty of Languages and Translation, Macao Polytechnic University, Macao, China
- Huanhuan Zhang: Faculty of Applied Sciences, Macao Polytechnic University, Macao, China

7. Tozsin A, Ucmak H, Soyturk S, Aydin A, Gozen AS, Fahim MA, Güven S, Ahmed K. The Role of Artificial Intelligence in Medical Education: A Systematic Review. Surg Innov 2024; 31:415-423. PMID: 38632898; DOI: 10.1177/15533506241248239.
Abstract
BACKGROUND: To examine the artificial intelligence (AI) tools currently being studied in modern medical education and critically evaluate the level of validation and the quality of evidence presented in each individual study.
METHODS: This review (PROSPERO ID: CRD42023410752) was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. A database search was conducted using PubMed, Embase, and the Cochrane Library. Articles written in English between 2000 and March 2023 were reviewed retrospectively using the MeSH terms "AI" and "medical education". A total of 4642 potentially relevant studies were found.
RESULTS: After a thorough screening process, 36 studies were included in the final analysis: 26 quantitative studies and 10 studies investigating the development and validation of AI tools. Studies employing support vector machines (SVMs) demonstrated high accuracy in assessing students' experiences, diagnosing acute abdominal pain, classifying skilled and novice participants, and evaluating surgical training levels. In the comparison of surgical skill levels in particular, SVMs achieved an accuracy rate of over 92%.
CONCLUSION: AI tools demonstrated effectiveness in improving practical skills, diagnosing diseases, and evaluating student performance. However, further research with rigorous validation is required to identify the most effective AI tools for medical education.
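For readers unfamiliar with the SVM approach these studies rely on, the following is a minimal sketch of how a skill-level classifier of this kind is typically built. The features, data, and labels here are invented placeholders, not drawn from any reviewed study; only the general recipe (scaling plus a kernel SVM, evaluated by cross-validation) is standard.

```python
# Hypothetical sketch: SVM classification of surgical skill level.
# All features and labels are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Toy feature matrix: e.g., path length, completion time, motion smoothness
X = rng.normal(size=(80, 3))
y = rng.integers(0, 2, size=80)  # 0 = novice, 1 = skilled (synthetic labels)

# Feature scaling matters for SVMs, so bundle it with the classifier
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```

With real simulator metrics in place of the random data, the reported cross-validated accuracy is the kind of figure the >92% result above refers to.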
Affiliation(s)
- Atinc Tozsin: Department of Urology, Trakya University School of Medicine, Edirne, Turkey
- Harun Ucmak: Department of Urology, Meram School of Medicine, Necmettin Erbakan University, Konya, Turkey
- Selim Soyturk: Department of Urology, Meram School of Medicine, Necmettin Erbakan University, Konya, Turkey
- Abdullatif Aydin: MRC Centre for Transplantation, Guy's Hospital, King's College London, London, UK; Department of Urology, King's College Hospital NHS Foundation Trust, London, UK
- Maha Al Fahim: Medical Education Department, Sheikh Khalifa Medical City, Abu Dhabi, UAE
- Selcuk Güven: Department of Urology, Meram School of Medicine, Necmettin Erbakan University, Konya, Turkey
- Kamran Ahmed: MRC Centre for Transplantation, Guy's Hospital, King's College London, London, UK; Khalifa University, Abu Dhabi, UAE

8. Vargas EP, Carrasco-Ribelles LA, Marin-Morales J, Molina CA, Raya MA. Feasibility of virtual reality and machine learning to assess personality traits in an organizational environment. Front Psychol 2024; 15:1342018. PMID: 39114589; PMCID: PMC11305179; DOI: 10.3389/fpsyg.2024.1342018.
Abstract
Introduction: Personality plays a crucial role in shaping an individual's interactions with the world. The Big Five personality traits form a widely used framework for describing people's psychological behaviours, and these traits predict how individuals behave within an organizational setting.
Methods: In this article, we introduce a virtual reality (VR) strategy for the relative scoring of an individual's personality, to evaluate the feasibility of predicting personality traits from implicit measures captured from users interacting in VR simulations of different organizational situations. Specifically, eye-tracking and decision-making patterns were used to classify individuals according to their level in each of the Big Five dimensions using statistical machine learning (ML) methods. The virtual environment was designed using an evidence-centered design approach.
Results: The dimensions were assessed using the NEO-FFI inventory. A random forest ML model provided 83% accuracy in predicting agreeableness. A k-nearest neighbour ML model provided 75%, 75%, and 77% accuracy in predicting openness, neuroticism, and conscientiousness, respectively. A support vector machine model provided 85% accuracy for predicting extraversion. These analyses indicated that the dimensions could be differentiated by eye-gaze patterns and behaviours during immersive VR.
Discussion: Eye-tracking measures contributed more significantly to this differentiation than the behavioural metrics. We have obtained promising results with our group of participants, but to ensure the robustness and generalizability of our findings, it is imperative to replicate the study with a considerably larger sample. This study demonstrates the potential of VR and ML to recognize personality traits.
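The per-trait model comparison the authors describe (random forest, k-nearest neighbours, and SVM, each predicting a participant's level on a trait from implicit VR measures) follows a standard supervised-learning pattern. A minimal sketch, assuming synthetic placeholder features rather than the study's eye-tracking and decision data, might look like this:

```python
# Hypothetical sketch of the classifier comparison described above:
# predicting a high/low personality-trait label from implicit VR
# measures. Data are synthetic; only the comparison logic mirrors
# the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(60, 5))     # e.g., gaze-fixation and decision features
y = rng.integers(0, 2, size=60)  # high/low level on one Big Five trait

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "k-nearest neighbours": KNeighborsClassifier(n_neighbors=5),
    "support vector machine": SVC(kernel="rbf"),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
```

In the study itself, a different model family won out for each trait (e.g., SVM for extraversion, random forest for agreeableness), which is exactly the kind of result this per-model comparison surfaces.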
Affiliation(s)
- Elena Parra Vargas: Laboratory of Immersive Neurotechnologies (LabLENI) – Institute Human-Tech, Valencia, Spain
- Javier Marin-Morales: Laboratory of Immersive Neurotechnologies (LabLENI) – Institute Human-Tech, Valencia, Spain
- Carla Ayuso Molina: Laboratory of Immersive Neurotechnologies (LabLENI) – Institute Human-Tech, Valencia, Spain
- Mariano Alcañiz Raya: Laboratory of Immersive Neurotechnologies (LabLENI) – Institute Human-Tech, Valencia, Spain

9. Fazakarley CA, Breen M, Thompson B, Leeson P, Williamson V. Beliefs, experiences and concerns of using artificial intelligence in healthcare: A qualitative synthesis. Digit Health 2024; 10:20552076241230075. PMID: 38347935; PMCID: PMC10860471; DOI: 10.1177/20552076241230075.
Abstract
Objective: Artificial intelligence (AI) is a developing field in the context of healthcare. As this technology continues to be implemented in patient care, there is a growing need to understand the thoughts and experiences of stakeholders in this area to ensure that future AI development and implementation are successful. The aim of this study was to conduct a literature search of qualitative studies exploring the opinions of stakeholders such as clinicians, patients, and technology experts in order to establish the most common themes and ideas presented in this research.
Methods: A literature search was conducted of existing qualitative research on stakeholder beliefs about the use of AI in healthcare. Twenty-one papers were selected and analysed, resulting in the development of four key themes relating to patient care, patient-doctor relationships, lack of education and resources, and the need for regulations.
Results: Overall, patients and healthcare workers are open to the use of AI in care and appear positive about its potential benefits. However, concerns were raised relating to the lack of empathy in interactions with AI tools and the potential risks that may arise from the data collection needed for AI use and development. Stakeholders in the healthcare, technology, and business sectors all stressed that there is a lack of appropriate education, funding, and guidelines surrounding AI, and that these concerns need to be addressed to ensure future implementation is safe and suitable for patient care.
Conclusion: Ultimately, the results of this study highlight the need for communication between stakeholders in order to address these concerns, mitigate potential risks, and maximise benefits for patients and clinicians alike. The results also identify a need for further qualitative research in this area to better understand stakeholder experiences as AI use continues to develop.
Affiliation(s)
- Paul Leeson: RDM Division of Cardiovascular Medicine, University of Oxford, John Radcliffe Hospital, Oxford, UK
- Victoria Williamson: King's Centre for Military Health Research, King's College London, London, UK

10. Zhang M, Scandiffio J, Younus S, Jeyakumar T, Karsan I, Charow R, Salhia M, Wiljer D. The Adoption of AI in Mental Health Care-Perspectives From Mental Health Professionals: Qualitative Descriptive Study. JMIR Form Res 2023; 7:e47847. PMID: 38060307; PMCID: PMC10739240; DOI: 10.2196/47847.
Abstract
BACKGROUND: Artificial intelligence (AI) is transforming the mental health care environment. AI tools are increasingly accessed by clients and service users. Mental health professionals must be prepared not only to use AI but also to have conversations about it when delivering care. Despite the potential for AI to enable more efficient, reliable, and higher-quality care delivery, there is a persistent gap among mental health professionals in the adoption of AI.
OBJECTIVE: A needs assessment was conducted among mental health professionals to (1) understand the learning needs of the workforce and their attitudes toward AI and (2) inform the development of AI education curricula and knowledge translation products.
METHODS: A qualitative descriptive approach was taken to explore the needs of mental health professionals regarding their adoption of AI through semistructured interviews. To achieve maximum variation sampling, mental health professionals (eg, psychiatrists, mental health nurses, educators, scientists, and social workers) in various settings across Ontario (eg, urban and rural, public and private sector, and clinical and research) were recruited.
RESULTS: A total of 20 individuals were recruited. Participants included practitioners (9/20, 45% social workers and 1/20, 5% mental health nurses), educator scientists (5/20, 25% with dual roles as professors/lecturers and researchers), and practitioner scientists (3/20, 15% with dual roles as researchers and psychiatrists and 2/20, 10% with dual roles as researchers and mental health nurses). Four major themes emerged: (1) fostering practice change and building self-efficacy to integrate AI into patient care; (2) promoting system-level change to accelerate the adoption of AI in mental health; (3) addressing the importance of organizational readiness as a catalyst for AI adoption; and (4) ensuring that mental health professionals have the education, knowledge, and skills to harness AI in optimizing patient care.
CONCLUSIONS: AI technologies are starting to emerge in mental health care. Although many digital tools, web-based services, and mobile apps are designed using AI algorithms, mental health professionals have generally been slower in the adoption of AI. As indicated by this study's findings, the implications are 3-fold. At the individual level, digital professionals must see the value in digitally compassionate tools that retain a humanistic approach to care. For mental health professionals, resistance toward AI adoption must be acknowledged through educational initiatives to raise awareness about the relevance, practicality, and benefits of AI. At the organizational level, digital professionals and leaders must collaborate on governance and funding structures to promote employee buy-in. At the societal level, digital and mental health professionals should collaborate in the creation of formal AI training programs specific to mental health to address knowledge gaps. This study promotes the design of relevant and sustainable education programs to support the adoption of AI within the mental health care sphere.
Affiliation(s)
- Tharshini Jeyakumar: University Health Network, Toronto, ON, Canada; Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
- Rebecca Charow: University Health Network, Toronto, ON, Canada; Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
- Mohammad Salhia: Rotman School of Management, University of Toronto, Toronto, ON, Canada
- David Wiljer: University Health Network, Toronto, ON, Canada; Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada; Department of Medicine, University of Toronto, Toronto, ON, Canada

11. Vo V, Chen G, Aquino YSJ, Carter SM, Do QN, Woode ME. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc Sci Med 2023; 338:116357. PMID: 37949020; DOI: 10.1016/j.socscimed.2023.116357.
Abstract
INTRODUCTION: Despite the proliferation of artificial intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically review the literature on attitudes towards the use of AI in healthcare from the perspectives of patients, the general public and health professionals to understand these issues from multiple viewpoints.
METHODOLOGY: A search for original research articles using qualitative, quantitative, and mixed methods published between 1 January 2001 and 24 August 2021 was conducted on six bibliographic databases. Data were extracted and classified into themes representing views on: (i) knowledge and familiarity of AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) the human-AI relationship.
RESULTS: The final search identified 7,490 records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public and health professionals generally had a positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and the burnout associated with acquiring AI-related skills. While patients had mixed opinions on whether healthcare workers would suffer job losses due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinician involvement in the development of AI was emphasised. To help implement AI in health care successfully, most participants envisioned that an investment in training and education campaigns was necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients exhibited a willingness to share anonymised data for AI development, there remained concerns about sharing data with insurance or technology companies. One key question under this theme was who should be held accountable in the case of adverse events arising from the use of AI.
CONCLUSIONS: While overall positivity persists in attitudes and preferences toward AI use in healthcare, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues and to look at the translation of legislation and guidelines into practice to ensure fairness, accountability, transparency, and ethics in AI.
Affiliation(s)
- Vinh Vo: Centre for Health Economics, Monash University, Australia
- Gang Chen: Centre for Health Economics, Monash University, Australia
- Yves Saint James Aquino: Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Stacy M Carter: Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Quynh Nga Do: Department of Economics, Monash University, Australia
- Maame Esi Woode: Centre for Health Economics, Monash University, Australia; Monash Data Futures Research Institute, Australia

12. Li LT, Haley LC, Boyd AK, Bernstam EV. Technical/Algorithm, Stakeholder, and Society (TASS) barriers to the application of artificial intelligence in medicine: A systematic review. J Biomed Inform 2023; 147:104531. PMID: 37884177; DOI: 10.1016/j.jbi.2023.104531.
Abstract
INTRODUCTION: The use of artificial intelligence (AI), particularly machine learning and predictive analytics, has shown great promise in health care. Despite its strong potential, there has been limited use in health care settings. In this systematic review, we aim to determine the main barriers to successful implementation of AI in healthcare and discuss potential ways to overcome these challenges.
METHODS: We conducted a literature search in PubMed (1/1/2001-1/1/2023). The search was restricted to publications in the English language and human study subjects. We excluded articles that did not discuss AI, machine learning, predictive analytics, and barriers to the use of these techniques in health care. Using grounded theory methodology, we abstracted concepts to identify major barriers to AI use in medicine.
RESULTS: We identified a total of 2,382 articles. After reviewing the 306 included papers, we developed 19 major themes, which we categorized into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). These themes included: Lack of Explainability, Need for Validation Protocols, Need for Standards for Interoperability, Need for Reporting Guidelines, Need for Standardization of Performance Metrics, Lack of Plan for Updating Algorithm, Job Loss, Skills Loss, Workflow Challenges, Loss of Patient Autonomy and Consent, Disturbing the Patient-Clinician Relationship, Lack of Trust in AI, Logistical Challenges, Lack of Strategic Plan, Lack of Cost-Effectiveness Analysis and Proof of Efficacy, Privacy, Liability, Bias and Social Justice, and Education.
CONCLUSION: We identified 19 major barriers to the use of AI in healthcare and categorized them into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). Future studies should expand on barriers in pediatric care and focus on developing clearly defined protocols to overcome these barriers.
Affiliation(s)
- Linda T Li: Department of Surgery, Division of Pediatric Surgery, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY 10029, United States; McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States
- Lauren C Haley: McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
- Alexandra K Boyd: McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
- Elmer V Bernstam: McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States; McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States

13. Timmons AC, Duong JB, Fiallo NS, Lee T, Vo HPQ, Ahle MW, Comer JS, Brewer LC, Frazier SL, Chaspari T. A Call to Action on Assessing and Mitigating Bias in Artificial Intelligence Applications for Mental Health. Perspect Psychol Sci 2023; 18:1062-1096. PMID: 36490369; PMCID: PMC10250563; DOI: 10.1177/17456916221134490.
Abstract
Advances in computer science and data-analytic methods are driving a new era in mental health research and application. Artificial intelligence (AI) technologies hold the potential to enhance the assessment, diagnosis, and treatment of people experiencing mental health problems and to increase the reach and impact of mental health care. However, AI applications will not mitigate mental health disparities if they are built from historical data that reflect underlying social biases and inequities. AI models biased against sensitive classes could reinforce and even perpetuate existing inequities if these models create legacies that differentially impact who is diagnosed and treated, and how effectively. The current article reviews the health-equity implications of applying AI to mental health problems, outlines state-of-the-art methods for assessing and mitigating algorithmic bias, and presents a call to action to guide the development of fair-aware AI in psychological science.
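As a concrete illustration of the kind of bias assessment this article calls for, the sketch below audits a hypothetical mental health screening classifier for gaps across a sensitive attribute. All data are synthetic, and the two metrics shown (the demographic parity difference and the true-positive-rate gap used in equal-opportunity analyses) are standard examples of such fairness measures, not the article's specific procedure.

```python
# Hypothetical sketch: auditing group fairness of a screening
# classifier. Data are synthetic; the metrics are standard
# examples of algorithmic-bias assessment.
import numpy as np

rng = np.random.default_rng(7)
y_true = rng.integers(0, 2, size=1000)  # true diagnosis label
y_pred = rng.integers(0, 2, size=1000)  # model prediction
group = rng.integers(0, 2, size=1000)   # sensitive attribute (0/1)

# Demographic parity: difference in positive-prediction rates between groups
def positive_rate(g):
    return y_pred[group == g].mean()

dp_gap = abs(positive_rate(0) - positive_rate(1))

# Equal opportunity: difference in true positive rates between groups
def tpr(g):
    mask = (group == g) & (y_true == 1)
    return y_pred[mask].mean()

tpr_gap = abs(tpr(0) - tpr(1))
print(f"Demographic parity gap: {dp_gap:.3f}")
print(f"True positive rate gap: {tpr_gap:.3f}")
```

Large gaps on either metric would flag the model for the mitigation steps (reweighting, threshold adjustment, or retraining on more representative data) that the article reviews.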
Affiliation(s)
- Adela C. Timmons: University of Texas at Austin Institute for Mental Health Research; Colliga Apps Corporation
- LaPrincess C. Brewer: Department of Cardiovascular Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota, United States; Center for Health Equity and Community Engagement Research, Mayo Clinic, Rochester, Minnesota, United States

14. Blease C, Kharko A, Bernstein M, Bradley C, Houston M, Walsh I, Mandl KD. Computerization of the Work of General Practitioners: Mixed Methods Survey of Final-Year Medical Students in Ireland. JMIR Med Educ 2023; 9:e42639. PMID: 36939809; PMCID: PMC10131917; DOI: 10.2196/42639.
Abstract
BACKGROUND: The potential for digital health technologies, including machine learning (ML)-enabled tools, to disrupt the medical profession is the subject of ongoing debate within biomedical informatics.
OBJECTIVE: We aimed to describe the opinions of final-year medical students in Ireland regarding the potential of future technology to replace or work alongside general practitioners (GPs) in performing key tasks.
METHODS: Between March 2019 and April 2020, using a convenience sample, we conducted a mixed methods paper-based survey of final-year medical students. The survey was administered at 4 out of 7 medical schools in Ireland, across each of the 4 provinces in the country. Quantitative data were analyzed using descriptive statistics and nonparametric tests. We used thematic content analysis to investigate free-text responses.
RESULTS: In total, 43.1% (252/585) of the final-year students at 3 medical schools responded; data collection at 1 medical school was terminated due to disruptions associated with the COVID-19 pandemic. With regard to forecasting the potential impact of artificial intelligence (AI)/ML on primary care 25 years from now, around half (127/246, 51.6%) of all surveyed students believed the work of GPs will change minimally or not at all. Notably, students who did not intend to enter primary care predicted that AI/ML will have a great impact on the work of GPs.
CONCLUSIONS: We caution that without a firm curricular foundation on advances in AI/ML, students may rely on extreme perspectives involving self-preserving optimism biases that demote the impact of advances in technology on primary care on the one hand and technohype on the other. Ultimately, these biases may lead to negative consequences in health care. Improvements in medical education could help prepare tomorrow's doctors to optimize and lead the ethical and evidence-based implementation of AI/ML-enabled tools in medicine for enhancing the care of tomorrow's patients.
Affiliation(s)
- Charlotte Blease: General Medicine and Primary Care, Beth Israel Deaconess Medical Center, Boston, MA, United States
- Anna Kharko: Healthcare Sciences and e-Health, Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden; School of Psychology, University of Plymouth, Plymouth, United Kingdom
- Michael Bernstein: Department of Behavioral and Social Sciences, School of Public Health, Brown University, Providence, RI, United States; Department of Diagnostic Imaging, Warren Alpert Medical School, Brown University, Providence, RI, United States
- Colin Bradley: School of Medicine, University College Cork, Cork, Ireland
- Muiris Houston: School of Medicine, National University of Ireland Galway, Galway, Ireland; School of Medicine, Trinity College Dublin, Dublin, Ireland
- Ian Walsh: Dentistry and Biomedical Sciences, School of Medicine, Queen's University Belfast, Belfast, Northern Ireland, UK
- Kenneth D Mandl: Computational Health Informatics Program, Boston Children's Hospital, Boston, MA, United States

15. Klimova B, Pikhart M, Kacetl J. Ethical issues of the use of AI-driven mobile apps for education. Front Public Health 2023; 10:1118116. PMID: 36711343; PMCID: PMC9874223; DOI: 10.3389/fpubh.2022.1118116.
Abstract
Nowadays, artificial intelligence (AI) affects our lives every single day and brings with it both benefits and risks for all spheres of human activity, including education. Among these risks, the most striking seems to be the ethical issues of the use of AI, such as misuse of private data or surveillance of people's lives. Therefore, the aim of this systematic review is to describe the key ethical issues related to the use of AI-driven mobile apps in education, as well as to list some of the implications based on the identified studies associated with this research topic. The methodology of this review study was based on the PRISMA guidelines for systematic reviews and meta-analyses. The results indicate four key ethical principles that should be followed, among which the principle of algorithmovigilance should be considered in order to monitor, understand and prevent the adverse effects of algorithms in the use of AI in education. Furthermore, all stakeholders should be identified, along with their joint engagement and collaboration, to guarantee the ethical use of AI in education. Thus, the contribution of this study lies in emphasizing the need for joint cooperation and research by all stakeholders when using AI-driven mobile technologies in education, with special attention to ethical issues, since review-based research on this topic remains scarce and neglected.

16. Blease C, Kharko A, Bernstein M, Bradley C, Houston M, Walsh I, Hägglund M, DesRoches C, Mandl KD. Machine learning in medical education: a survey of the experiences and opinions of medical students in Ireland. BMJ Health Care Inform 2022; 29:bmjhci-2021-100480. PMID: 35105606; PMCID: PMC8808371; DOI: 10.1136/bmjhci-2021-100480.
Affiliation(s)
- Charlotte Blease: Division of General Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Anna Kharko: Faculty of Health and Human Sciences, University of Plymouth, Plymouth, UK; Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden
- Michael Bernstein: School of Public Health, Brown University, Providence, Rhode Island, USA
- Colin Bradley: School of Medicine, University College Cork, Cork, Ireland
- Muiris Houston: School of Medicine, National University of Ireland Galway, Galway, Ireland; School of Medicine, Trinity College Dublin, Dublin, Ireland
- Ian Walsh: School of Medicine, Dentistry and Biomedical Sciences, Queen's University Belfast, Belfast, Northern Ireland, UK
- Maria Hägglund: Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden
- Catherine DesRoches: Division of General Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA
- Kenneth D Mandl: Harvard Medical School, Boston, Massachusetts, USA; Computational Health Informatics Program, Boston Children's Hospital, Boston, Massachusetts, USA