1
Moylan K, Doherty K. Expert and Interdisciplinary Analysis of AI-Driven Chatbots for Mental Health Support: Mixed Methods Study. J Med Internet Res 2025; 27:e67114. PMID: 40279575; PMCID: PMC12064976; DOI: 10.2196/67114.
Abstract
BACKGROUND Recent years have seen an immense surge in the creation and use of chatbots as social and mental health companions. Aiming to provide empathic responses in support of the delivery of personalized care, these tools are often presented as offering immense potential. However, it is also essential that we understand the risks of their deployment, including their potential adverse impacts on the mental health of users, particularly those most at risk. OBJECTIVE This study aims to assess the ethical and pragmatic clinical implications of using chatbots that claim to aid mental health. While several studies within human-computer interaction and related fields have examined users' perceptions of such systems, few have engaged mental health professionals in critical analysis of their conduct as mental health support tools. METHODS This study engaged 8 interdisciplinary mental health professionals (from psychology and psychotherapy to social care and crisis volunteer work) in a mixed methods, hands-on analysis of 2 popular mental health-related chatbots' data handling, interface design, and responses. This analysis was carried out through profession-specific tasks with each chatbot, eliciting participants' perceptions through both the Trust in Automation scale and semistructured interviews. The chatbots' implications for mental health support were then evaluated through thematic analysis and a 2-tailed, paired t test. RESULTS Qualitative analysis revealed emphatic initial impressions among mental health professionals that chatbot responses were likely to produce harm, exhibited a generic mode of care, and risked user dependence and manipulation, given the central role of trust in the therapeutic relationship. Trust scores from the Trust in Automation scale, while exhibiting no statistically significant difference between the chatbots (t6=-0.76; P=.48), indicated medium to low trust in each chatbot. These findings highlight that the design and development of artificial intelligence (AI)-driven mental health-related solutions must be undertaken with utmost caution. The mental health professionals in this study collectively resisted these chatbots and made clear that AI-driven chatbots used for mental health by at-risk users invite several potential and specific harms. CONCLUSIONS Through this work, we contribute insights into mental health professionals' perspectives on the design of chatbots used for mental health and underscore the necessity of ongoing critical assessment and iterative refinement to maximize the benefits and minimize the risks of integrating AI into mental health support.
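For readers unfamiliar with the statistic reported above, the following is a minimal sketch of a two-tailed, paired t test on Trust in Automation totals, as it can be run with SciPy. The scores are invented placeholders, not the study's data; note that 7 complete rater pairs yield the 6 degrees of freedom seen in the reported t6.

```python
# Minimal sketch of a two-tailed, paired t test (cf. the reported t6=-0.76, P=.48).
# The Trust in Automation totals below are hypothetical placeholders, not study data.
from scipy import stats

trust_chatbot_a = [41, 38, 45, 36, 40, 43, 39]
trust_chatbot_b = [44, 37, 46, 40, 41, 45, 38]

t_stat, p_value = stats.ttest_rel(trust_chatbot_a, trust_chatbot_b)  # two-tailed by default
print(f"t({len(trust_chatbot_a) - 1}) = {t_stat:.2f}, P = {p_value:.2f}")
```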
Affiliation(s)
- Kayley Moylan: School of Information and Communication Studies, University College Dublin, Dublin, Ireland
- Kevin Doherty: School of Information and Communication Studies, University College Dublin, Dublin, Ireland

2
Teo JZT, Yoong SQ, Chan YX, Jiang Y. The Effects of Commercial Conversational Agents on Older Adults' Mental Health: A Scoping Review. J Am Med Dir Assoc 2025; 26:105523. PMID: 40157394; DOI: 10.1016/j.jamda.2025.105523.
Abstract
OBJECTIVES With increasing life expectancy, more older adults are experiencing poor mental health because of the significant life transitions they face. Commercial conversational agents (CAs) are emerging devices that can potentially support older adults' mental well-being. However, limited literature has evaluated the influence of commercial CAs on older adults' mental health. This study aims to examine what is known about the effects of commercial CAs on older adults' mental health and the features associated with those effects. DESIGN This scoping review was conducted in accordance with Arksey and O'Malley's framework. SETTING AND PARTICIPANTS The review primarily focused on community-dwelling older adults aged 60 and above who used any commercial CA. METHODS Quantitative, qualitative, and mixed-method peer-reviewed studies and dissertations were included. Eleven databases were searched for relevant articles published from January 1, 2010, until April 9, 2024. Data extracted included the author(s), year, country, objective, population details, eligibility criteria, study design, commercial CA type, and findings related to the research questions. Inductive basic content analysis was used for data synthesis. RESULTS Twenty-nine articles from 28 studies (n = 1017 older adults) were included. Five categories were synthesized: social wellness, emotional reactions, cognitive stimulation, autonomy, and depression. Common features affecting older adults' mental health were the CAs' conversational capacity and anthropomorphism, voice-activated functions, music, calling and other functions, and technological limitations. Positive effects on these mental health categories outnumbered adverse ones. CONCLUSION AND IMPLICATIONS Commercial CAs can potentially support older adults' mental health, but the evidence is still very preliminary. Their effects must be verified in randomized controlled trials using objective and validated tools and through mixed-method studies.
Affiliation(s)
- Jolene Zi Tong Teo: Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Si Qi Yoong: Duke-NUS Medical School, Singapore
- Yi Xuan Chan: Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ying Jiang: Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore

3
Bickmann P, Froböse I, Grieben C. [Predetermined breaking points and recommendations for action in development of digital prevention services in public health: a practical example]. Gesundheitswesen 2025. PMID: 40185147; DOI: 10.1055/a-2549-0446.
Abstract
Digital prevention is essential for a sustainable healthcare system in Germany, with health insurance companies playing a key role. Despite promising approaches in research, studies indicate that hybrid and digital prevention solutions are often short-lived, pointing to systematic challenges. This paper presents the development of a prevention app commissioned by a German health insurance company. The app promotes a health-oriented lifestyle through an integrated chatbot. The scientifically accompanied development process highlights the importance of user participation and underscores the challenges of implementing digital prevention solutions for the long term, including project communication, scientific monitoring, data protection requirements, and the technical infrastructure of the health insurance company. Practical insights yield recommendations for development, and a structural model for future projects is proposed. It focuses on the effective integration of expert knowledge from various fields, such as prevention and software development. This collaboration is more crucial than ever for the future use of AI in health prevention.
Affiliation(s)
- Peter Bickmann: Institut für Bewegungstherapie und bewegungsorientierte Prävention und Rehabilitation, Deutsche Sporthochschule Köln, Köln, Germany
- Ingo Froböse: Institut für Bewegungstherapie und bewegungsorientierte Prävention und Rehabilitation, Deutsche Sporthochschule Köln, Köln, Germany
- Christopher Grieben: Fachbereich Personal/Gesundheit/Soziales, Fachhochschule des Mittelstands GmbH, Bielefeld, Germany

4
Baek G, Cha C, Han JH. AI Chatbots for Psychological Health for Health Professionals: Scoping Review. JMIR Hum Factors 2025; 12:e67682. PMID: 40106346; PMCID: PMC11939020; DOI: 10.2196/67682.
Abstract
Background Health professionals face significant psychological burdens, including burnout, anxiety, and depression, which can negatively impact their well-being and patient care. Traditional psychological health interventions often encounter limitations such as a lack of accessibility and privacy. Artificial intelligence (AI) chatbots are being explored as potential solutions to these challenges, offering accessible and immediate support. It is therefore necessary to systematically evaluate the characteristics and effectiveness of AI chatbots designed specifically for health professionals. Objective This scoping review aims to evaluate the existing literature on the use of AI chatbots for psychological health support among health professionals. Methods Following Arksey and O'Malley's framework, a comprehensive literature search was conducted across eight databases, covering studies published before 2024 and supplemented by backward and forward citation tracking and manual searching of the included studies. Studies were screened for relevance based on inclusion and exclusion criteria; of the 2465 studies retrieved, 10 met the criteria for review. Results Among the 10 studies, six chatbots were delivered via mobile platforms and four via web-based platforms, all enabling one-on-one interactions. Natural language processing algorithms were used in six studies, and cognitive behavioral therapy techniques were applied to psychological health in four studies. Usability was evaluated in six studies through participant feedback and engagement metrics. Improvements in anxiety, depression, and burnout were observed in four studies, although one reported an increase in depressive symptoms. Conclusions AI chatbots show potential as tools to support the psychological health of health professionals by offering personalized and accessible interventions. Nonetheless, further research is required to establish standardized protocols and validate the effectiveness of these interventions. Future studies should focus on refining chatbot designs and assessing their impact on diverse health professionals.
Affiliation(s)
- Gumhee Baek: College of Nursing, Ewha Womans University, Seoul, Republic of Korea; Graduate Program in System Health Science and Engineering, Ewha Womans University, Seoul, Republic of Korea
- Chiyoung Cha: College of Nursing, Ewha Womans University, Seoul, Republic of Korea; Ewha Research Institute of Nursing Science, Ewha Womans University, Seoul, Republic of Korea
- Jin-Hui Han: College of Nursing, Ewha Womans University, Seoul, Republic of Korea

5
Lim B, Lirios G, Sakalkale A, Satheakeerthy S, Hayes D, Yeung JMC. Assessing the efficacy of artificial intelligence to provide peri-operative information for patients with a stoma. ANZ J Surg 2025; 95:464-496. PMID: 39620607; DOI: 10.1111/ans.19337.
Abstract
BACKGROUND Stomas present significant lifestyle and psychological challenges for patients, requiring comprehensive education and support. Current educational methods are limited in offering relevant information to patients, highlighting a potential role for artificial intelligence (AI). This study examined the utility of AI in enhancing stoma therapy management following colorectal surgery. MATERIAL AND METHODS We compared the efficacy of four prominent large language models (LLMs)-OpenAI's ChatGPT-3.5 and ChatGPT-4.0, Google's Gemini, and Bing's CoPilot-against a series of metrics to evaluate their suitability as supplementary clinical tools. Through qualitative and quantitative analyses, including readability scores (Flesch-Kincaid, Flesch Reading Ease, and Coleman-Liau index) and reliability assessments (Likert scale, DISCERN score, and QAMAI tool), the study aimed to assess the appropriateness of LLM-generated advice for patients managing stomas. RESULTS There were varying degrees of readability and reliability across the evaluated models, with CoPilot and ChatGPT-4 demonstrating superior performance in several key metrics such as readability and comprehensiveness. However, the study underscores that LLM technology is still at an early stage for clinical applications. All responses required a high school to college reading level to be comprehended comfortably. While the LLMs addressed users' questions directly, their failure to incorporate patient-specific factors such as past medical history produced broad, generic responses rather than tailored advice. CONCLUSION The complexity of individual patient conditions can challenge AI systems. The use of LLMs in clinical settings holds promise for improving patient education and stoma management support but requires careful consideration of the models' capabilities and the context of their use.
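The readability indices named above have closed-form definitions; the sketch below computes them from their standard published formulas. The syllable counter is a crude heuristic, so real analyses typically use a dedicated library such as textstat; the sample text is invented.

```python
# Standard formulas for the readability indices named above; the syllable
# counter is a rough vowel-group heuristic (a library such as textstat is
# preferable in practice). The sample text is invented.
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    letters = sum(len(w) for w in words)

    wps = len(words) / sentences        # words per sentence
    spw = syllables / len(words)        # syllables per word
    L = letters / len(words) * 100      # letters per 100 words
    S = sentences / len(words) * 100    # sentences per 100 words

    return {
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        "coleman_liau_index": 0.0588 * L - 0.296 * S - 15.8,
    }

print(readability("Empty the stoma bag when it is one third full. "
                  "Check the skin around the stoma daily."))
```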
Affiliation(s)
- Bryan Lim: Department of Colorectal Surgery, Western Health, Melbourne, Australia
- Gabriel Lirios: Department of Colorectal Surgery, Western Health, Melbourne, Australia
- Aditya Sakalkale: Department of Surgery, Western Precinct, University of Melbourne, Melbourne, Australia
- Diana Hayes: Department of Colorectal Surgery, Western Health, Melbourne, Australia
- Justin M C Yeung: Department of Colorectal Surgery, Western Health, Melbourne, Australia; Department of Surgery, Western Precinct, University of Melbourne, Melbourne, Australia

6
Lenton-Brym AP, Collins A, Lane J, Busso C, Ouyang J, Fitzpatrick S, Kuo JR, Monson CM. Using machine learning to increase access to and engagement with trauma-focused interventions for posttraumatic stress disorder. Br J Clin Psychol 2025; 64:125-136. PMID: 38715445; PMCID: PMC11797152; DOI: 10.1111/bjc.12468.
Abstract
BACKGROUND Post-traumatic stress disorder (PTSD) poses a global public health challenge. Evidence-based psychotherapies (EBPs) for PTSD reduce symptoms and improve functioning (Forbes et al., Guilford Press, 2020, 3). However, a number of barriers to access and engagement with these interventions prevail. As a result, the use of EBPs in community settings remains disappointingly low (Charney et al., Psychological Trauma: Theory, Research, Practice, and Policy, 11, 2019, 793; Richards et al., Community Mental Health Journal, 53, 2017, 215), and not all patients who receive an EBP for PTSD benefit optimally (Asmundson et al., Cognitive Behaviour Therapy, 48, 2019, 1). Advancements in artificial intelligence (AI) have introduced new possibilities for increasing access to and quality of mental health interventions. AIMS The present paper reviews key barriers to accessing and engaging in EBPs for PTSD, discusses current applications of AI in PTSD treatment and provides recommendations for future AI integrations aimed at reducing barriers to access and engagement. DISCUSSION We propose that AI may be utilized to (1) assess treatment fidelity; (2) elucidate novel predictors of treatment dropout and outcomes; and (3) facilitate patient engagement with the tasks of therapy, including therapy practice. Potential avenues for technological advancements are also considered.
Affiliation(s)
- Alexis Collins: Nellie Health; Toronto Metropolitan University, Toronto, Ontario, Canada (present address: Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada)
- Jeanine Lane: Toronto Metropolitan University, Toronto, Ontario, Canada
- Janice R. Kuo: Nellie Health; Palo Alto University, Palo Alto, California, USA

7
Park JK, Singh VK, Wisniewski P. Current Landscape and Future Directions for Mental Health Conversational Agents for Youth: Scoping Review. JMIR Med Inform 2025; 13:e62758. PMID: 40053735; PMCID: PMC11909484; DOI: 10.2196/62758.
Abstract
BACKGROUND Conversational agents (CAs; chatbots) are systems able to interact with users using natural human dialogue. They are increasingly used to support interactive knowledge discovery on sensitive topics such as mental health. While much of the research on CAs for mental health has focused on adult populations, the insights from such research may not apply to CAs for youth. OBJECTIVE This study aimed to comprehensively evaluate the state-of-the-art research on mental health CAs for youth. METHODS Following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, we identified 39 peer-reviewed studies specific to mental health CAs designed for youth across 4 databases: ProQuest, Scopus, Web of Science, and PubMed. We conducted a scoping review of the literature to evaluate the characteristics of research on mental health CAs designed for youth, their design and computational considerations, and the evaluation outcomes reported. RESULTS We found that many mental health CAs (11/39, 28%) were designed as older peers to provide therapeutic or educational content to promote youth mental well-being. All CAs were designed based on expert knowledge, with a few incorporating input from youth. The technical maturity of CAs was in its infancy, focusing on rule-based prototypes that deliver prewritten content, with limited safety features for responding to imminent risk. Research findings suggest that while youth appreciate the 24/7 availability of friendly or empathetic conversation on sensitive topics with CAs, they found the content provided by CAs to be limited. Finally, most (35/39, 90%) of the reviewed studies did not address the ethical aspects of mental health CAs, while youth were concerned about the privacy and confidentiality of their sensitive conversation data. CONCLUSIONS Our study highlights the need for researchers to continue to work together to align evidence-based research on mental health CAs for youth with lessons learned on how best to deliver these technologies to youth. Our review shows that mental health CAs need further development and evaluation. The new trend of large language model-based CAs can make such technologies more feasible, but the privacy and safety of these systems should be prioritized. Although preliminary evidence shows positive trends in mental health CAs, long-term evaluative research with larger sample sizes and robust designs is needed to validate their efficacy. More importantly, collaboration between youth and clinical experts is essential from the early design stages through to the final evaluation to develop safe, effective, and youth-centered mental health chatbots. Finally, best practices for risk mitigation and the ethical development of CAs with and for youth are needed to promote their mental well-being.
Affiliation(s)
- Jinkyung Katie Park: Human-Centered Computing Division, School of Computing, Clemson University, Clemson, SC, United States
- Vivek K Singh: Department of Library and Information, School of Communication and Information, Rutgers University, New Brunswick, NJ, United States
- Pamela Wisniewski: Department of Computer Science, School of Engineering, Vanderbilt University, Nashville, TN, United States

8
Rahsepar Meadi M, Sillekens T, Metselaar S, van Balkom A, Bernstein J, Batelaan N. Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review. JMIR Ment Health 2025; 12:e60432. PMID: 39983102; PMCID: PMC11890142; DOI: 10.2196/60432.
Abstract
BACKGROUND Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns. OBJECTIVE We aimed to provide a comprehensive overview of ethical considerations surrounding CAI as a therapist for individuals with mental health issues. METHODS We conducted a systematic search across PubMed, Embase, APA PsycINFO, Web of Science, Scopus, the Philosopher's Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist for individuals with mental health issues. We added additional articles through snowball searching. We included articles in English or Dutch. All types of articles were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on the expected considerations and revised and complemented during the charting process. The ethical challenges were divided into themes. When a concern occurred in more than 2 articles, we identified it as a distinct theme. RESULTS We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%) followed by commentaries (n=17, 16.8%). The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles); the most common topics within this theme were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including topics such as the effects of "black box" algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including themes such as health inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns for health care workers' jobs (n=16, 15.8%). Other themes were discussed in 9.9% (n=10) of the identified articles. CONCLUSIONS Our scoping review has comprehensively covered ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders' perspectives are insufficiently represented, this study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison to human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for responsible CAI use in mental health care.
Affiliation(s)
- Mehrdad Rahsepar Meadi: Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Department of Ethics, Law, & Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Tomas Sillekens: GGZ Centraal Mental Health Care, Amersfoort, The Netherlands
- Suzanne Metselaar: Department of Ethics, Law, & Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Anton van Balkom: Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Justin Bernstein: Department of Philosophy, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Neeltje Batelaan: Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands

9
Rządeczka M, Sterna A, Stolińska J, Kaczyńska P, Moskalewicz M. The Efficacy of Conversational AI in Rectifying the Theory-of-Mind and Autonomy Biases: Comparative Analysis. JMIR Ment Health 2025; 12:e64396. PMID: 39919295; PMCID: PMC11845887; DOI: 10.2196/64396.
Abstract
BACKGROUND The increasing deployment of conversational artificial intelligence (AI) in mental health interventions necessitates an evaluation of their efficacy in rectifying cognitive biases and recognizing affect in human-AI interactions. These biases are particularly relevant in mental health contexts as they can exacerbate conditions such as depression and anxiety by reinforcing maladaptive thought patterns or unrealistic expectations in human-AI interactions. OBJECTIVE This study aimed to assess the effectiveness of therapeutic chatbots (Wysa and Youper) versus general-purpose language models (GPT-3.5, GPT-4, and Gemini Pro) in identifying and rectifying cognitive biases and recognizing affect in user interactions. METHODS This study used constructed case scenarios simulating typical user-bot interactions to examine how effectively chatbots address selected cognitive biases. The cognitive biases assessed included theory-of-mind biases (anthropomorphism, overtrust, and attribution) and autonomy biases (illusion of control, fundamental attribution error, and just-world hypothesis). Each chatbot response was evaluated based on accuracy, therapeutic quality, and adherence to cognitive behavioral therapy principles using an ordinal scale to ensure consistency in scoring. To enhance reliability, responses underwent a double review process by 2 cognitive scientists, followed by a secondary review by a clinical psychologist specializing in cognitive behavioral therapy, ensuring a robust assessment across interdisciplinary perspectives. RESULTS This study revealed that general-purpose chatbots outperformed therapeutic chatbots in rectifying cognitive biases, particularly in overtrust bias, fundamental attribution error, and just-world hypothesis. GPT-4 achieved the highest scores across all biases, whereas the therapeutic bot Wysa scored the lowest. Notably, general-purpose bots showed more consistent accuracy and adaptability in recognizing and addressing bias-related cues across different contexts, suggesting a broader flexibility in handling complex cognitive patterns. In addition, in affect recognition tasks, general-purpose chatbots not only excelled but also demonstrated quicker adaptation to subtle emotional nuances, outperforming therapeutic bots in 67% (4/6) of the tested biases. CONCLUSIONS This study shows that, while therapeutic chatbots hold promise for mental health support and cognitive bias intervention, their current capabilities are limited. Addressing cognitive biases in AI-human interactions requires systems that can both rectify and analyze biases as integral to human cognition, promoting precision and simulating empathy. The findings reveal the need for improved simulated emotional intelligence in chatbot design to provide adaptive, personalized responses that reduce overreliance and encourage independent coping skills. Future research should focus on enhancing affective response mechanisms and addressing ethical concerns such as bias mitigation and data privacy to ensure safe, effective AI-based mental health support.
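The double review process described above is often quantified with an inter-rater agreement statistic. The abstract does not name one, so the sketch below shows one standard option, a quadratically weighted Cohen kappa suited to ordinal quality scores; all ratings are invented.

```python
# One standard way to quantify agreement between two reviewers scoring
# chatbot responses on an ordinal scale. The abstract does not specify a
# statistic; this weighted kappa and all ratings here are illustrative only.
from sklearn.metrics import cohen_kappa_score

reviewer_1 = [3, 2, 4, 1, 3, 2, 4, 3]  # ordinal response-quality scores
reviewer_2 = [3, 3, 4, 1, 2, 2, 4, 3]

kappa = cohen_kappa_score(reviewer_1, reviewer_2, weights="quadratic")
print(f"Weighted kappa = {kappa:.2f}")
```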
Affiliation(s)
- Marcin Rządeczka: IDEAS NCBR, Warsaw, Poland; Institute of Philosophy, Maria Curie-Skłodowska University, Lublin, Poland
- Anna Sterna: Philosophy of Mental Health Unit, Department of Social Sciences and the Humanities, Poznan University of Medical Sciences, Poznań, Poland
- Paulina Kaczyńska: Faculty of Mathematics, Informatics, and Mechanics, University of Warsaw, Warsaw, Poland
- Marcin Moskalewicz: IDEAS NCBR, Warsaw, Poland; Institute of Philosophy, Maria Curie-Skłodowska University, Lublin, Poland; Philosophy of Mental Health Unit, Department of Social Sciences and the Humanities, Poznan University of Medical Sciences, Poznań, Poland

10
Han Q, Zhao C. Unleashing the potential of chatbots in mental health: bibliometric analysis. Front Psychiatry 2025; 16:1494355. PMID: 39967582; PMCID: PMC11832554; DOI: 10.3389/fpsyt.2025.1494355.
Abstract
Background The proliferation of chatbots in the digital mental health sector is gaining momentum, offering a promising solution to the pressing shortage of mental health professionals. By providing accessible and convenient mental health services and support, chatbots are poised to become a primary technological intervention in bridging the gap between mental health needs and available resources. Objective This study undertakes a thorough bibliometric analysis and discussion of the applications of chatbots in mental health, with the objective of elucidating the scientific patterns that emerge at the intersection of chatbot technology and mental health care on a global scale. Methods The bibliometric tools Biblioshiny and VOSviewer were used to conduct a comprehensive analysis of 261 scientific articles published in the Web of Science Core Collection between 2015 and 2024. The distribution of publications is analyzed to measure the productivity of countries, institutions, and sources. Scientific collaboration networks are generated to analyze influence and communication between countries and institutions. Research topics and trends are identified using a keyword co-occurrence network. Results Over the last decade, research on the utilization of chatbots in mental health has increased steadily at an annual rate of 46.19%. The United States has made significant contributions to the growth of publications, accounting for 27.97% of the total research output with 2452 citations. England ranked second in terms of publications and citations, followed by Australia, China, and France. The National Center for Scientific Research in France ranked first among all institutions, followed by Imperial College London and the University of Zurich. The number of articles published in the Journal of Medical Internet Research was exceptionally high, accounting for 12.26% of all articles, and JMIR Mental Health was the most influential source in terms of average citations per article. Collaboration among universities in the USA, United Kingdom, Switzerland, and Singapore was notably strong. The keyword co-occurrence network highlights the prominent techniques in this multidisciplinary area and reveals 5 research topics with significant overlap between clusters. High-frequency terms such as "ChatGPT", "machine learning", and "large language models" underscore the current state of research, highlighting the cutting-edge advancements and frontiers in this field. Conclusions This study provides an in-depth analysis of the most prominent countries, institutions, publications, collaboration patterns, and research topics associated with the utilization of chatbots in mental health over the last decade. It offers insights to mental health professionals without an AI background and to individuals interested in the development of mental health chatbots. The findings suggest that chatbots hold a significant role in promoting mental well-being and exhibit considerable potential for demonstrating empathy, curiosity, understanding, and collaborative capabilities with users.
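A keyword co-occurrence network of the kind this study builds with VOSviewer can be sketched in a few lines: keywords listed on the same paper become linked nodes, and edge weights count how often each pair co-occurs. The keyword lists below are illustrative, not the study's 261-article corpus.

```python
# Sketch of building a keyword co-occurrence network: keywords appearing on
# the same paper become linked nodes; edge weight counts their co-occurrences.
# The keyword lists are invented examples, not the study's data.
from itertools import combinations
from collections import Counter

papers = [
    ["chatbot", "mental health", "machine learning"],
    ["chatbot", "depression", "ChatGPT"],
    ["mental health", "ChatGPT", "large language models"],
]

edges = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        edges[(a, b)] += 1

for (a, b), weight in edges.most_common():
    print(f"{a} -- {b}: {weight}")
```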
Affiliation(s)
- Qing Han: Department of Information, Zhejiang Chinese Medical University, Hangzhou, China
- Chenyang Zhao: Department of Humanities, Zhejiang Chinese Medical University, Hangzhou, China

11
Kleine AK, Kokje E, Hummelsberger P, Lermer E, Schaffernak I, Gaube S. AI-enabled clinical decision support tools for mental healthcare: A product review. Artif Intell Med 2025; 160:103052. PMID: 39662140; DOI: 10.1016/j.artmed.2024.103052.
Abstract
The review seeks to promote transparency in the availability of regulated AI-enabled Clinical Decision Support Systems (AI-CDSS) for mental healthcare. Of 84 potential products, seven fulfilled the inclusion criteria. The products fall into three major areas: diagnosis of autism spectrum disorder (ASD) based on clinical history, behavioral, and eye-tracking data; diagnosis of multiple disorders based on conversational data; and medication selection based on clinical history and genetic data. We found five scientific articles evaluating the devices' performance and external validity. The average completeness of reporting, indicated by 52% adherence to the Consolidated Standards of Reporting Trials-Artificial Intelligence (CONSORT-AI) checklist, was modest, signaling room for improvement in reporting quality. Our findings stress the importance of obtaining regulatory approval, adhering to scientific standards, and staying up to date with the latest changes in the regulatory landscape. Refining regulatory guidelines and implementing effective tracking systems for AI-CDSS could enhance transparency and oversight in the field.
Affiliation(s)
- Eva Lermer: LMU Munich, Germany; Technical University of Applied Sciences Augsburg, Germany
- Susanne Gaube: University College London, United Kingdom

12
Lee HS, Wright C, Ferranto J, Buttimer J, Palmer CE, Welchman A, Mazor KM, Fisher KA, Smelson D, O'Connor L, Fahey N, Soni A. Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop. Front Psychiatry 2025; 15:1505024. PMID: 39957757; PMCID: PMC11826059; DOI: 10.3389/fpsyt.2024.1505024.
Abstract
Background Digital mental health interventions, such as artificial intelligence (AI) conversational agents, hold promise for improving access to care by innovating therapy and supporting delivery. However, little research exists on patient perspectives regarding AI conversational agents, which is crucial for their successful implementation. This study aimed to fill the gap by exploring patients' perceptions and acceptability of AI conversational agents in mental healthcare. Methods Adults with self-reported mild to moderate anxiety were recruited from the UMass Memorial Health system. Participants engaged in semi-structured interviews to discuss their experiences, perceptions, and acceptability of AI conversational agents in mental healthcare. Anxiety levels were assessed using the Generalized Anxiety Disorder scale. Data were collected from December 2022 to February 2023, and three researchers conducted rapid qualitative analysis to identify and synthesize themes. Results The sample included 29 adults (ages 19-66), predominantly under age 35, non-Hispanic, White, and female. Participants reported a range of positive and negative experiences with AI conversational agents. Most held positive attitudes towards AI conversational agents, appreciating their utility and potential to increase access to care, yet some also expressed cautious optimism. About half endorsed negative opinions, citing AI's lack of empathy, technical limitations in addressing complex mental health situations, and data privacy concerns. Most participants desired some human involvement in AI-driven therapy and expressed concern about the risk of AI conversational agents being seen as replacements for therapy. A subgroup preferred AI conversational agents for administrative tasks rather than care provision. Conclusions AI conversational agents were perceived as useful and beneficial for increasing access to care, but concerns about AI's empathy, capabilities, safety, and human involvement in mental healthcare were prevalent. Future implementation and integration of AI conversational agents should consider patient perspectives to enhance their acceptability and effectiveness.
Affiliation(s)
- Hyein S. Lee: Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States; Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, United States
- Colton Wright: Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
- Julia Ferranto: Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
- Kathleen M. Mazor: Division of Health System Science, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
- Kimberly A. Fisher: Division of Health System Science, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
- David Smelson: Division of Health System Science, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
- Laurel O'Connor: Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States; Department of Emergency Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States
- Nisha Fahey: Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States; Department of Pediatrics, University of Massachusetts Chan Medical School, Worcester, MA, United States
- Apurv Soni: Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States; Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, United States; Division of Health System Science, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States

13
Nieminen H, Vartiainen AK, Bond R, Laukkanen E, Mulvenna M, Kuosmanen L. Recommendations for Mental Health Chatbot Conversations: An Integrative Review. J Adv Nurs 2025. PMID: 39844575; DOI: 10.1111/jan.16762.
Abstract
AIM To identify and synthesise recommendations and guidelines for mental health chatbot conversational design. DESIGN Integrative review. METHODS Publications presenting recommendations or guidelines for mental health conversational design were included. The quality of included publications was assessed using the Joanna Briggs Institute Critical Appraisal Tools. Thematic analysis was conducted. DATA SOURCES Primary searches limited to the last 10 years were conducted in PubMed, Scopus, ACM Digital Library and EBSCO databases, including APA PsycINFO, CINAHL, APA PsycArticles and MEDLINE, in February 2023 and updated in October 2023. A secondary search was conducted in Google Scholar in May 2023. RESULTS Of 1684 articles screened, 16 publications were selected. Three overarching themes were developed: (1) explicit knowledge about chatbot design and domain, (2) knowing your audience and (3) creating a safe space to engage. The results highlight that creating pleasant and effective conversations with a mental health chatbot requires careful and professional planning in advance, defining the target group and working with it to address its needs and preferences. It is essential to emphasise a pleasant user experience and safety from both technical and psychological perspectives. CONCLUSION Recommendations for mental health chatbot conversational design have evolved and become more specific in recent years. They set high standards for mental health chatbots; meeting them requires co-design and explicit knowledge of user needs, the domain and conversational design. IMPLICATIONS FOR THE PROFESSION AND/OR PATIENT CARE Mental health professionals participating in chatbot development can utilise this review. The results can also inform technical development teams that do not directly involve healthcare professionals. IMPACT Knowledge of developing mental health chatbot conversations appears scattered. Features that enhance a chatbot's ability to meet users' needs and increase safety should be considered. This review is useful for developers of mental health chatbots and other health applications used independently. REPORTING METHOD This integrative review was reported according to PRISMA guidelines, as applicable. PATIENT OR PUBLIC CONTRIBUTION No patient or public contribution.
Affiliation(s)
- Heidi Nieminen: Department of Nursing Science, University of Eastern Finland, Kuopio, Finland
- Anna-Kaisa Vartiainen: Department of Health and Social Management, University of Eastern Finland, Kuopio, Finland
- Raymond Bond: School of Computing, Ulster University, Belfast, UK
- Lauri Kuosmanen: Department of Nursing Science, University of Eastern Finland, Kuopio, Finland

14
Ovsyannikova D, de Mello VO, Inzlicht M. Third-party evaluators perceive AI as more compassionate than expert humans. Commun Psychol 2025; 3:4. PMID: 39794410; PMCID: PMC11723910; DOI: 10.1038/s44271-024-00182-6.
Abstract
Empathy connects us but strains under demanding settings. This study explored how third parties evaluated AI-generated empathetic responses versus human responses in terms of compassion, responsiveness, and overall preference across four preregistered experiments. Participants (N = 556) read empathy prompts describing valenced personal experiences and compared AI responses with those of selected non-expert or expert humans. Results revealed that AI responses were preferred and rated as more compassionate than those of selected human responders (Study 1). This pattern of results remained when author identity was made transparent (Study 2), when AI was compared to expert crisis responders (Study 3), and when author identity was disclosed to all participants (Study 4). Third parties perceived AI as more responsive-conveying understanding, validation, and care-which partially explained AI's higher compassion ratings in Study 4. These findings suggest that AI has robust utility in contexts requiring empathetic interaction, with the potential to address the increasing need for empathy in supportive communication contexts.
Affiliation(s)
- Michael Inzlicht: Department of Psychology, University of Toronto, Toronto, ON, Canada; Rotman School of Management, University of Toronto, Toronto, ON, Canada

15
Rackoff GN, Zhang ZZ, Newman MG. Chatbot-delivered mental health support: Attitudes and utilization in a sample of U.S. college students. Digit Health 2025; 11:20552076241313401. PMID: 39839954; PMCID: PMC11748072; DOI: 10.1177/20552076241313401.
Abstract
Objective Chatbots' rapid advancements raise the possibility that they can be used to deliver mental health support. However, public utilization of and opinions toward chatbots for mental health support are poorly understood. Methods Survey study of 428 U.S. university students who participated in early 2024, just over one year after the release of ChatGPT. Descriptive analyses examined utilization of and attitudes toward both traditional mental health services (i.e. psychotherapy, counseling, or medication) and chatbot-delivered mental health support. Results Nearly half (49%) of participants reported having used a chatbot for any purpose, yet only 5% reported seeking mental health support from a chatbot (8% when only considering participants with probable depression or generalized anxiety disorder). Attitudes toward traditional mental health services were broadly positive, and attitudes toward chatbot-delivered support were neutral and significantly less positive (d = 1.18, p < .001). Participants reported lack of need and doubts about helpfulness as barriers to using chatbot-delivered support more frequently than they reported them as barriers to traditional services. Cost, time, and stigma barriers were less frequently reported for chatbot-delivered support than for traditional services. Attitudes were generally consistent as a function of mental health status. Conclusion Among U.S. students, utilization of chatbots for mental health support is uncommon. Chatbots are perceived as less likely to be beneficial, yet also less affected by cost, time, and stigma barriers than traditional services. Rigorous outcome research may increase public trust in and utilization of chatbots for mental health support.
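The effect size reported above (d = 1.18) is a Cohen d; one common formulation divides the mean attitude difference by the pooled standard deviation, as in this sketch with invented ratings.

```python
# One common formulation of Cohen's d for a difference between attitude
# ratings toward traditional services and toward chatbot-delivered support.
# The ratings below are invented, not the survey's data.
import numpy as np

traditional = np.array([5.2, 4.8, 5.5, 4.9, 5.1, 4.7])  # hypothetical attitude scores
chatbot = np.array([3.4, 3.9, 3.2, 3.6, 3.8, 3.5])

pooled_sd = np.sqrt((traditional.var(ddof=1) + chatbot.var(ddof=1)) / 2)
d = (traditional.mean() - chatbot.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```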
Affiliation(s)
- Gavin N. Rackoff: Department of Psychology, The Pennsylvania State University, University Park, PA, USA
- Zhenyu Z. Zhang: Department of Psychology, The Pennsylvania State University, University Park, PA, USA
- Michelle G. Newman: Department of Psychology, The Pennsylvania State University, University Park, PA, USA

16
Chua JYX, Choolani M, Chee CYI, Yi H, Chan YH, Lalor JG, Chong YS, Shorey S. The effectiveness of Parentbot - a digital healthcare assistant - on parenting outcomes: A randomized controlled trial. Int J Nurs Stud 2024; 160:104906. PMID: 39305680; DOI: 10.1016/j.ijnurstu.2024.104906.
Abstract
BACKGROUND Transitioning to parenthood is a stressful period that makes parents more prone to depression and anxiety. Mobile application-based interventions and chatbots could improve parents' well-being across the perinatal period. Hence, the Parentbot - a Digital healthcare Assistant was developed to support parents across the perinatal period. OBJECTIVE To evaluate the effectiveness of the Parentbot - a Digital healthcare Assistant in improving parenting self-efficacy (primary outcome), stress, depression, anxiety, social support, parent-child bonding, and parenting satisfaction (secondary outcomes) among parents across the perinatal period. METHODS A two-group pre-test and repeated post-test randomized controlled trial was used, in which 118 heterosexual couples (118 mothers and 118 fathers) were recruited from a public tertiary hospital in Singapore. Couples were randomly assigned to an intervention group receiving the Parentbot - a Digital healthcare Assistant and standard care (59 couples) or a control group receiving standard care only (59 couples). Data collection occurred at baseline (>24 weeks of gestation, the age of viability in Singapore) and at one month (post-test 1) and three months (post-test 2) postpartum. Linear mixed models were used to compare parental outcomes between groups, and a linear mixed model with repeated measures was used to analyze within-group differences. General linear models were used to conduct subgroup analyses of mothers and fathers between groups. RESULTS After adjusting for baseline values and sociodemographic covariates, parents in the intervention group had higher parenting self-efficacy compared to the control group at one-month postpartum (mean difference = 1.22, 95 % CI: 0.06 to 2.39, p = 0.04; Cohen standardized effect size = 0.14), and mothers had lower state-anxiety compared to the control group at three-months postpartum (mean difference = -2.21, 95 % CI: -4.18 to -0.24, p = 0.03; Cohen standardized effect size = -0.22). Non-statistically significant differences between groups were reported for the other parental outcomes. CONCLUSIONS This study showed that the Parentbot - a Digital healthcare Assistant is feasible and promising in supporting parents, especially in enhancing their self-efficacy across the perinatal period. The lack of statistical significance in most outcomes shows that further evaluation of the intervention is required among varied populations of parents across different cultural and geographical contexts. The intervention could be enhanced to support more diverse groups of parents, including single parents, parents with high-risk pregnancies and infants with medical complications, and parents with limited English language skills. Future trials could explore the cost-effectiveness of such interventions and investigate infant outcomes for a more comprehensive assessment of mobile application-based perinatal interventions. TRIAL REGISTRATION ClinicalTrials.gov (NCT05463926).
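The linear mixed models described above can be fitted, for example, with statsmodels; the sketch below uses a random intercept per couple to account for correlated outcomes within dyads. All variable names and data are synthetic stand-ins, not the trial's dataset.

```python
# Hedged sketch of a linear mixed model like those described above, fitted
# with statsmodels: a random intercept per couple accounts for correlation
# within mother-father dyads. Names and data are synthetic stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_couples = 118
df = pd.DataFrame({
    "couple_id": np.repeat(np.arange(n_couples), 2),       # mother + father per couple
    "group": np.repeat(rng.integers(0, 2, n_couples), 2),  # 1 = intervention arm
    "baseline": rng.normal(40, 5, n_couples * 2),
})
df["self_efficacy"] = (40 + 1.2 * df["group"]
                       + 0.5 * (df["baseline"] - 40)
                       + rng.normal(0, 3, len(df)))

model = smf.mixedlm("self_efficacy ~ group + baseline", df, groups=df["couple_id"])
print(model.fit().summary())
```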
Affiliation(s)
- Joelle Yan Xin Chua: Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Mahesh Choolani: Department of Obstetrics and Gynaecology, National University Hospital, Singapore
- Huso Yi: Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Yiong Huak Chan: Biostatistics Unit, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Yap Seng Chong: Department of Obstetrics and Gynaecology, National University Hospital, Singapore
- Shefaly Shorey: Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore

17
Meadows R, Hine C. Entanglements of Technologies, Agency and Selfhood: Exploring the Complexity in Attitudes Toward Mental Health Chatbots. Cult Med Psychiatry 2024; 48:840-857. PMID: 39153178; PMCID: PMC11570556; DOI: 10.1007/s11013-024-09876-2.
Abstract
Whilst chatbots for mental health are becoming increasingly prevalent, research on user experiences and expectations is relatively scarce and equivocal on their acceptability and utility. This paper asks how people formulate their understandings of what might be appropriate in this space. We draw on data from a group of non-users who have experienced a need for support, and so can imagine the self as a therapeutic target, enabling us to tap into their imaginative speculations of the self in relation to the chatbot other and the forms of agency they see as being at play, unconstrained by any specific actual chatbot. Analysis points towards ambiguity over some key issues: whether the apps were seen as having a role in specific episodes of mental ill health or in relation to an ongoing project of supporting wellbeing; whether the chatbot could be viewed as having therapeutic agency or was a mere tool; and how far these issues related to the user's personal qualities or the specific nature of the mental health condition. A range of traditions, norms and practices were used to construct diverse expectations of whether chatbots could offer a route to cost-effective mental health support at scale.
Affiliation(s)
- Robert Meadows: Department of Sociology, University of Surrey, Guildford, GU2 7XH, UK
- Christine Hine: Department of Sociology, University of Surrey, Guildford, GU2 7XH, UK

18
Reading Turchioe M, Desai P, Harkins S, Kim J, Kumar S, Zhang Y, Joly R, Pathak J, Hermann A, Benda N. Differing perspectives on artificial intelligence in mental healthcare among patients: a cross-sectional survey study. Front Digit Health 2024; 6:1410758. PMID: 39679142; PMCID: PMC11638230; DOI: 10.3389/fdgth.2024.1410758.
Abstract
Introduction Artificial intelligence (AI) is being developed for mental healthcare, but patients' perspectives on its use are unknown. This study examined differences in attitudes towards AI being used in mental healthcare by history of mental illness, current mental health status, demographic characteristics, and social determinants of health. Methods We conducted a cross-sectional survey of an online sample of 500 adults asking about general perspectives, comfort with AI, specific concerns, explainability and transparency, responsibility and trust, and the importance of relevant bioethical constructs. Results Multiple vulnerable subgroups perceive potential harms related to AI being used in mental healthcare, place importance on upholding bioethical constructs, and would blame or reduce trust in multiple parties, including mental healthcare professionals, if harm or conflicting assessments resulted from AI. Discussion Future research examining strategies for ethical AI implementation and supporting clinician AI literacy is critical for optimal patient and clinician interactions with AI in mental healthcare.
Affiliation(s)
- Pooja Desai, Department of Biomedical Informatics, Columbia University, New York, NY, United States
- Sarah Harkins, Columbia University School of Nursing, New York, NY, United States
- Jessica Kim, Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Shiveen Kumar, College of Agriculture and Life Sciences, Cornell University, Ithaca, NY, United States
- Yiye Zhang, Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Rochelle Joly, Department of Obstetrics and Gynecology, Weill Cornell Medicine, New York, NY, United States
- Jyotishman Pathak, Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Alison Hermann, Department of Psychiatry, Weill Cornell Medicine, New York, NY, United States
- Natalie Benda, Columbia University School of Nursing, New York, NY, United States
19
Frischholz K, Tanaka H, Shidara K, Onishi K, Nakamura S. Examining the Effects of Cognitive Behavioral Therapy With a Virtual Agent on User Motivation and Improvement in Psychological Distress and Anxiety: Two-Session Experimental Study. JMIR Form Res 2024; 8:e55234. [PMID: 39405101 PMCID: PMC11522660 DOI: 10.2196/55234]
Abstract
BACKGROUND Cognitive behavioral therapy (CBT) is a valuable treatment for mood disorders and anxiety. CBT methods, such as cognitive restructuring, are employed to change automatic negative thoughts to more realistic ones. OBJECTIVE This study extends previous research by the authors, which focused on the process of correcting automatic negative thoughts to realistic ones and reducing distress and anxiety via CBT with a virtual agent. The aim was to investigate whether the previously applied virtual agent would achieve changes in automatic negative thoughts when modifications to the previous experimental paradigm were applied and when user motivation was taken into consideration. Furthermore, the potential effects of participants' existing knowledge of CBT or automatic thoughts were explored. METHODS A single-group, 2-session experiment was conducted using a within-group design. The study recruited 35 participants from May 15, 2023, to June 2, 2023, via Inter Group Corporation, with data collection following from June 5 to June 20, 2023, at Nara Institute of Science and Technology, Japan. There were 19 male and 16 female participants (age range: 18-50 years; mean 33.66, SD 10.77 years). Participants answered multiple questionnaires covering depressive symptomatology and other cognitive variables before and after a CBT session. CBT was carried out using a virtual agent, with whom participants conversed using a CBT dialogue scenario on the topic of automatic negative thoughts. Session 2 of the experiment took place 1 week after session 1. Changes in distress and state anxiety were analyzed using a Wilcoxon signed-rank test and a paired-samples t test. The relationships of motivation with cognitive changes and with changes in distress or anxiety were investigated via correlation analysis. Multiple linear regression was used to analyze whether previous knowledge of CBT and automatic negative thoughts predicted the outcome measures. RESULTS Significant reductions in distress (all P<.001) and state anxiety (all P<.003) emerged throughout the first and second experimental sessions. The CBT intervention increased participants' recognition of their negative thinking and their intention, namely their motivation, to change it. However, no clear correlations of motivation with changes in distress or anxiety were found (all P>.04). Participants reported moderate subjective changes in their cognition, which were in part positively correlated with their motivation (all P<.007). Lastly, existing knowledge of CBT did not predict reductions in distress during the first session of the experiment (P=.02). CONCLUSIONS CBT using a virtual agent and a CBT dialogue scenario was successful in reducing distress and anxiety when talking about automatic negative thoughts. The promotion of client motivation needs to be critically considered when designing interventions using CBT with a virtual agent, and further experimental investigations of the causal influences between motivation and outcome measures need to be conducted.
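For readers less familiar with the paired-comparison statistics this abstract names, the following is a minimal sketch using hypothetical pre/post distress scores; the variable names and values are invented for illustration, not taken from the study.

```python
# A minimal paired pre/post analysis of the kind described above,
# on hypothetical distress scores (not the study's data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(20, 5, 35)        # hypothetical pre-session distress scores
post = pre - rng.normal(3, 2, 35)  # hypothetical post-session scores

# Wilcoxon signed-rank test: nonparametric comparison of paired samples.
w_stat, w_p = stats.wilcoxon(pre, post)

# Paired-samples t test: the parametric counterpart.
t_stat, t_p = stats.ttest_rel(pre, post)

print(f"Wilcoxon: W={w_stat:.1f}, p={w_p:.4f}")
print(f"Paired t: t={t_stat:.2f}, p={t_p:.4f}")
```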
Affiliation(s)
- Katja Frischholz, Department of Psychology, University of Regensburg, Regensburg, Germany; Department of Information Science, Nara Institute of Science and Technology, Ikoma, Japan
- Hiroki Tanaka, Department of Information Science, Nara Institute of Science and Technology, Ikoma, Japan; Division of Arts and Sciences, International Christian University, Mitaka, Japan
- Kazuhiro Shidara, Department of Information Science, Nara Institute of Science and Technology, Ikoma, Japan
- Kazuyo Onishi, Department of Information Science, Nara Institute of Science and Technology, Ikoma, Japan
- Satoshi Nakamura, Department of Information Science, Nara Institute of Science and Technology, Ikoma, Japan
20
Held P, Pridgen SA, Chen Y, Akhtar Z, Amin D, Pohorence S. A Novel Cognitive Behavioral Therapy-Based Generative AI Tool (Socrates 2.0) to Facilitate Socratic Dialogue: Protocol for a Mixed Methods Feasibility Study. JMIR Res Protoc 2024; 13:e58195. [PMID: 39388255 PMCID: PMC11502974 DOI: 10.2196/58195]
Abstract
BACKGROUND Digital mental health tools, designed to augment traditional mental health treatments, are becoming increasingly important due to a wide range of barriers to accessing mental health care, including a growing shortage of clinicians. Most existing tools use rule-based algorithms, often leading to interactions that feel unnatural compared with human therapists. Large language models (LLMs) offer a solution for the development of more natural, engaging digital tools. In this paper, we detail the development of Socrates 2.0, which was designed to engage users in Socratic dialogue surrounding unrealistic or unhelpful beliefs, a core technique in cognitive behavioral therapies. The multiagent LLM-based tool features an artificial intelligence (AI) therapist, Socrates, which receives automated feedback from an AI supervisor and an AI rater. The combination of multiple agents appeared to help address common LLM issues such as looping, and it improved the overall dialogue experience. Initial user feedback from individuals with lived experience of mental health problems as well as from cognitive behavioral therapists has been positive. Moreover, tests in approximately 500 scenarios showed that Socrates 2.0 engaged in harmful responses in under 1% of cases, with the AI supervisor promptly correcting the dialogue each time. However, formal feasibility studies with potential end users are needed. OBJECTIVE This mixed methods study examines the feasibility of Socrates 2.0. METHODS On the basis of the initial data, we devised a formal feasibility study of Socrates 2.0 to gather qualitative and quantitative data about users' and clinicians' experiences of interacting with the tool. Using a mixed methods approach, the goal is to gather feasibility and acceptability data from 100 users and 50 clinicians to inform the eventual implementation of generative AI tools, such as Socrates 2.0, in mental health treatment. We designed this study to better understand how users and clinicians interact with the tool, including the frequency, length, and time of interactions; users' satisfaction with the tool overall; the quality of each dialogue and of individual responses; and ways in which the tool should be improved before it is used in efficacy trials. Descriptive and inferential analyses will be performed on data from validated usability measures. Thematic analysis will be performed on the qualitative data. RESULTS Recruitment began in February 2024 and is expected to conclude by February 2025. As of September 25, 2024, 55 participants had been recruited. CONCLUSIONS The development of Socrates 2.0 and the outlined feasibility study are important first steps in applying generative AI to mental health treatment delivery and lay the foundation for formal feasibility studies. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) DERR1-10.2196/58195.
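As a rough illustration of the multiagent pattern described above (a therapist agent whose drafts are checked by other agents), here is a minimal sketch assuming a generic chat-completion API; the prompts, model name, and control flow are all hypothetical and are not the actual Socrates 2.0 implementation.

```python
# Hypothetical sketch of a therapist/supervisor multiagent loop; prompts,
# model name, and structure are illustrative, not Socrates 2.0 itself.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(system_prompt: str, user_content: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
    )
    return resp.choices[0].message.content

def socratic_turn(user_message: str) -> str:
    # 1. The therapist agent drafts a Socratic reply to the user's belief.
    draft = ask(
        "You are a therapist using Socratic dialogue to gently examine "
        "unhelpful beliefs. Ask one open question at a time.",
        user_message,
    )
    # 2. The supervisor agent critiques the draft for safety and looping.
    #    (A separate rater agent could score the draft in the same way.)
    critique = ask(
        "You supervise a therapy chatbot. Point out harmful, repetitive, "
        "or non-Socratic content in the reply below.",
        draft,
    )
    # 3. The therapist agent revises its draft in light of the critique.
    return ask(
        "Revise the draft reply using the supervisor feedback, keeping a "
        f"Socratic style.\n\nDraft:\n{draft}\n\nFeedback:\n{critique}",
        user_message,
    )

print(socratic_turn("Nothing I do ever works out."))
```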
Affiliation(s)
- Philip Held, Department of Psychiatry and Behavioral Sciences, Rush University Medical Center, Chicago, IL, United States
- Sarah A Pridgen, Department of Psychiatry and Behavioral Sciences, Rush University Medical Center, Chicago, IL, United States
- Yaozhong Chen, Department of Psychiatry and Behavioral Sciences, Rush University Medical Center, Chicago, IL, United States
- Zuhaib Akhtar, Department of Psychiatry and Behavioral Sciences, Rush University Medical Center, Chicago, IL, United States
- Darpan Amin, Department of Psychiatry and Behavioral Sciences, Rush University Medical Center, Chicago, IL, United States
21
MacNeill AL, MacNeill L, Luke A, Doucet S. Health Professionals' Views on the Use of Conversational Agents for Health Care: Qualitative Descriptive Study. J Med Internet Res 2024; 26:e49387. [PMID: 39320936 PMCID: PMC11464950 DOI: 10.2196/49387]
Abstract
BACKGROUND In recent years, there has been an increase in the use of conversational agents for health promotion and service delivery. To date, health professionals' views on the use of this technology have received limited attention in the literature. OBJECTIVE The purpose of this study was to gain a better understanding of how health professionals view the use of conversational agents for health care. METHODS Physicians, nurses, and regulated mental health professionals were recruited using various web-based methods. Participants were interviewed individually using the Zoom (Zoom Video Communications, Inc) videoconferencing platform. Interview questions focused on the potential benefits and risks of using conversational agents for health care, as well as the best way to integrate conversational agents into the health care system. Interviews were transcribed verbatim and uploaded to NVivo (version 12; QSR International, Inc) for thematic analysis. RESULTS A total of 24 health professionals participated in the study (19 women, 5 men; mean age 42.75, SD 10.71 years). Participants said that the use of conversational agents for health care could have certain benefits, such as greater access to care for patients or clients and workload support for health professionals. They also discussed potential drawbacks, such as an added burden on health professionals (eg, program familiarization) and the limited capabilities of these programs. Participants said that conversational agents could be used for routine or basic tasks, such as screening and assessment, providing information and education, and supporting individuals between appointments. They also said that health professionals should have some oversight in terms of the development and implementation of these programs. CONCLUSIONS The results of this study provide insight into health professionals' views on the use of conversational agents for health care, particularly in terms of the benefits and drawbacks of these programs and how they should be integrated into the health care system. These collective findings offer useful information and guidance to stakeholders who have an interest in the development and implementation of this technology.
Affiliation(s)
- A Luke MacNeill, Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada; Department of Nursing and Health Sciences, University of New Brunswick, Saint John, NB, Canada
- Lillian MacNeill, Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada; Department of Nursing and Health Sciences, University of New Brunswick, Saint John, NB, Canada
- Alison Luke, Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada; Department of Nursing and Health Sciences, University of New Brunswick, Saint John, NB, Canada
- Shelley Doucet, Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada; Department of Nursing and Health Sciences, University of New Brunswick, Saint John, NB, Canada
22
Benda N, Desai P, Reza Z, Zheng A, Kumar S, Harkins S, Hermann A, Zhang Y, Joly R, Kim J, Pathak J, Reading Turchioe M. Patient Perspectives on AI for Mental Health Care: Cross-Sectional Survey Study. JMIR Ment Health 2024; 11:e58462. [PMID: 39293056 PMCID: PMC11447436 DOI: 10.2196/58462]
Abstract
BACKGROUND The application of artificial intelligence (AI) to health and health care is rapidly increasing. Several studies have assessed the attitudes of health professionals, but far fewer studies have explored the perspectives of patients or the general public. Studies investigating patient perspectives have focused on somatic issues, including those related to radiology, perinatal health, and general applications. Patient feedback has been elicited in the development of specific mental health care solutions, but broader perspectives toward AI for mental health care have been underexplored. OBJECTIVE This study aims to understand public perceptions regarding potential benefits of AI, concerns about AI, comfort with AI accomplishing various tasks, and values related to AI, all pertaining to mental health care. METHODS We conducted a 1-time cross-sectional survey with a nationally representative sample of 500 US-based adults. Participants provided structured responses on their perceived benefits, concerns, comfort, and values regarding AI for mental health care. They could also add free-text responses to elaborate on their concerns and values. RESULTS A plurality of participants (245/497, 49.3%) believed AI may be beneficial for mental health care, but this perspective differed based on sociodemographic variables (all P<.05). Specifically, Black participants (odds ratio [OR] 1.76, 95% CI 1.03-3.05) and those with lower health literacy (OR 2.16, 95% CI 1.29-3.78) perceived AI to be more beneficial, and women (OR 0.68, 95% CI 0.46-0.99) perceived AI to be less beneficial. Participants endorsed concerns about accuracy, possible unintended consequences such as misdiagnosis, the confidentiality of their information, and the loss of connection with their health professional when AI is used for mental health care. A majority of participants (80.4%, 402/500) valued being able to understand individual factors driving their risk, confidentiality, and autonomy as it pertained to the use of AI for their mental health. When asked who was responsible for the misdiagnosis of mental health conditions using AI, 81.6% (408/500) of participants found the health professional to be responsible. Qualitative results revealed similar concerns related to the accuracy of AI and how its use may impact the confidentiality of patients' information. CONCLUSIONS Future work involving the use of AI for mental health care should investigate strategies for conveying the level of AI's accuracy, factors that drive patients' mental health risks, and how data are used confidentially so that patients can determine with their health professionals when AI may be beneficial. It will also be important in a mental health care context to ensure the patient-health professional relationship is preserved when AI is used.
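A sketch of how odds ratios and 95% CIs like those above are typically estimated follows; the data frame and predictor names are hypothetical stand-ins, not the study's dataset, so the printed estimates are meaningless beyond illustrating the mechanics.

```python
# Illustrative sketch (not the study's code): odds ratios with 95% CIs
# for perceived benefit of AI, via logistic regression on fake data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data for 500 respondents; all variables invented.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "benefit": rng.integers(0, 2, 500),        # 1 = perceives AI as beneficial
    "black": rng.integers(0, 2, 500),          # 1 = Black participant
    "low_health_lit": rng.integers(0, 2, 500), # 1 = lower health literacy
    "woman": rng.integers(0, 2, 500),          # 1 = woman
})

model = smf.logit("benefit ~ black + low_health_lit + woman", data=df).fit()
odds_ratios = np.exp(model.params)   # exponentiated coefficients -> ORs
ci = np.exp(model.conf_int())        # 95% CIs on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```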
Affiliation(s)
- Natalie Benda, School of Nursing, Columbia University, New York, NY, United States
- Pooja Desai, Department of Biomedical Informatics, Columbia University, New York, NY, United States
- Zayan Reza, Mailman School of Public Health, Columbia University, New York, NY, United States
- Anna Zheng, Stuyvesant High School, New York, NY, United States
- Shiveen Kumar, College of Agriculture and Life Sciences, Cornell University, Ithaca, NY, United States
- Sarah Harkins, School of Nursing, Columbia University, New York, NY, United States
- Alison Hermann, Department of Psychiatry, Weill Cornell Medicine, New York, NY, United States
- Yiye Zhang, Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Rochelle Joly, Department of Obstetrics and Gynecology, Weill Cornell Medicine, New York, NY, United States
- Jessica Kim, Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Jyotishman Pathak, Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
23
Sanjeewa R, Iyer R, Apputhurai P, Wickramasinghe N, Meyer D. Empathic Conversational Agent Platform Designs and Their Evaluation in the Context of Mental Health: Systematic Review. JMIR Ment Health 2024; 11:e58974. [PMID: 39250799 PMCID: PMC11420590 DOI: 10.2196/58974]
Abstract
BACKGROUND The demand for mental health (MH) services in the community continues to exceed supply. At the same time, technological developments make the use of artificial intelligence-empowered conversational agents (CAs) a real possibility to help fill this gap. OBJECTIVE The objective of this review was to identify existing empathic CA design architectures within the MH care sector and to assess their technical performance in detecting and responding to user emotions in terms of classification accuracy. In addition, the approaches used to evaluate empathic CAs within the MH care sector in terms of their acceptability to users were considered. Finally, this review aimed to identify limitations and future directions for empathic CAs in MH care. METHODS A systematic literature search was conducted across 6 academic databases to identify journal articles and conference proceedings using search terms covering 3 topics: "conversational agents," "mental health," and "empathy." Only studies discussing CA interventions for the MH care domain were eligible for this review, with both textual and vocal characteristics considered as possible data inputs. Quality was assessed using appropriate risk of bias and quality tools. RESULTS A total of 19 articles met all inclusion criteria. Most (12/19, 63%) of these empathic CA designs in MH care were machine learning (ML) based, with 26% (5/19) hybrid engines and 11% (2/19) rule-based systems. Among the ML-based CAs, 47% (9/19) used neural networks, with transformer-based architectures being well represented (7/19, 37%). The remaining 16% (3/19) of the ML models were unspecified. Technical assessments of these CAs focused on response accuracies and their ability to recognize, predict, and classify user emotions. While single-engine CAs demonstrated good accuracy, the hybrid engines achieved higher accuracy and provided more nuanced responses. Of the 19 studies, human evaluations were conducted in 16 (84%), with only 5 (26%) focusing directly on the CA's empathic features. All these papers used self-reports for measuring empathy, including single or multiple (scale) ratings or qualitative feedback from in-depth interviews. Only 1 (5%) paper included evaluations by both CA users and experts, adding more value to the process. CONCLUSIONS The integration of CA design and its evaluation is crucial to produce empathic CAs. Future studies should focus on using a clear definition of empathy and standardized scales for empathy measurement, ideally including expert assessment. In addition, the diversity in measures used for technical assessment and evaluation poses a challenge for comparing CA performances, which future research should also address. However, CAs with good technical and empathic performance are already available to users of MH care services, showing promise for new applications, such as helpline services.
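As a concrete example of the transformer-based emotion recognition these designs rely on, here is a minimal sketch using a publicly available Hugging Face classifier as a stand-in; the model choice and selection logic are illustrative, not drawn from the reviewed systems.

```python
# Illustrative sketch (not from the review): classify the emotion in a
# user turn with a pretrained transformer, as an empathic CA might.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # example model
    top_k=None,  # return scores for all emotion classes
)

user_turn = "I've been feeling really overwhelmed and can't sleep."
scores = classifier(user_turn)
# The pipeline returns one list of {label, score} dicts per input text;
# unwrap the batch dimension if present.
if isinstance(scores[0], list):
    scores = scores[0]

# Pick the highest-scoring emotion to condition the agent's empathic reply.
top = max(scores, key=lambda s: s["score"])
print(top["label"], round(top["score"], 3))
```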
Affiliation(s)
- Ruvini Sanjeewa, School of Health Sciences, Swinburne University of Technology, Hawthorn, Australia
- Ravi Iyer, School of Health Sciences, Swinburne University of Technology, Hawthorn, Australia
- Nilmini Wickramasinghe, School of Computing, Engineering and Mathematical Sciences, La Trobe University, Bundoora, Australia
- Denny Meyer, School of Health Sciences, Swinburne University of Technology, Hawthorn, Australia
24
Wu PF, Summers C, Panesar A, Kaura A, Zhang L. AI Hesitancy and Acceptability-Perceptions of AI Chatbots for Chronic Health Management and Long COVID Support: Survey Study. JMIR Hum Factors 2024; 11:e51086. [PMID: 39045815 PMCID: PMC11287232 DOI: 10.2196/51086]
Abstract
Background Artificial intelligence (AI) chatbots have the potential to assist individuals with chronic health conditions by providing tailored information, monitoring symptoms, and offering mental health support. Despite these potential benefits, research on public attitudes toward health care chatbots is still limited. To effectively support individuals with long-term health conditions like long COVID (or post-COVID-19 condition), it is crucial to understand their perspectives and preferences regarding the use of AI chatbots. Objective This study has two main objectives: (1) provide insights into AI chatbot acceptance among people with chronic health conditions, particularly adults older than 55 years, and (2) explore perceptions of using AI chatbots for health self-management and long COVID support. Methods A web-based survey study was conducted between January and March 2023, specifically targeting individuals with diabetes and other chronic conditions. This population was chosen due to their potential awareness of and ability to self-manage their condition. The survey aimed to capture data at multiple intervals, taking into consideration the public launch of ChatGPT, which could have impacted public opinion during the project timeline. The survey received 1310 clicks and garnered 900 responses, resulting in a total of 888 usable data points. Results Although past experience with chatbots (P<.001, 95% CI .110-.302) and online information seeking (P<.001, 95% CI .039-.084) were strong indicators of future adoption of health chatbots, respondents were in general skeptical or unsure about the use of AI chatbots for health care purposes. Fewer than one-third of the respondents (n=203, 30.1%) indicated that they would be likely to use a health chatbot in the next 12 months if available. Most were uncertain about a chatbot's capability to provide accurate medical advice. However, people seemed more receptive to using voice-based chatbots for mental well-being, health data collection, and analysis. Half of the respondents with long COVID showed interest in using emotionally intelligent chatbots. Conclusions AI hesitancy is not uniform across all health domains and user groups. Despite persistent AI hesitancy, there are promising opportunities for chatbots to offer support for chronic conditions in the areas of lifestyle enhancement and mental well-being, potentially through voice-based user interfaces.
Affiliation(s)
- Philip Fei Wu, School of Business and Management, Royal Holloway, University of London, Egham, United Kingdom
- Charlotte Summers, DDM Health, Coventry, United Kingdom; Warwick Medical School, University of Warwick, Coventry, United Kingdom
- Arjun Panesar, DDM Health, Coventry, United Kingdom; Warwick Medical School, University of Warwick, Coventry, United Kingdom
- Amit Kaura, DDM Health, Coventry, United Kingdom; Imperial College Healthcare NHS Trust, London, United Kingdom
- Li Zhang, Department of Computer Science, Royal Holloway, University of London, Egham, United Kingdom
25
Laymouna M, Ma Y, Lessard D, Schuster T, Engler K, Lebouché B. Roles, Users, Benefits, and Limitations of Chatbots in Health Care: Rapid Review. J Med Internet Res 2024; 26:e56930. [PMID: 39042446 PMCID: PMC11303905 DOI: 10.2196/56930]
Abstract
BACKGROUND Chatbots, or conversational agents, have emerged as significant tools in health care, driven by advancements in artificial intelligence and digital technology. These programs are designed to simulate human conversations, addressing various health care needs. However, no comprehensive synthesis of health care chatbots' roles, users, benefits, and limitations is available to inform future research and application in the field. OBJECTIVE This review aims to describe health care chatbots' characteristics, focusing on their diverse roles in the health care pathway, user groups, benefits, and limitations. METHODS A rapid review of published literature from 2017 to 2023 was performed with a search strategy developed in collaboration with a health sciences librarian and implemented in the MEDLINE and Embase databases. Primary research studies reporting on chatbot roles or benefits in health care were included. Two reviewers dual-screened the search results. Extracted data on chatbot roles, users, benefits, and limitations were subjected to content analysis. RESULTS The review categorized chatbot roles into 2 themes: delivery of remote health services, including patient support, care management, education, skills building, and health behavior promotion, and provision of administrative assistance to health care providers. User groups spanned across patients with chronic conditions as well as patients with cancer; individuals focused on lifestyle improvements; and various demographic groups such as women, families, and older adults. Professionals and students in health care also emerged as significant users, alongside groups seeking mental health support, behavioral change, and educational enhancement. The benefits of health care chatbots were also classified into 2 themes: improvement of health care quality and efficiency and cost-effectiveness in health care delivery. The identified limitations encompassed ethical challenges, medicolegal and safety concerns, technical difficulties, user experience issues, and societal and economic impacts. CONCLUSIONS Health care chatbots offer a wide spectrum of applications, potentially impacting various aspects of health care. While they are promising tools for improving health care efficiency and quality, their integration into the health care system must be approached with consideration of their limitations to ensure optimal, safe, and equitable use.
Affiliation(s)
- Moustafa Laymouna, Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada; Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada; Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Yuanchao Ma, Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada; Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada; Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada; Department of Biomedical Engineering, Polytechnique Montréal, Montreal, QC, Canada
- David Lessard, Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada; Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada; Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Tibor Schuster, Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Kim Engler, Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada; Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada; Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Bertrand Lebouché, Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada; Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada; Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada; Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
26
Abid A, Baxter SL. Breaking Barriers in Behavioral Change: The Potential of Artificial Intelligence-Driven Motivational Interviewing. J Glaucoma 2024; 33:473-477. [PMID: 38595151 DOI: 10.1097/ijg.0000000000002382]
Abstract
Patient outcomes in ophthalmology are greatly influenced by adherence and patient participation, which can be particularly challenging in diseases like glaucoma, where medication regimens can be complex. A well-studied and evidence-based intervention for behavioral change is motivational interviewing (MI), a collaborative and patient-centered counseling approach that has been shown to improve medication adherence in glaucoma patients. However, there are many barriers to clinicians providing motivational interviewing in-office, including short visit durations within high-volume ophthalmology clinics and inadequate billing structures for counseling. Recently, large language models (LLMs), a type of artificial intelligence, have advanced to the point that they can follow instructions and sustain coherent conversations, offering novel solutions to a wide range of clinical problems. In this paper, we discuss the potential of LLMs to provide chatbot-driven MI to improve adherence in glaucoma patients and provide an example conversation as a proof of concept. We discuss the advantages of AI-driven MI, such as demonstrated effectiveness, scalability, and accessibility. We also explore the risks and limitations, including issues of safety and privacy, as well as the factual inaccuracies and hallucinations to which LLMs are susceptible. Domain-specific training may be needed to ensure the accuracy and completeness of information provided in subspecialty areas such as glaucoma. Despite the current limitations, AI-driven motivational interviewing has the potential to offer significant improvements in adherence and should be further explored to maximally leverage the potential of artificial intelligence for our patients.
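A hypothetical sketch of how an LLM might be prompted to conduct MI for eye-drop adherence follows; the system prompt, model name, and helper function are invented for illustration and are not the paper's example conversation.

```python
# Hypothetical sketch of prompting an LLM for motivational interviewing
# (MI) on glaucoma eye-drop adherence; all prompts here are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

MI_SYSTEM_PROMPT = """You are a counselor using motivational interviewing.
Follow MI principles: ask open questions, affirm, reflect, and summarize
(OARS); elicit the patient's own reasons for change ("change talk"); avoid
lecturing or unsolicited advice. Topic: daily glaucoma eye-drop adherence.
Do not give medical advice beyond adherence support; refer clinical
questions to the ophthalmologist."""

history = [{"role": "system", "content": MI_SYSTEM_PROMPT}]

def mi_reply(patient_message: str) -> str:
    # Append the patient turn, get the counselor turn, keep the history.
    history.append({"role": "user", "content": patient_message})
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(mi_reply("Honestly, I keep forgetting my evening drops."))
```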
Affiliation(s)
- Areeba Abid, Emory University School of Medicine, Atlanta, GA
- Sally L Baxter, Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego; Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA
27
D’Adamo L, Grammer AC, Rackoff GN, Shah J, Firebaugh ML, Taylor CB, Wilfley DE, Fitzsimmons-Craft EE. Rates and correlates of study enrolment and use of a chatbot aimed to promote mental health services use for eating disorders following online screening. Eur Eat Disord Rev 2024; 32:748-757. [PMID: 38502605 PMCID: PMC11144085 DOI: 10.1002/erv.3082]
Abstract
OBJECTIVE We developed a chatbot aimed at facilitating mental health services use for eating disorders (EDs) and offered the opportunity to enrol in a research study and use the chatbot to all adult respondents to a publicly available online ED screen who screened positive for clinical/subclinical EDs and reported not currently being in treatment. We examined the rates and correlates of enrolment in the study and uptake of the chatbot. METHOD Following screening, eligible respondents (≥18 years, screened positive for a clinical/subclinical ED, not in treatment for an ED) were shown the study opportunity. Chi-square tests and logistic regressions explored differences in demographics, ED symptoms, suicidality, weight, and probable ED diagnoses between those who enrolled and engaged with the chatbot and those who did not. RESULTS 6747 respondents were shown the opportunity (80.0% of all adult screens). 3.0% enrolled, of whom 90.2% subsequently used the chatbot. Enrolment and chatbot uptake were more common among respondents aged ≥25 years than among those aged 18-24, and less common among respondents who reported engaging in regular dietary restriction. CONCLUSIONS Overall enrolment was low, yet uptake was high among those who enrolled and did not differ across most demographics and symptom presentations. Future directions include evaluating respondents' attitudes towards treatment-promoting tools and removing barriers to uptake.
Affiliation(s)
- Laura D’Adamo, Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, USA; Center for Weight, Eating, and Lifestyle Science (WELL Center) and Department of Psychological and Brain Sciences, Philadelphia, PA, USA
- Anne Claire Grammer, Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, USA
- Gavin N. Rackoff, Department of Psychology, The Pennsylvania State University, University Park, PA, USA
- Jillian Shah, Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, USA
- Marie-Laure Firebaugh, Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, USA
- C. Barr Taylor, Center for m2Health, Palo Alto University, Palo Alto, CA, USA; Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA
- Denise E. Wilfley, Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, USA
28
Chua JYX, Choolani M, Chee CYI, Yi H, Chan YH, Lalor JG, Chong YS, Shorey S. Parents' Perceptions of Their Parenting Journeys and a Mobile App Intervention (Parentbot-A Digital Healthcare Assistant): Qualitative Process Evaluation. J Med Internet Res 2024; 26:e56894. [PMID: 38905628 PMCID: PMC11226932 DOI: 10.2196/56894]
Abstract
BACKGROUND Parents experience many challenges during the perinatal period. Mobile app-based interventions and chatbots show promise in delivering health care support for parents during the perinatal period. OBJECTIVE This descriptive qualitative process evaluation study aims to explore the perinatal experiences of parents in Singapore, as well as examine the user experiences of the mobile app-based intervention with an in-built chatbot titled Parentbot-a Digital Healthcare Assistant (PDA). METHODS A total of 20 heterosexual English-speaking parents were recruited via purposive sampling from a single tertiary hospital in Singapore. The parents (control group: 10/20, 50%; intervention group: 10/20, 50%) were also part of an ongoing randomized trial between November 2022 and August 2023 that aimed to evaluate the effectiveness of the PDA in improving parenting outcomes. Semistructured one-to-one interviews were conducted via Zoom from February to June 2023. All interviews were conducted in English, audio recorded, and transcribed verbatim. Data analysis was guided by the thematic analysis framework. The COREQ (Consolidated Criteria for Reporting Qualitative Research) checklist was used to guide the reporting of data. RESULTS Three themes with 10 subthemes describing parents' perceptions of their parenting journeys and their experiences with the PDA were identified. The main themes were (1) new babies, new troubles, and new wonders; (2) support system for the parents; and (3) reshaping perinatal support for future parents. CONCLUSIONS Overall, the PDA provided parents with informational, socioemotional, and psychological support and could be used to supplement the perinatal care provided for future parents. To optimize users' experience with the PDA, the intervention could be equipped with a more sophisticated chatbot, equipped with more gamification features, and programmed to deliver personalized care to parents. Researchers and health care providers could also strive to promote more peer-to-peer interactions among users. The provision of continuous, holistic, and family-centered care by health care professionals could also be emphasized. Moreover, policy changes regarding maternity and paternity leaves, availability of infant care centers, and flexible work arrangements could be further explored to promote healthy work-family balance for parents.
Affiliation(s)
- Joelle Yan Xin Chua, Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Mahesh Choolani, Department of Obstetrics and Gynaecology, National University Hospital, Singapore, Singapore
- Cornelia Yin Ing Chee, Department of Psychological Medicine, National University Hospital, Singapore, Singapore
- Huso Yi, Saw Swee Hock School of Public Health, National University of Singapore, Singapore, Singapore
- Yiong Huak Chan, Biostatistics Unit, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Yap Seng Chong, Department of Obstetrics and Gynaecology, National University Hospital, Singapore, Singapore
- Shefaly Shorey, Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
29
Priyadarshana YHPP, Senanayake A, Liang Z, Piumarta I. Prompt engineering for digital mental health: a short review. Front Digit Health 2024; 6:1410947. [PMID: 38933900 PMCID: PMC11199861 DOI: 10.3389/fdgth.2024.1410947]
Abstract
Prompt engineering, the process of arranging the input or prompts given to a large language model to guide it in producing desired outputs, is an emerging field of research that shapes how these models understand tasks, process information, and generate responses in a wide range of natural language processing (NLP) applications. Digital mental health, meanwhile, is becoming increasingly important for several reasons, including enabling early detection and intervention and mitigating the limited availability of highly skilled medical staff for clinical diagnosis. This short review outlines the latest advances in prompt engineering in the field of NLP for digital mental health. To our knowledge, this review is the first attempt to discuss the latest prompt engineering types, methods, and tasks used in digital mental health applications. We discuss three types of digital mental health tasks: classification, generation, and question answering. To conclude, we discuss the challenges, limitations, ethical considerations, and future directions of prompt engineering for digital mental health. We believe that this short review provides a useful point of departure for future research in prompt engineering for digital mental health.
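As a concrete illustration of those three task types, the following hypothetical prompt templates (not drawn from the review) show how each might be posed to a general-purpose LLM.

```python
# Illustrative prompt templates for the three digital mental health task
# types discussed above; the wording is invented, not from the review.
CLASSIFICATION_PROMPT = """Classify the writer's risk of depression as
LOW, MODERATE, or HIGH. Answer with one word.

Post: {post}"""

GENERATION_PROMPT = """Write a brief, empathic, non-judgmental reply to the
post below. Do not diagnose. Encourage professional help if risk appears.

Post: {post}"""

QA_PROMPT = """Answer the user's mental health question factually and
concisely, citing only well-established guidance.

Question: {question}"""

# Any chat-completion API could consume the filled-in template string.
print(CLASSIFICATION_PROMPT.format(post="I can't get out of bed lately."))
```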
Affiliation(s)
- Y. H. P. P. Priyadarshana, Ubiquitous and Personal Computing Lab, Faculty of Engineering, Kyoto University of Advanced Science (KUAS), Kyoto, Japan
30
Maher C, Singh B, Wylde A, Chastin S. Virtual health assistants: a grand challenge in health communications and behavior change. Front Digit Health 2024; 6:1418695. [PMID: 38827384 PMCID: PMC11140094 DOI: 10.3389/fdgth.2024.1418695]
Affiliation(s)
- Carol Maher, Alliance for Research in Exercise Nutrition and Activity (ARENA), University of South Australia, Adelaide, SA, Australia
- Ben Singh, Alliance for Research in Exercise Nutrition and Activity (ARENA), University of South Australia, Adelaide, SA, Australia
- Allison Wylde, School of Health and Life Sciences, Glasgow Caledonian University, Glasgow, United Kingdom
- Sebastien Chastin, Department of Movement and Sports Sciences, Ghent University, Ghent, Belgium
31
Clement A, Ravet M, Stanger C, Gabrielli J. Feasibility, usability, and acceptability of MobileCoach-Teen: A smartphone app-based preventative intervention for risky adolescent drinking behavior. J Subst Use Addict Treat 2024; 159:209275. [PMID: 38110119 PMCID: PMC11027171 DOI: 10.1016/j.josat.2023.209275]
Abstract
BACKGROUND Older adolescence (ages 15-18) is a critical period for experimentation with substance use, especially alcohol. Adolescent drinking poses hazards to physical and mental health, amplifies the risk associated with other activities typically initiated during this life stage (e.g., driving, sexual activity), and is associated with adverse outcomes in adolescence and adulthood. Existing preventative interventions are expensive and have questionable long-term efficacy. Digital interventions may represent an accessible and personalized approach to providing preventative intervention content to youth. METHODS This study recruited 29 adolescents aged 16-18 (M = 17.24, SD = 0.74) for a pilot feasibility trial of the MobileCoach-Teen (MC-Teen) smartphone app-based intervention. The study team randomized participants to receive either the alcohol intervention (MC-Teen) or an attention control pseudo-intervention (MC-Fit). MC-Teen participants received 12 weeks of content adapted from a prior Swiss-based trial of a preventative alcohol intervention. Participants provided qualitative and quantitative feedback at baseline, via six biweekly surveys during the intervention, and post-intervention. RESULTS Both groups rated the application as easy to download (M = 4.31, SD = 0.93; 5-point Likert scale). All participants completed the baseline survey in less than the estimated time of 10 minutes (M = 7:42, SD = 2:15) and rated the survey as easy to complete (M = 4.69, SD = 0.60; 5-point Likert scale). MC-Teen participants favorably assessed application user experience, message user experience, and digital working alliance with the application. Qualitative themes included a desire for an increased rate, amount, and diversity of content; greater representation via coach options; user interface/user experience improvements; and additional features. CONCLUSION The MC-Teen intervention is feasible and acceptable based on a pilot feasibility trial with a sample of U.S. adolescents.
Affiliation(s)
- Alex Clement, Department of Clinical and Health Psychology, University of Florida, 1225 Center Drive, Gainesville, FL, United States of America
- Mariah Ravet, Department of Clinical and Health Psychology, University of Florida, Gainesville, FL, United States of America
- Catherine Stanger, Geisel School of Medicine, Center for Technology and Behavioral Health, Dartmouth College, Hanover, NH, United States of America
- Joy Gabrielli, Department of Clinical and Health Psychology, University of Florida, Gainesville, FL, United States of America
32
Nkabane-Nkholongo E, Mpata-Mokgatle M, Jack BW, Julce C, Bickmore T. Usability and Acceptability of a Conversational Agent Health Education App (Nthabi) for Young Women in Lesotho: Quantitative Study. JMIR Hum Factors 2024; 11:e52048. [PMID: 38470460 DOI: 10.2196/52048]
Abstract
BACKGROUND Young women in Lesotho face myriad sexual and reproductive health problems. There is little time to provide health education to women in low-resource settings with critical shortages of human resources for health. OBJECTIVE This study aims to determine the acceptability and usability of a conversational agent system, the Nthabi health promotion app, which was culturally adapted for use in Lesotho. METHODS We conducted a descriptive quantitative study, using a 22-item Likert scale survey to assess perceptions of usability and acceptability among 172 young women aged 18-28 years in rural districts of Lesotho who used the system on either smartphones or tablets for up to 6 weeks. Descriptive statistics were used to calculate the averages and frequencies of the variables. χ2 tests were used to determine any associations among variables. RESULTS A total of 138 participants were enrolled and completed the survey. The mean age was 22 years; most were unmarried, 56 (40.6%) participants had completed high school, 39 (28.3%) participants were unemployed, and 88 (63.8%) participants were students. Respondents believed the app was helpful, with 134 (97.1%) participants strongly agreeing or agreeing that the app was "effective in helping them make decisions" and "could quickly improve health education and counselling." In addition, 136 (98.5%) participants strongly agreed or agreed that the app was "simple to use," 130 (94.2%) participants reported that Nthabi could "easily repeat words that were not well understood," and 128 (92.7%) participants reported that the app "could quickly load the information on the screen." Respondents were generally satisfied with the app, with 132 (95.6%) participants strongly agreeing or agreeing that the health education content delivered by the app was "well organised and delivered in a timely way," while 133 (96.4%) participants "enjoyed using the interface." They were satisfied with the cultural adaptation, with 133 (96.4%) participants strongly agreeing or agreeing that the app was "culturally appropriate and that it could be easily shared with a family or community members." They also reported that Nthabi was worthwhile, with 127 (92%) participants reporting that they strongly agreed or agreed that they were "satisfied with the application and intended to continue using it," while 135 (97.8%) participants would "encourage others to use it." Participants aged 18-24 years (vs those aged 25-28 years) more often agreed that the "Nthabi app was simple to use" (106/106, 100% vs 30/32, 98.8%; P=.01) and that "the educational content was well organised and delivered in a timely way" (104/106, 98.1% vs 28/32, 87.5%; P=.01). CONCLUSIONS These results support further study of conversational agent systems as alternatives to traditional face-to-face provision of health education services in Lesotho, where there are critical shortages of human resources for health. TRIAL REGISTRATION ClinicalTrials.gov NCT04354168; https://www.clinicaltrials.gov/study/NCT04354168.
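The age-group comparisons above rest on χ2 tests of association; here is a minimal sketch using the reported "simple to use" counts, though the study's exact procedure (e.g., whether a continuity correction was applied) may differ.

```python
# Illustrative sketch (not the study's code): chi-square test of
# association between age group and agreement, using reported counts.
import numpy as np
from scipy.stats import chi2_contingency

#                  agree  disagree
table = np.array([[106,      0],   # aged 18-24 years (106/106 agreed)
                  [ 30,      2]])  # aged 25-28 years (30/32 agreed)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
# With sparse cells like these, Fisher's exact test may be preferable:
# from scipy.stats import fisher_exact; fisher_exact(table)
```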
Affiliation(s)
- Brian W Jack, Chobanian & Avedisian School of Medicine, Boston University, Boston, MA, United States
- Clevanne Julce, UMass Chan Medical School, University of Massachusetts, Worcester, MA, United States
- Timothy Bickmore, Khoury College of Computer Sciences, Northeastern University, Boston, MA, United States
33
Xu X, Yao B, Dong Y, Gabriel S, Yu H, Hendler J, Ghassemi M, Dey AK, Wang D. Mental-LLM: Leveraging Large Language Models for Mental Health Prediction via Online Text Data. Proc ACM Interact Mob Wearable Ubiquitous Technol 2024; 8:31. [PMID: 39925940 PMCID: PMC11806945 DOI: 10.1145/3643540]
Abstract
Advances in large language models (LLMs) have empowered a variety of applications. However, there is still a significant gap in research when it comes to understanding and enhancing the capabilities of LLMs in the field of mental health. In this work, we present a comprehensive evaluation of multiple LLMs on various mental health prediction tasks via online text data, including Alpaca, Alpaca-LoRA, FLAN-T5, GPT-3.5, and GPT-4. We conduct a broad range of experiments, covering zero-shot prompting, few-shot prompting, and instruction fine-tuning. The results indicate a promising yet limited performance of LLMs with zero-shot and few-shot prompt designs for mental health tasks. More importantly, our experiments show that instruction finetuning can significantly boost the performance of LLMs for all tasks simultaneously. Our best-finetuned models, Mental-Alpaca and Mental-FLAN-T5, outperform the best prompt design of GPT-3.5 (25 and 15 times bigger) by 10.9% on balanced accuracy and the best of GPT-4 (250 and 150 times bigger) by 4.8%. They further perform on par with the state-of-the-art task-specific language model. We also conduct an exploratory case study on LLMs' capability on mental health reasoning tasks, illustrating the promising capability of certain models such as GPT-4. We summarize our findings into a set of action guidelines for potential methods to enhance LLMs' capability for mental health tasks. Meanwhile, we also emphasize the important limitations before achieving deployability in real-world mental health settings, such as known racial and gender bias. We highlight the important ethical risks accompanying this line of research.
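For reference, balanced accuracy, the metric used in these model comparisons, is the mean of per-class recall; a minimal sketch on hypothetical labels follows (the labels are invented, not the paper's data).

```python
# Illustrative computation (not the paper's code) of balanced accuracy
# on a hypothetical binary depression-detection task.
from sklearn.metrics import balanced_accuracy_score

# Hypothetical labels: 1 = post shows depression risk, 0 = it does not.
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]

# Balanced accuracy averages recall across classes, so a majority-class
# classifier scores 0.5 on an imbalanced set rather than a misleading 0.7.
print(balanced_accuracy_score(y_true, y_pred))  # ~0.762 here
```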
Affiliation(s)
- Xuhai Xu, Massachusetts Institute of Technology & University of Washington, USA
- Hong Yu, University of Massachusetts Lowell, USA
34
Kim HK. The Effects of Artificial Intelligence Chatbots on Women's Health: A Systematic Review and Meta-Analysis. Healthcare (Basel) 2024; 12:534. [PMID: 38470645 PMCID: PMC10930454 DOI: 10.3390/healthcare12050534]
Abstract
PURPOSE This systematic review and meta-analysis aimed to investigate the effects of artificial intelligence chatbot interventions on health outcomes in women. METHODS Ten relevant studies published between 2019 and 2023 were extracted from the PubMed, Cochrane Library, EMBASE, CINAHL, and RISS databases in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. This review focused on experimental studies concerning chatbot interventions in women's health. The literature was assessed using the ROB 2 quality appraisal checklist, and the results were visualized with a risk-of-bias visualization program. RESULTS This review encompassed seven randomized controlled trials and three single-group experimental studies. Chatbots were effective in addressing anxiety, depression, distress, healthy relationships, cancer self-care behavior, preconception intentions, risk perception in eating disorders, and gender attitudes. Chatbot users experienced benefits in terms of internalization, acceptability, feasibility, and interaction. A meta-analysis of three studies revealed significant effects in reducing anxiety (I2 = 0%, Q = 8.10, p < 0.017), with an effect size of -0.30 (95% CI, -0.42 to -0.18). CONCLUSIONS Artificial intelligence chatbot interventions had positive effects on physical, physiological, and cognitive health outcomes. Using chatbots may represent pivotal nursing interventions for female populations to improve health status and support women socially as a form of digital therapy.
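A minimal sketch of the inverse-variance pooling with Q and I2 heterogeneity statistics reported above follows; the three effect sizes and standard errors are hypothetical stand-ins, not the review's extracted data.

```python
# Illustrative sketch (not the review's code): fixed-effect pooling of
# standardized mean differences (SMDs) with Q and I^2 statistics.
import numpy as np
from scipy import stats

effects = np.array([-0.25, -0.35, -0.30])  # hypothetical SMDs (anxiety)
se = np.array([0.10, 0.12, 0.09])          # hypothetical standard errors

w = 1 / se**2                              # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

q = np.sum(w * (effects - pooled) ** 2)    # Cochran's Q
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100          # I^2: % variance from heterogeneity
p_q = 1 - stats.chi2.cdf(q, df)

print(f"SMD={pooled:.2f} (95% CI {lo:.2f} to {hi:.2f}), "
      f"Q={q:.2f} (p={p_q:.3f}), I^2={i2:.0f}%")
```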
Affiliation(s)
- Hyun-Kyoung Kim, Department of Nursing, Kongju National University, 56 Gongjudaehak-ro, Gongju 32588, Republic of Korea
35
Ni Z, Peng ML, Balakrishnan V, Tee V, Azwa I, Saifi R, Nelson LE, Vlahov D, Altice FL. Implementation of Chatbot Technology in Health Care: Protocol for a Bibliometric Analysis. JMIR Res Protoc 2024; 13:e54349. [PMID: 38228575 PMCID: PMC10905346 DOI: 10.2196/54349]
Abstract
BACKGROUND Chatbots have the potential to increase people's access to quality health care. However, the implementation of chatbot technology in the health care system is unclear due to the scarce analysis of publications on the adoption of chatbots in health and medical settings. OBJECTIVE This paper presents the protocol of a bibliometric analysis aimed at offering the public insights into the current state and emerging trends in research related to the use of chatbot technology for promoting health. METHODS In this bibliometric analysis, we will select published papers from the databases of CINAHL, IEEE Xplore, PubMed, Scopus, and Web of Science that pertain to chatbot technology and its applications in health care. Our search strategy includes keywords such as "chatbot," "virtual agent," "virtual assistant," "conversational agent," "conversational AI," "interactive agent," "health," and "healthcare." Five researchers who are AI engineers and clinicians will independently review the titles and abstracts of selected papers to determine their eligibility for a full-text review. The corresponding author (ZN) will serve as a mediator to address any discrepancies and disputes among the 5 reviewers. Our analysis will encompass various publication patterns of chatbot research, including the number of annual publications, their geographic or institutional distribution, and the number of annual grants supporting chatbot research, and will further summarize the methodologies used in the development of health-related chatbots, along with their features and applications in health care settings. The software tool VOSviewer (version 1.6.19; Leiden University) will be used to construct and visualize bibliometric networks. RESULTS The preparation for the bibliometric analysis began on December 3, 2021, when the research team started familiarizing themselves with the software tools that may be used in this analysis, VOSviewer and CiteSpace, during which they consulted 3 librarians at Yale University regarding search terms and tentative results. Tentative searches of the aforementioned databases yielded a total of 2340 papers. The official search phase started on July 27, 2023. Our goal is to complete the screening of papers and the analysis by February 15, 2024. CONCLUSIONS Artificial intelligence chatbots, such as ChatGPT (OpenAI Inc), have sparked numerous discussions within the health care industry regarding their impact on human health. Chatbot technology holds substantial promise for advancing health care systems worldwide. However, developing a sophisticated chatbot capable of precise interaction with health care consumers, delivering personalized care, and providing accurate health-related information and knowledge remains a considerable challenge. This bibliometric analysis seeks to fill the knowledge gap in the existing literature on health-related chatbots, entailing their applications, the software used in their development, and their preferred functionalities among users. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/54349.
Affiliation(s)
- Zhao Ni: School of Nursing, Yale University, Orange, CT, United States; Center for Interdisciplinary Research on AIDS, Yale University, New Haven, CT, United States
- Mary L Peng: Department of Global Health and Social Medicine, Harvard Medical School, Harvard University, Boston, MA, United States
- Vimala Balakrishnan: Department of Information Systems, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, Malaysia
- Vincent Tee: Centre of Excellence for Research in AIDS, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Iskandar Azwa: Centre of Excellence for Research in AIDS, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia; Infectious Disease Unit, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Rumana Saifi: Centre of Excellence for Research in AIDS, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- LaRon E Nelson: School of Nursing, Yale University, Orange, CT, United States; Center for Interdisciplinary Research on AIDS, Yale University, New Haven, CT, United States
- David Vlahov: School of Nursing, Yale University, Orange, CT, United States; Center for Interdisciplinary Research on AIDS, Yale University, New Haven, CT, United States
- Frederick L Altice: Center for Interdisciplinary Research on AIDS, Yale University, New Haven, CT, United States; Centre of Excellence for Research in AIDS, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia; Section of Infectious Disease, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, United States; Division of Epidemiology of Microbial Diseases, Yale School of Public Health, New Haven, CT, United States
36
Alanezi F. Assessing the Effectiveness of ChatGPT in Delivering Mental Health Support: A Qualitative Study. J Multidiscip Healthc 2024;17:461-471. PMID: 38314011; PMCID: PMC10838501; DOI: 10.2147/jmdh.s447368.
Abstract
Background Artificial Intelligence (AI) applications are widely researched for their potential to improve healthcare operations and disease management. However, research also shows that these applications can have significant negative implications for service delivery. Purpose To assess the use of ChatGPT for mental health support. Methods Due to the novelty and unfamiliarity of the ChatGPT technology, a quasi-experimental design was chosen for this study. Outpatients from a public hospital were included in the sample. A two-week experiment, followed by semi-structured interviews, was conducted in which participants used ChatGPT for mental health support. Semi-structured interviews were conducted with 24 individuals with mental health conditions. Results Eight positive factors (psychoeducation, emotional support, goal setting and motivation, referral and resource information, self-assessment and monitoring, cognitive behavioral therapy, crisis interventions, and psychotherapeutic exercises) and four negative factors (ethical and legal considerations, accuracy and reliability, limited assessment capabilities, and cultural and linguistic considerations) were associated with the use of ChatGPT for mental health support. Conclusion It is important to carefully consider the ethical, reliability, accuracy, and legal challenges, and to develop appropriate strategies to mitigate them, in order to ensure the safe and effective use of AI-based applications like ChatGPT in mental health support.
Affiliation(s)
- Fahad Alanezi: College of Business Administration, Department of Management Information Systems, Imam Abdulrahman Bin Faisal University, Dammam, 31441, Saudi Arabia
37
Maples B, Cerit M, Vishwanath A, Pea R. Loneliness and suicide mitigation for students using GPT3-enabled chatbots. NPJ Ment Health Res 2024;3:4. PMID: 38609517; PMCID: PMC10955814; DOI: 10.1038/s44184-023-00047-6.
Abstract
Mental health is a crisis for learners globally, and digital support is increasingly seen as a critical resource. Concurrently, Intelligent Social Agents receive exponentially more engagement than other conversational systems, but their use in digital therapy provision is nascent. A survey of 1006 student users of the Intelligent Social Agent, Replika, investigated participants' loneliness, perceived social support, use patterns, and beliefs about Replika. We found participants were more lonely than typical student populations but still perceived high social support. Many used Replika in multiple, overlapping ways-as a friend, a therapist, and an intellectual mirror. Many also held overlapping and often conflicting beliefs about Replika-calling it a machine, an intelligence, and a human. Critically, 3% reported that Replika halted their suicidal ideation. A comparative analysis of this group with the wider participant population is provided.
Affiliation(s)
- Bethanie Maples: Graduate School of Education, Stanford University, Stanford, CA, 94305, USA
- Merve Cerit: Graduate School of Education, Stanford University, Stanford, CA, 94305, USA
- Aditya Vishwanath: Graduate School of Education, Stanford University, Stanford, CA, 94305, USA
- Roy Pea: Graduate School of Education, Stanford University, Stanford, CA, 94305, USA
38
Nguyen QC, Aparicio EM, Jasczynski M, Channell Doig A, Yue X, Mane H, Srikanth N, Gutierrez FXM, Delcid N, He X, Boyd-Graber J. Rosie, a Health Education Question-and-Answer Chatbot for New Mothers: Randomized Pilot Study. JMIR Form Res 2024;8:e51361. PMID: 38214963; PMCID: PMC10818229; DOI: 10.2196/51361.
Abstract
BACKGROUND Stark disparities exist in maternal and child outcomes, and there is a need to provide timely and accurate health information. OBJECTIVE In this pilot study, we assessed the feasibility and acceptability of a health chatbot for new mothers of color. METHODS Rosie, a question-and-answer chatbot, was developed as a mobile app and is available to answer questions about pregnancy, parenting, and child development. From January 9, 2023, to February 9, 2023, participants were recruited using social media posts and through engagement with community organizations. Inclusion criteria included being aged ≥14 years, being a woman of color, and either being currently pregnant or having given birth within the past 6 months. Participants were randomly assigned to the Rosie treatment group (15/29, 52% received the Rosie app) or control group (14/29, 48% received a children's book each month) for 3 months. Those assigned to the treatment group could ask Rosie questions and receive an immediate response generated from Rosie's knowledge base. Upon detection of a possible health emergency, Rosie sends emergency resources and relevant hotline information. In addition, a study staff member, who is a clinical social worker, reaches out to the participant within 24 hours to follow up. Preintervention and postintervention tests were completed to qualitatively and quantitatively evaluate Rosie and describe changes across key health outcomes, including postpartum depression and the frequency of emergency room visits. These measurements were used to inform the clinical trial's sample size calculations. RESULTS Of 41 individuals who were screened and eligible, 31 (76%) enrolled and 29 (71%) were retained in the study. Most (13/15, 87%) Rosie treatment group members reported using Rosie daily (5/15, 33%) or weekly (8/15, 53%) across the 3-month study period. Most users reported that Rosie was easy to use (14/15, 93%) and provided responses quickly (13/15, 87%). The remaining issues identified included app crashes (8/15, 53%) and dissatisfaction with some of Rosie's answers (12/15, 80%). Mothers in both the Rosie treatment group and the control group experienced a decline in depression scores from the pretest to the posttest period, but the decline was statistically significant only among treatment group mothers (P=.008). In addition, a lower proportion of treatment group infants had emergency room visits (1/11, 9%) than control group infants (3/13, 23%). Nonetheless, no between-group differences reached statistical significance at P<.05. CONCLUSIONS Rosie was found to be an acceptable, feasible, and appropriate intervention for ethnic and racial minority pregnant women and mothers of infants, owing to the chatbot's ability to serve as a personalized, flexible tool that increases the timeliness and accessibility of high-quality health information during a period of elevated health risks for mother and child. TRIAL REGISTRATION ClinicalTrials.gov NCT06053515; https://clinicaltrials.gov/study/NCT06053515.
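The within-group pretest-posttest comparison reported above (P=.008) is the kind of analysis a paired test captures. The sketch below assumes a paired t test and uses made-up placeholder scores; the abstract does not name the specific test used.

```python
# Sketch of the kind of within-group pre/post comparison the pilot reports.
# The paired t-test is an assumption (the abstract does not name the test),
# and the scores below are made-up placeholders, not study data.
from scipy import stats

pre_scores  = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14, 11]  # hypothetical pretest depression scores
post_scores = [10, 11, 8, 12, 9, 10, 9, 13, 10, 11, 9]     # hypothetical posttest scores

t_stat, p_value = stats.ttest_rel(pre_scores, post_scores)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```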
Affiliation(s)
- Quynh C Nguyen: Department of Epidemiology and Biostatistics, University of Maryland School of Public Health, College Park, MD, United States
- Elizabeth M Aparicio: Department of Behavioral and Community Health, University of Maryland School of Public Health, College Park, MD, United States
- Michelle Jasczynski: Department of Behavioral and Community Health, University of Maryland School of Public Health, College Park, MD, United States
- Amara Channell Doig: Department of Behavioral and Community Health, University of Maryland School of Public Health, College Park, MD, United States
- Xiaohe Yue: Department of Epidemiology and Biostatistics, University of Maryland School of Public Health, College Park, MD, United States
- Heran Mane: Department of Epidemiology and Biostatistics, University of Maryland School of Public Health, College Park, MD, United States
- Neha Srikanth: Department of Computer Science, University of Maryland Institute for Advanced Computer Studies, University of Maryland, College Park, MD, United States
- Francia Ximena Marin Gutierrez: Department of Behavioral and Community Health, University of Maryland School of Public Health, College Park, MD, United States
- Nataly Delcid: Department of Epidemiology and Biostatistics, University of Maryland School of Public Health, College Park, MD, United States
- Xin He: Department of Epidemiology and Biostatistics, University of Maryland School of Public Health, College Park, MD, United States
- Jordan Boyd-Graber: Department of Computer Science, University of Maryland Institute for Advanced Computer Studies, University of Maryland, College Park, MD, United States
39
Cook D, Peters D, Moradbakhti L, Su T, Da Re M, Schuller BW, Quint J, Wong E, Calvo RA. A text-based conversational agent for asthma support: Mixed-methods feasibility study. Digit Health 2024;10:20552076241258276. PMID: 38894942; PMCID: PMC11185032; DOI: 10.1177/20552076241258276.
Abstract
Objective Millions of people in the UK have asthma, yet 70% do not access basic care, leading to the largest number of asthma-related deaths in Europe. Chatbots may extend the reach of asthma support and provide a bridge to traditional healthcare. This study evaluates 'Brisa', a chatbot designed to improve asthma patients' self-assessment and self-management. Methods We recruited 150 adults with an asthma diagnosis to test our chatbot. Participants were recruited over three waves through social media and a research recruitment platform. Eligible participants had access to 'Brisa' via a WhatsApp or website version for 28 days and completed entry and exit questionnaires to evaluate user experience and asthma control. Weekly symptom tracking, user interaction metrics, satisfaction measures, and qualitative feedback were utilised to evaluate the chatbot's usability and potential effectiveness, focusing on changes in asthma control and self-reported behavioural improvements. Results 74% of participants engaged with 'Brisa' at least once. High task completion rates were observed: asthma attack risk assessment (86%), voice recording submission (83%) and asthma control tracking (95.5%). Post use, an 8% improvement in asthma control was reported. User satisfaction surveys indicated positive feedback on helpfulness (80%), privacy (87%), trustworthiness (80%) and functionality (84%) but highlighted a need for improved conversational depth and personalisation. Conclusions The study indicates that chatbots are effective for asthma support, demonstrated by the high usage of features like risk assessment and control tracking, as well as a statistically significant improvement in asthma control. However, lower satisfaction in conversational flexibility highlights rising expectations for chatbot fluency, influenced by advanced models like ChatGPT. Future health-focused chatbots must balance conversational capability with accuracy and safety to maintain engagement and effectiveness.
Affiliation(s)
- Darren Cook: Dyson School of Design Engineering, Imperial College London, London, UK
- Dorian Peters: Dyson School of Design Engineering, Imperial College London, London, UK
- Laura Moradbakhti: Dyson School of Design Engineering, Imperial College London, London, UK
- Ting Su: Dyson School of Design Engineering, Imperial College London, London, UK
- Marco Da Re: Dyson School of Design Engineering, Imperial College London, London, UK
- Bjorn W. Schuller: Dyson School of Design Engineering, Imperial College London, London, UK
- Ernie Wong: Imperial College Healthcare NHS Trust, London, UK
- Rafael A. Calvo: Dyson School of Design Engineering, Imperial College London, London, UK
40
Tan TC, Roslan NEB, Li JW, Zou X, Chen X, Santosa A. Patient Acceptability of Symptom Screening and Patient Education Using a Chatbot for Autoimmune Inflammatory Diseases: Survey Study. JMIR Form Res 2023;7:e49239. PMID: 37219234; PMCID: PMC11019963; DOI: 10.2196/49239.
Abstract
BACKGROUND Chatbots have the potential to enhance health care interaction, satisfaction, and service delivery. However, data regarding their acceptance across diverse patient populations are limited. In-depth studies on the reception of chatbots by patients with chronic autoimmune inflammatory diseases are lacking, although such studies are vital for facilitating the effective integration of chatbots in rheumatology care. OBJECTIVE We aim to assess patient perceptions and acceptance of a chatbot designed for autoimmune inflammatory rheumatic diseases (AIIRDs). METHODS We administered a comprehensive survey in an outpatient setting at a top-tier rheumatology referral center. The target cohort included patients who interacted with a chatbot explicitly tailored to facilitate diagnosis and obtain information on AIIRDs. Following the RE-AIM (Reach, Effectiveness, Adoption, Implementation and Maintenance) framework, the survey was designed to gauge the effectiveness, user acceptability, and implementation of the chatbot. RESULTS Between June and October 2022, we received survey responses from 200 patients, with an equal number of 100 initial consultations and 100 follow-up (FU) visits. The mean scores on a 5-point acceptability scale ranged from 4.01 (SD 0.63) to 4.41 (SD 0.54), indicating consistently high ratings across the different aspects of chatbot performance. Multivariate regression analysis indicated that having a FU visit was significantly associated with a greater willingness to reuse the chatbot for symptom determination (P=.01). Further, patients' comfort with chatbot diagnosis increased significantly after meeting physicians (P<.001). We observed no significant differences in chatbot acceptance according to sex, education level, or diagnosis category. CONCLUSIONS This study underscores that chatbots tailored to AIIRDs have a favorable reception. The inclination of FU patients to engage with the chatbot signifies the possible influence of past clinical encounters and physician affirmation on its use. Although further exploration is required to refine their integration, the prevalent positive perceptions suggest that chatbots have the potential to strengthen the bridge between patients and health care providers, thus enhancing the delivery of rheumatology care to various cohorts.
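For readers unfamiliar with the analysis reported above, the sketch below illustrates one plausible form of the "multivariate regression" described: willingness to reuse regressed on visit type plus a covariate. The variable names, covariate choice, and use of ordinary least squares are all assumptions, not details from the study.

```python
# Sketch of the kind of regression described: willingness-to-reuse (5-point
# acceptability scale) regressed on visit type plus covariates. Variable names,
# the covariate set, and OLS are assumptions; the abstract says only
# "multivariate regression".
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data frame (not study data).
df = pd.DataFrame({
    "reuse_willingness": [4, 5, 3, 4, 5, 2, 4, 3, 5, 4],
    "fu_visit":          [1, 1, 0, 1, 1, 0, 0, 0, 1, 1],  # 1 = follow-up, 0 = initial consultation
    "age":               [54, 61, 38, 47, 66, 29, 43, 35, 58, 50],
})

model = smf.ols("reuse_willingness ~ fu_visit + age", data=df).fit()
print(model.summary())  # inspect the fu_visit coefficient and its p-value
```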
Affiliation(s)
- Tze Chin Tan: Department of Rheumatology and Immunology, Singapore General Hospital, Singapore, Singapore; Medicine Academic Clinical Programme, SingHealth-Duke-NUS, Singapore, Singapore
- Nur Emillia Binte Roslan: Medicine Academic Clinical Programme, SingHealth-Duke-NUS, Singapore, Singapore; Department of General Medicine, Sengkang General Hospital, Singapore, Singapore
- James Weiquan Li: Medicine Academic Clinical Programme, SingHealth-Duke-NUS, Singapore, Singapore; Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore, Singapore
- Xinying Zou: Internal Medicine Clinic, Changi General Hospital, Singapore, Singapore
- Xiangmei Chen: Internal Medicine Clinic, Changi General Hospital, Singapore, Singapore
- Anindita Santosa: Medicine Academic Clinical Programme, SingHealth-Duke-NUS, Singapore, Singapore; Division of Rheumatology and Immunology, Department of Medicine, Changi General Hospital, Singapore, Singapore
41
Xue J, Zhang B, Zhao Y, Zhang Q, Zheng C, Jiang J, Li H, Liu N, Li Z, Fu W, Peng Y, Logan J, Zhang J, Xiang X. Evaluation of the Current State of Chatbots for Digital Health: Scoping Review. J Med Internet Res 2023;25:e47217. PMID: 38113097; PMCID: PMC10762606; DOI: 10.2196/47217.
Abstract
BACKGROUND Chatbots have become ubiquitous in our daily lives, enabling natural language conversations with users through various modes of communication. Chatbots have the potential to play a significant role in promoting health and well-being. As the number of studies and available products related to chatbots continues to rise, there is a critical need to assess product features to enhance the design of chatbots that effectively promote health and behavioral change. OBJECTIVE This scoping review aims to provide a comprehensive assessment of the current state of health-related chatbots, including the chatbots' characteristics and features, user backgrounds, communication models, relational building capacity, personalization, interaction, responses to suicidal thoughts, and users' in-app experiences during chatbot use. Through this analysis, we seek to identify gaps in the current research, guide future directions, and enhance the design of health-focused chatbots. METHODS Following the scoping review methodology by Arksey and O'Malley and guided by the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist, this study used a two-pronged approach to identify relevant chatbots: (1) searching the iOS and Android App Stores and (2) reviewing scientific literature through a search strategy designed by a librarian. Overall, 36 chatbots were selected based on predefined criteria from both sources. These chatbots were systematically evaluated using a comprehensive framework developed for this study, including chatbot characteristics, user backgrounds, building relational capacity, personalization, interaction models, responses to critical situations, and user experiences. Ten coauthors were responsible for downloading and testing the chatbots, coding their features, and evaluating their performance in simulated conversations. The testing of all chatbot apps was limited to their free-to-use features. RESULTS This review provides an overview of the diversity of health-related chatbots, encompassing categories such as mental health support, physical activity promotion, and behavior change interventions. Chatbots use text, animations, speech, images, and emojis for communication. The findings highlight variations in conversational capabilities, including empathy, humor, and personalization. Notably, concerns regarding safety, particularly in addressing suicidal thoughts, were evident. Approximately 44% (16/36) of the chatbots effectively addressed suicidal thoughts. User experiences and behavioral outcomes demonstrated the potential of chatbots in health interventions, but evidence remains limited. CONCLUSIONS This scoping review underscores the significance of chatbots in health-related applications and offers insights into their features, functionalities, and user experiences. This study contributes to advancing the understanding of chatbots' role in digital health interventions, thus paving the way for more effective and user-centric health promotion strategies. This study informs future research directions, emphasizing the need for rigorous randomized control trials, standardized evaluation metrics, and user-centered design to unlock the full potential of chatbots in enhancing health and well-being. Future research should focus on addressing limitations, exploring real-world user experiences, and implementing robust data security and privacy measures.
Affiliation(s)
- Jia Xue: Factor Inwentash Faculty of Social Work, University of Toronto, Toronto, ON, Canada; Faculty of Information, University of Toronto, Toronto, ON, Canada; Artificial Intelligence for Justice Lab, University of Toronto, Toronto, ON, Canada
- Bolun Zhang: Faculty of Information, University of Toronto, Toronto, ON, Canada; Artificial Intelligence for Justice Lab, University of Toronto, Toronto, ON, Canada
- Yaxi Zhao: Faculty of Information, University of Toronto, Toronto, ON, Canada; Artificial Intelligence for Justice Lab, University of Toronto, Toronto, ON, Canada
- Qiaoru Zhang: Artificial Intelligence for Justice Lab, University of Toronto, Toronto, ON, Canada; Faculty of Arts and Science, University of Toronto, Toronto, ON, Canada
- Chengda Zheng: Artificial Intelligence for Justice Lab, University of Toronto, Toronto, ON, Canada
- Jielin Jiang: Artificial Intelligence for Justice Lab, University of Toronto, Toronto, ON, Canada
- Hanjia Li: Artificial Intelligence for Justice Lab, University of Toronto, Toronto, ON, Canada
- Nian Liu: Artificial Intelligence for Justice Lab, University of Toronto, Toronto, ON, Canada
- Ziqian Li: Artificial Intelligence for Justice Lab, University of Toronto, Toronto, ON, Canada
- Weiying Fu: Artificial Intelligence for Justice Lab, University of Toronto, Toronto, ON, Canada
- Yingdong Peng: Artificial Intelligence for Justice Lab, University of Toronto, Toronto, ON, Canada
- Judith Logan: John P Robarts Library, University of Toronto, Toronto, ON, Canada
- Jingwen Zhang: Department of Communication, University of California Davis, Davis, CA, United States
- Xiaoling Xiang: School of Social Work, University of Michigan, Ann Arbor, MI, United States
42
Cho YM, Rai S, Ungar L, Sedoc J, Guntuku SC. An Integrative Survey on Mental Health Conversational Agents to Bridge Computer Science and Medical Perspectives. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP) 2023:11346-11369. PMID: 38618627; PMCID: PMC11010238; DOI: 10.18653/v1/2023.emnlp-main.698.
Abstract
Mental health conversational agents (a.k.a. chatbots) are widely studied for their potential to offer accessible support to those experiencing mental health challenges. Previous surveys on the topic primarily consider papers published in either computer science or medicine, leading to a divide in understanding and hindering the sharing of beneficial knowledge between both domains. To bridge this gap, we conduct a comprehensive literature review using the PRISMA framework, reviewing 534 papers published in both computer science and medicine. Our systematic review reveals 136 key papers on building mental health-related conversational agents with diverse characteristics of modeling and experimental design techniques. We find that computer science papers focus on LLM techniques and evaluating response quality using automated metrics with little attention to the application while medical papers use rule-based conversational agents and outcome metrics to measure the health outcomes of participants. Based on our findings on transparency, ethics, and cultural heterogeneity in this review, we provide a few recommendations to help bridge the disciplinary divide and enable the cross-disciplinary development of mental health conversational agents.
43
Wang X, Sanders HM, Liu Y, Seang K, Tran BX, Atanasov AG, Qiu Y, Tang S, Car J, Wang YX, Wong TY, Tham YC, Chung KC. ChatGPT: promise and challenges for deployment in low- and middle-income countries. Lancet Reg Health West Pac 2023;41:100905. PMID: 37731897; PMCID: PMC10507635; DOI: 10.1016/j.lanwpc.2023.100905.
Abstract
In low- and middle-income countries (LMICs), the fields of medicine and public health grapple with numerous challenges that continue to hinder patients' access to healthcare services. ChatGPT, a publicly accessible chatbot, has emerged as a potential tool in aiding public health efforts in LMICs. This viewpoint details the potential benefits of employing ChatGPT in LMICs to improve medicine and public health encompassing a broad spectrum of domains ranging from health literacy, screening, triaging, remote healthcare support, mental health support, multilingual capabilities, healthcare communication and documentation, medical training and education, and support for healthcare professionals. Additionally, we also share potential concerns and limitations associated with the use of ChatGPT and provide a balanced discussion on the opportunities and challenges of using ChatGPT in LMICs.
Affiliation(s)
- Xiaofei Wang: Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Hayley M. Sanders: Section of Plastic Surgery, Department of Surgery, University of Michigan Medical School, Ann Arbor, MI, USA
- Yuchen Liu: Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Kennarey Seang: Grant Management Office, University of Health Sciences, Phnom Penh, Cambodia
- Bach Xuan Tran: Department of Health Economics, Institute for Preventive Medicine and Public Health, Hanoi Medical University, Hanoi, Vietnam; Institute of Health Economics and Technology, Hanoi, Vietnam
- Atanas G. Atanasov: Ludwig Boltzmann Institute Digital Health and Patient Safety, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria; Institute of Genetics and Animal Biotechnology of the Polish Academy of Sciences, Jastrzebiec, 05-552, Magdalenka, Poland
- Yue Qiu: Institute for Hospital Management, Tsinghua University, Beijing, China
- Shenglan Tang: Duke Global Health Institute, Duke University, Durham, NC, USA
- Josip Car: Centre for Population Health Sciences, Lee Kong Chian School of Medicine, Nanyang Technological University Singapore, Singapore; Department of Primary Care and Public Health, School of Public Health, Imperial College London, London, United Kingdom
- Ya Xing Wang: Beijing Institute of Ophthalmology, Beijing Ophthalmology and Visual Science Key Lab, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tien Yin Wong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China; School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China
- Yih-Chung Tham: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Centre for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore
- Kevin C. Chung: Section of Plastic Surgery, Department of Surgery, University of Michigan Medical School, Ann Arbor, MI, USA
44
Khawaja Z, Bélisle-Pipon JC. Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots. Front Digit Health 2023;5:1278186. PMID: 38026836; PMCID: PMC10663264; DOI: 10.3389/fdgth.2023.1278186.
Abstract
Artificial intelligence (AI)-powered chatbots have the potential to substantially increase access to affordable and effective mental health services by supplementing the work of clinicians. Their 24/7 availability and accessibility through a mobile phone allow individuals to obtain help whenever and wherever needed, overcoming financial and logistical barriers. Although psychological AI chatbots have the ability to make significant improvements in providing mental health care services, they do not come without ethical and technical challenges. Some major concerns include providing inadequate or harmful support, exploiting vulnerable populations, and potentially producing discriminatory advice due to algorithmic bias. Moreover, users do not always fully understand the nature of their relationship with chatbots. There can be significant misunderstandings about the exact purpose of the chatbot, particularly in terms of care expectations, its ability to adapt to the particularities of users, and its responsiveness to the needs and resources/treatments that can be offered. Hence, it is imperative that users are aware of the limited therapeutic relationship they can enjoy when interacting with mental health chatbots. Ignorance or misunderstanding of such limitations, or of the role of psychological AI chatbots, may lead to a therapeutic misconception (TM), whereby the user underestimates the restrictions of such technologies and overestimates their ability to provide actual therapeutic support and guidance. TM raises major ethical concerns that can worsen users' mental health, contributing to the global mental health crisis. This paper explores the various ways in which TM can occur, particularly through inaccurate marketing of these chatbots, the formation of a digital therapeutic alliance with them, harmful advice arising from bias in design and algorithms, and the chatbots' inability to foster autonomy in patients.
45
Ngũnjiri A, Memiah P, Kimathi R, Wagner FA, Ikahu A, Omanga E, Kweyu E, Ngunu C, Otiso L. Utilizing User Preferences in Designing the AGILE (Accelerating Access to Gender-Based Violence Information and Services Leveraging on Technology Enhanced) Chatbot. Int J Environ Res Public Health 2023;20:7018. PMID: 37947574; PMCID: PMC10647327; DOI: 10.3390/ijerph20217018.
Abstract
INTRODUCTION Technology advancements have enhanced artificial intelligence, leading to a user shift towards virtual assistants, but a human-centered approach is needed to assess acceptability and effectiveness. The AGILE chatbot was designed in Kenya to redefine the response to gender-based violence (GBV) among vulnerable populations, including adolescents, young women and men, and sexual and gender minorities, and to offer users accurate and reliable information. METHODS We conducted an exploratory qualitative study through focus group discussions (FGDs) targeting 150 participants sampled from vulnerable categories: adolescent girls and boys, young women, young men, and sexual and gender minorities. The FGDs included multiple inquiries to assess knowledge and prior interaction with intelligent conversational assistants, to inform the user-centric development of a decision-supportive chatbot, and to pilot the chatbot prototype. Each focus group comprised 9-10 members, and the discussions lasted about two hours to gain qualitative user insights and experiences. We used thematic analysis and drew on grounded theory to analyze the data. RESULTS The analysis yielded 14 salient themes: sexual violence, physical violence, emotional violence, intimate partner violence, female genital mutilation, sexual and reproductive health, mental health, help-seeking behaviors and where to seek support, who to talk to and what information participants would like, features of the chatbot, access to the chatbot, abuse and HIV, family and community conflicts, and information for self-care. CONCLUSION Adopting a human-centered approach in designing an effective chatbot with as many human features as possible is crucial to increasing utilization, addressing the gaps faced by marginalized/vulnerable populations, and reducing the current GBV epidemic by moving prevention and response services closer to people in need.
Affiliation(s)
- Anne Ngũnjiri: LVCT Health Kenya, Nairobi P.O. Box 19835-00202, Kenya
- Peter Memiah: Graduate School, University of Maryland, 620 W. Lexington Street, Baltimore, MD 21201, USA
- Robert Kimathi: LVCT Health Kenya, Nairobi P.O. Box 19835-00202, Kenya
- Fernando A. Wagner: School of Social Work, University of Maryland, 525 W. Redwood Street, Baltimore, MD 21201, USA
- Annrita Ikahu: LVCT Health Kenya, Nairobi P.O. Box 19835-00202, Kenya
- Eunice Omanga: LVCT Health Kenya, Nairobi P.O. Box 19835-00202, Kenya
- Emmanuel Kweyu: Faculty of Information Technology, Strathmore University, Nairobi P.O. Box 59857-00200, Kenya
- Carol Ngunu: Department of Health, Nairobi City County, Nairobi P.O. Box 30075-00100, Kenya
- Lilian Otiso: LVCT Health Kenya, Nairobi P.O. Box 19835-00202, Kenya
46
Alanzi T, Alsalem AA, Alzahrani H, Almudaymigh N, Alessa A, Mulla R, AlQahtani L, Bajonaid R, Alharthi A, Alnahdi O, Alanzi N. AI-Powered Mental Health Virtual Assistants' Acceptance: An Empirical Study on Influencing Factors Among Generations X, Y, and Z. Cureus 2023;15:e49486. PMID: 38156169; PMCID: PMC10753156; DOI: 10.7759/cureus.49486.
Abstract
STUDY PURPOSE This study aims to analyze the factors influencing acceptance of artificial intelligence (AI)-powered mental health virtual assistants among generations X (Gen X), Y (Gen Y), and Z (Gen Z). METHODS A cross-sectional survey design was adopted in this study. The study sample consisted of outpatients diagnosed with various mental health illnesses, such as anxiety, depression, schizophrenia, and behavioral disorders. A survey questionnaire was designed based on the factors (performance expectancy, effort expectancy, social influence, facilitating conditions, and behavioral intention) identified from the unified theory of acceptance and use of technology (UTAUT) model. Ethical approval was received from the Ethics Committee at Imam Abdulrahman Bin Faisal University, Saudi Arabia. RESULTS A total of 506 patients participated in the study, with over 80% having moderate to high experience in using mental health AI assistants. The ANOVA results for performance expectancy (PE), effort expectancy (EE), social influence (SI), facilitating conditions (FC), and behavioral intention (BI) indicate statistically significant differences (p < 0.05) among the Gen X, Gen Y, and Gen Z participants. CONCLUSION The findings underscore the significance of considering generational differences in attitudes and perceptions, with Gen Y and Gen Z demonstrating more positive attitudes and stronger intentions to use AI mental health virtual assistants, while Gen X appears to be more cautious.
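The generational comparison described above is a textbook one-way ANOVA setting. A minimal sketch, with placeholder scores rather than study data:

```python
# Sketch of the generational comparison described: a one-way ANOVA on a UTAUT
# factor score across Gen X, Y, and Z groups. The scores are placeholders,
# not values from the study.
from scipy import stats

gen_x = [3.1, 3.4, 2.9, 3.2, 3.0]  # hypothetical performance-expectancy scores
gen_y = [4.0, 4.2, 3.8, 4.1, 3.9]
gen_z = [4.3, 4.5, 4.1, 4.4, 4.2]

f_stat, p_value = stats.f_oneway(gen_x, gen_y, gen_z)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> group means differ
```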
Affiliation(s)
- Turki Alanzi: Department of Health Information Management and Technology, College of Public Health, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Hessah Alzahrani: College of Science and Humanities, Shaqra University, Shaqra, SAU
- Raghad Mulla: College of Medicine, King Abdulaziz University, Jeddah, SAU
- Lama AlQahtani: College of Medicine, Imam Muhammad Ibn Saud Islamic University, Riyadh, SAU
- Omar Alnahdi: Department of Public Health, Dr. Sulaiman AlHabib Hospital, Alkhobar, SAU
- Nouf Alanzi: Department of Clinical Laboratories Sciences, College of Applied Medical Sciences, Jouf University, Sakakah, SAU
47
Sarkar S, Gaur M, Chen LK, Garg M, Srivastava B. A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement. Front Artif Intell 2023;6:1229805. PMID: 37899961; PMCID: PMC10601652; DOI: 10.3389/frai.2023.1229805.
Abstract
Virtual Mental Health Assistants (VMHAs) continuously evolve to support the overloaded global healthcare system, which receives approximately 60 million primary care visits and 6 million emergency room visits annually. These systems, developed by clinical psychologists, psychiatrists, and AI researchers, are designed to aid in Cognitive Behavioral Therapy (CBT). The main focus of VMHAs is to provide relevant information to mental health professionals (MHPs) and engage in meaningful conversations to support individuals with mental health conditions. However, certain gaps prevent VMHAs from fully delivering on their promise during active communications. One of these gaps is their inability to explain their decisions to patients and MHPs, making conversations less trustworthy. Additionally, VMHAs can be prone to providing unsafe responses to patient queries, further undermining their reliability. In this review, we assess the current state of VMHAs on the grounds of user-level explainability and safety, a set of properties desired for the broader adoption of VMHAs. This includes an examination of ChatGPT, a conversational agent built on the GPT-3.5 and GPT-4 models, which has been proposed for use in providing mental health services. By harnessing the collaborative and impactful contributions of AI, natural language processing, and the MHP community, the review identifies opportunities for technological progress in VMHAs to ensure their capabilities include explainable and safe behaviors. It also emphasizes the importance of measures to guarantee that these advancements align with the promise of fostering trustworthy conversations.
Affiliation(s)
- Surjodeep Sarkar: Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, United States
- Manas Gaur: Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, United States
- Lujie Karen Chen: Department of Information Systems, University of Maryland, Baltimore County, Baltimore, MD, United States
- Muskan Garg: Department of AI & Informatics, Mayo Clinic, Rochester, MN, United States
- Biplav Srivastava: AI Institute, University of South Carolina, Columbia, SC, United States
48
Shah HA, Househ M. Mapping loneliness through social intelligence analysis: a step towards creating global loneliness map. BMJ Health Care Inform 2023;30:e100728. PMID: 37827723; PMCID: PMC10583034; DOI: 10.1136/bmjhci-2022-100728.
Abstract
OBJECTIVES Loneliness is a prevalent global public health concern with complex dynamics requiring further exploration. This study aims to enhance understanding of loneliness dynamics by building towards a global loneliness map using social intelligence analysis. SETTINGS AND DESIGN This paper presents a proof of concept for the global loneliness map, using data collected in October 2022. Twitter posts containing keywords such as 'lonely', 'loneliness', 'alone', 'solitude' and 'isolation' were gathered, resulting in 841 796 tweets from the USA. City-specific data were extracted from these tweets to construct a loneliness map for the country. Sentiment analysis using the Valence Aware Dictionary and Sentiment Reasoner (VADER) tool was employed to differentiate metaphorical expressions from meaningful correlations between loneliness and socioeconomic and emotional factors. MEASURES AND RESULTS The sentiment analysis encompassed the USA dataset and city-wise subsets, identifying negative sentiment tweets. Psychosocial linguistic features of these negative tweets were analysed to reveal significant connections between loneliness, socioeconomic aspects and emotional themes. Word clouds depicted topic variations between positively and negatively toned tweets. A frequency list of correlated topics within broader socioeconomic and emotional categories was generated from negative sentiment tweets. Additionally, a comprehensive table displayed the top correlated topics for each city. CONCLUSIONS Leveraging social media data provides insights into the multifaceted nature of loneliness. Given its subjectivity, loneliness experiences exhibit variability. This study serves as a proof of concept for an extensive global loneliness map, holding implications for global public health strategies and policy development. Understanding loneliness dynamics on a larger scale can facilitate targeted interventions and support.
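VADER is an off-the-shelf, rule-based sentiment scorer, so the filtering step described above is straightforward to reproduce in outline. A minimal sketch, using VADER's conventional compound-score cutoff of -0.05 (the paper does not state its threshold):

```python
# Sketch of the VADER-based filtering step described: score each tweet and keep
# the negative ones for downstream linguistic analysis. The -0.05 compound
# threshold is VADER's conventional cutoff, assumed here rather than taken
# from the paper. Requires `pip install vaderSentiment`.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

tweets = [  # hypothetical examples, not study data
    "I feel so alone tonight, nobody ever calls",
    "Home alone with pizza and a movie, best night ever!",
]

negative_tweets = [
    t for t in tweets
    if analyzer.polarity_scores(t)["compound"] <= -0.05
]
print(negative_tweets)
```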
Affiliation(s)
- Hurmat Ali Shah: Hamad Bin Khalifa University, College of Science and Engineering, Doha, Ad-Dawhah, Qatar
- Mowafa Househ: Hamad Bin Khalifa University, College of Science and Engineering, Doha, Ad-Dawhah, Qatar
49
Wutz M, Hermes M, Winter V, Köberlein-Neu J. Factors Influencing the Acceptability, Acceptance, and Adoption of Conversational Agents in Health Care: Integrative Review. J Med Internet Res 2023;25:e46548. PMID: 37751279; PMCID: PMC10565637; DOI: 10.2196/46548.
Abstract
BACKGROUND Conversational agents (CAs), also known as chatbots, are digital dialog systems that enable people to have a text-based, speech-based, or nonverbal conversation with a computer or another machine based on natural language via an interface. The use of CAs offers new opportunities and various benefits for health care. However, they are not yet ubiquitous in daily practice. Nevertheless, research regarding the implementation of CAs in health care has grown tremendously in recent years. OBJECTIVE This review aims to present a synthesis of the factors that facilitate or hinder the implementation of CAs from the perspectives of patients and health care professionals. Specifically, it focuses on the early implementation outcomes of acceptability, acceptance, and adoption as cornerstones of later implementation success. METHODS We performed an integrative review. To identify relevant literature, a broad literature search was conducted in June 2021 with no date limits and using all fields in PubMed, Cochrane Library, Web of Science, LIVIVO, and PsycINFO. To keep the review current, another search was conducted in March 2022. To identify as many eligible primary sources as possible, we used a snowballing approach by searching reference lists and conducted a hand search. Factors influencing the acceptability, acceptance, and adoption of CAs in health care were coded through parallel deductive and inductive approaches, which were informed by current technology acceptance and adoption models. Finally, the factors were synthesized in a thematic map. RESULTS Overall, 76 studies were included in this review. We identified influencing factors related to 4 core Unified Theory of Acceptance and Use of Technology (UTAUT) and Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) factors (performance expectancy, effort expectancy, facilitating conditions, and hedonic motivation), with most studies underlining the relevance of performance and effort expectancy. To meet the particularities of the health care context, we redefined the UTAUT2 factors social influence, habit, and price value. We identified 6 other influencing factors: perceived risk, trust, anthropomorphism, health issue, working alliance, and user characteristics. Overall, we identified 10 factors influencing acceptability, acceptance, and adoption among health care professionals (performance expectancy, effort expectancy, facilitating conditions, social influence, price value, perceived risk, trust, anthropomorphism, working alliance, and user characteristics) and 13 factors influencing acceptability, acceptance, and adoption among patients (additionally hedonic motivation, habit, and health issue). CONCLUSIONS This review shows manifold factors influencing the acceptability, acceptance, and adoption of CAs in health care. Knowledge of these factors is fundamental for implementation planning. Therefore, the findings of this review can serve as a basis for future studies to develop appropriate implementation strategies. Furthermore, this review provides an empirical test of current technology acceptance and adoption models and identifies areas where additional research is necessary. TRIAL REGISTRATION PROSPERO CRD42022343690; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=343690.
Affiliation(s)
- Maximilian Wutz: Center for Health Economics and Health Services Research, Schumpeter School of Business and Economics, University of Wuppertal, Wuppertal, Germany
- Marius Hermes: Center for Health Economics and Health Services Research, Schumpeter School of Business and Economics, University of Wuppertal, Wuppertal, Germany
- Vera Winter: Center for Health Economics and Health Services Research, Schumpeter School of Business and Economics, University of Wuppertal, Wuppertal, Germany
- Juliane Köberlein-Neu: Center for Health Economics and Health Services Research, Schumpeter School of Business and Economics, University of Wuppertal, Wuppertal, Germany
50
Chua JYX, Choolani M, Chee CYI, Yi H, Chan YH, Lalor JG, Chong YS, Shorey S. 'Parentbot - A Digital healthcare Assistant (PDA)': A mobile application-based perinatal intervention for parents: Development study. Patient Educ Couns 2023;114:107805. PMID: 37245443; DOI: 10.1016/j.pec.2023.107805.
Abstract
OBJECTIVE To describe the development procedure of a mobile application-based parenting support program with integrated chatbot features entitled Parentbot - a Digital healthcare Assistant (PDA) for multi-racial Singaporean parents across the perinatal period. METHODS The PDA development process was guided by the combined information systems research framework with design thinking modes, and Tuckman's model of team development. A user acceptability testing (UAT) process was conducted among 11 adults of child-bearing age. Feedback was obtained using a custom-made evaluation form and the 26-item User Experience Questionnaire. RESULTS The combined information systems research framework with design thinking modes helped researchers to successfully create a PDA prototype based on end-users' needs. Results from the UAT process indicated that the PDA provided participants with an overall positive user experience. Feedback gathered from UAT participants was used to enhance the PDA. CONCLUSION Although the effectiveness of the PDA in improving parental outcomes during the perinatal period is still being evaluated, this paper highlights the key details of developing a mobile application-based parenting intervention which future studies could learn from. PRACTICE IMPLICATIONS Having carefully planned timelines with margins of delays, extra funds to resolve technical issues, team cohesion, and an experienced leader can facilitate intervention development.
Affiliation(s)
- Joelle Yan Xin Chua: Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Mahesh Choolani: Department of Obstetrics and Gynaecology, National University Hospital, Singapore
- Huso Yi: Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Yiong Huak Chan: Biostatistics Unit, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Yap Seng Chong: Department of Obstetrics and Gynaecology, National University Hospital, Singapore
- Shefaly Shorey: Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore