1
Rahsepar Meadi M, Sillekens T, Metselaar S, van Balkom A, Bernstein J, Batelaan N. Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review. JMIR Ment Health 2025;12:e60432. [PMID: 39983102] [PMCID: PMC11890142] [DOI: 10.2196/60432]
Abstract
Background: Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns.
Objective: We aimed to provide a comprehensive overview of ethical considerations surrounding CAI as a therapist for individuals with mental health issues.
Methods: We conducted a systematic search across PubMed, Embase, APA PsycINFO, Web of Science, Scopus, the Philosopher's Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist for individuals with mental health issues. We added additional articles through snowball searching. We included articles in English or Dutch. All types of articles were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on the expected considerations and revised and complemented during the charting process. The ethical challenges were divided into themes. When a concern occurred in more than 2 articles, we identified it as a distinct theme.
Results: We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%) followed by commentaries (n=17, 16.8%). The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles); the most common topics within this theme were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including topics such as the effects of "black box" algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including themes such as health inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns for health care workers' jobs (n=16, 15.8%). Other themes were discussed in 9.9% (n=10) of the identified articles.
Conclusions: Our scoping review has comprehensively covered ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders' perspectives are insufficiently represented, this study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison to human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for responsible CAI use in mental health care.
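The theme-identification rule in the Methods (a concern counted as a distinct theme once it occurred in more than 2 articles, reported as n (%) of included articles) can be illustrated with a minimal tally sketch. The data structure, article IDs, and theme assignments below are invented for illustration; the review's actual charting form is not reproduced in this listing.

from collections import Counter

# Hypothetical charting data: each included article mapped to the ethical
# concerns charted for it (IDs and assignments invented for illustration).
articles = {
    "article_001": {"safety and harm", "privacy and confidentiality"},
    "article_002": {"privacy and confidentiality", "justice"},
    "article_003": {"safety and harm", "justice", "autonomy"},
    "article_004": {"safety and harm", "privacy and confidentiality"},
}

counts = Counter(theme for themes in articles.values() for theme in themes)
n_total = len(articles)  # n=101 in the actual review

# The review's rule: a concern occurring in more than 2 articles is a theme.
for theme, n in counts.most_common():
    if n > 2:
        print(f"{theme}: discussed in {n}/{n_total} ({100 * n / n_total:.1f}%)")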
Affiliation(s)
- Mehrdad Rahsepar Meadi: Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Department of Ethics, Law, & Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Tomas Sillekens: GGZ Centraal Mental Health Care, Amersfoort, The Netherlands
- Suzanne Metselaar: Department of Ethics, Law, & Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Anton van Balkom: Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Justin Bernstein: Department of Philosophy, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Neeltje Batelaan: Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
2
Thapa S, Adhikari S. GPT-4o and multimodal large language models as companions for mental wellbeing. Asian J Psychiatr 2024;99:104157. [PMID: 39053243] [DOI: 10.1016/j.ajp.2024.104157]
3
Omar M, Soffer S, Charney AW, Landi I, Nadkarni GN, Klang E. Applications of large language models in psychiatry: a systematic review. Front Psychiatry 2024;15:1422807. [PMID: 38979501] [PMCID: PMC11228775] [DOI: 10.3389/fpsyt.2024.1422807]
Abstract
Background: With their unmatched ability to interpret and engage with human language and context, large language models (LLMs) hint at the potential to bridge AI and human cognitive processes. This review explores the current application of LLMs, such as ChatGPT, in the field of psychiatry.
Methods: We followed PRISMA guidelines and searched PubMed, Embase, Web of Science, and Scopus up to March 2024.
Results: From 771 retrieved articles, we included 16 that directly examine the use of LLMs in psychiatry. LLMs, particularly ChatGPT and GPT-4, showed diverse applications in clinical reasoning, social media, and education within psychiatry. They can assist in diagnosing mental health issues, managing depression, evaluating suicide risk, and supporting education in the field. However, our review also points out their limitations, such as difficulties with complex cases and potential underestimation of suicide risks.
Conclusion: Early research in psychiatry reveals LLMs' versatile applications, from diagnostic support to educational roles. Given the rapid pace of advancement, future investigations are poised to explore the extent to which these models might redefine traditional roles in mental health care.
Affiliation(s)
- Mahmud Omar: Faculty of Medicine, Tel-Aviv University, Tel-Aviv, Israel
- Shelly Soffer: Internal Medicine B, Assuta Medical Center, Ashdod, Israel; Ben-Gurion University of the Negev, Be'er Sheva, Israel
- Isotta Landi: Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Girish N Nadkarni: Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Eyal Klang: Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, United States
4
Garg S, Chauhan A. Artificial intelligence GPT-4: A game changer in the advancement of psychiatric rehabilitation in the new millennium. Asian J Psychiatr 2024;95:103972. [PMID: 38447287] [DOI: 10.1016/j.ajp.2024.103972]
Affiliation(s)
- Sunny Garg: Department of Psychiatry, Bhagat Phool Singh Government Medical College for Women Khanpur Kalan, Sonipat, Haryana, India
- Alka Chauhan: Department of Respiratory Medicine, Ursula Horsman Memorial Hospital, Bada Chauraha, Kanpur Nagar, Uttar Pradesh, India
5
Sakuraya A, Matsumura M, Komatsu S, Imamura K, Iida M, Kawakami N. Statement on use of generative artificial intelligence by adolescents. Asian J Psychiatr 2024;94:103947. [PMID: 38364747] [DOI: 10.1016/j.ajp.2024.103947]
Affiliation(s)
- Asuka Sakuraya: Department of Digital Mental Health, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Masayo Matsumura: Department of Medical VR, Kochi Medical School, Kohasu, Oko-cho, Nankoku-shi, Kochi, Japan; BiPSEE Inc., Shibuya Dogenzaka Tokyu Bldg. #2F-C, 1-10-8 Dogenzaka, Shibuya-ku, Tokyo 150-0043, Japan
- Shohei Komatsu: Department of Medical VR, Kochi Medical School, Kohasu, Oko-cho, Nankoku-shi, Kochi, Japan; BiPSEE Inc., Shibuya Dogenzaka Tokyu Bldg. #2F-C, 1-10-8 Dogenzaka, Shibuya-ku, Tokyo 150-0043, Japan
- Kotaro Imamura: Department of Digital Mental Health, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Mako Iida: Department of Mental Health, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Norito Kawakami: Department of Digital Mental Health, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
6
Ray PP. Generative AI in psychiatry: A potential companion in the current therapeutic era! Asian J Psychiatr 2024;94:103929. [PMID: 38350325] [DOI: 10.1016/j.ajp.2024.103929]
7
Liu XQ, Zhang ZR. Potential use of large language models for mitigating students' problematic social media use: ChatGPT as an example. World J Psychiatry 2024;14:334-341. [PMID: 38617990] [PMCID: PMC11008388] [DOI: 10.5498/wjp.v14.i3.334]
Abstract
Problematic social media use has numerous negative effects on individuals' daily lives, interpersonal relationships, and physical and mental health. Few methods and tools currently exist to alleviate it, and their potential has yet to be fully realized. Emerging large language models (LLMs) are increasingly popular for providing information and assistance and are being applied in many aspects of life. LLMs such as ChatGPT can play a positive role in mitigating problematic social media use by serving as conversational partners and outlets for users, providing personalized information and resources, and monitoring and intervening in problematic use. In doing so, we should recognize the enormous potential of LLMs such as ChatGPT and leverage their advantages to better address problematic social media use, while also acknowledging their limitations and pitfalls, such as errors, limits on the issues they can resolve, privacy and security concerns, and potential overreliance. Leveraging LLMs to address problems in social media usage therefore requires a cautious and ethical approach, vigilant to the potential adverse effects LLMs may have, so that the technology better serves individuals and society.
Affiliation(s)
- Xin-Qiao Liu: School of Education, Tianjin University, Tianjin 300350, China
- Zi-Ru Zhang: School of Education, Tianjin University, Tianjin 300350, China
8
Nedbal C, Naik N, Castellani D, Gauhar V, Geraghty R, Somani BK. ChatGPT in urology practice: revolutionizing efficiency and patient care with generative artificial intelligence. Curr Opin Urol 2024;34:98-104. [PMID: 37962176] [DOI: 10.1097/mou.0000000000001151]
Abstract
Purpose of review: ChatGPT has emerged as a potentially useful tool for healthcare. Its role in urology is in its infancy, with much potential for research, clinical practice, and patient assistance. With this narrative review, we draw a picture of what is known about ChatGPT's integration in urology, alongside future promises and challenges.
Recent findings: The use of ChatGPT can ease administrative work, helping urologists with note-taking and clinical documentation such as discharge summaries and clinical notes. It can improve patient engagement by increasing awareness and facilitating communication, as has especially been investigated for uro-oncological diseases. Its ability to understand human emotions makes ChatGPT an empathic and thoughtful interactive tool or source for urological patients and their relatives. Currently, its role in clinical diagnosis and treatment decisions is uncertain, as concerns have been raised about misinterpretation, hallucination, and out-of-date information. Moreover, a mandatory regulatory process for ChatGPT in urology has yet to be established.
Summary: ChatGPT has the potential to contribute to precision medicine and tailored practice through its quick, structured responses. However, this will depend on how well information can be obtained by asking the pertinent questions and seeking appropriate responses. The key lies in validating the responses, regulating the information shared, and avoiding misuse in order to protect data and patient privacy. Its successful integration into mainstream urology requires educational bodies to provide guidelines or best-practice recommendations.
Affiliation(s)
- Carlotta Nedbal: Department of Urology, University Hospitals Southampton NHS Trust, Southampton, UK; Urology Unit, Azienda Ospedaliero-Universitaria delle Marche, Polytechnic University of Marche, Ancona, Italy
- Nitesh Naik: Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Daniele Castellani: Urology Unit, Azienda Ospedaliero-Universitaria delle Marche, Polytechnic University of Marche, Ancona, Italy
- Vineet Gauhar: Department of Urology, Ng Teng Fong General Hospital, NUHS, Singapore
- Robert Geraghty: Department of Urology, Freeman Hospital, Newcastle-upon-Tyne, UK
- Bhaskar Kumar Somani: Department of Urology, University Hospitals Southampton NHS Trust, Southampton, UK
9
Garg S. ChatGPT is still struggling to revolutionize mental health policy. Asian J Psychiatr 2024;93:103906. [PMID: 38217966] [DOI: 10.1016/j.ajp.2023.103906]
Affiliation(s)
- Sunny Garg: Department of Psychiatry, Bhagat Phool Singh Government Medical College for Women Khanpur Kalan, Sonipat, Haryana, India
10
Wei Y, Guo L, Lian C, Chen J. ChatGPT: Opportunities, risks and priorities for psychiatry. Asian J Psychiatr 2023;90:103808. [PMID: 37898100] [DOI: 10.1016/j.ajp.2023.103808]
Abstract
The advancement of large language models such as ChatGPT opens new possibilities in psychiatry but also invites scrutiny. This paper examines the potential opportunities, risks, and crucial areas of focus within this area. The active engagement of the mental health community is seen as critical to ensuring ethical practice, equal access, and a patient-centric approach.
Affiliation(s)
- Yaohui Wei: Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Department of Psychiatry and Psychotherapy, University Hospital Rechts der Isar, Technical University of Munich, Munich, Germany
- Lei Guo: Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Cheng Lian: Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jue Chen: Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
11
Franco D'Souza R, Amanullah S, Mathew M, Surapaneni KM. Appraising the performance of ChatGPT in psychiatry using 100 clinical case vignettes. Asian J Psychiatr 2023;89:103770. [PMID: 37812998] [DOI: 10.1016/j.ajp.2023.103770]
Abstract
Background: ChatGPT has emerged as the most advanced and rapidly developing large language chatbot system. With immense potential ranging from answering simple queries to passing highly competitive medical exams, ChatGPT continues to impress scientists and researchers worldwide, prompting further discussion of its utility in various fields. One such field of attention is psychiatry. With suboptimal diagnosis and treatment, assuring mental health and well-being is a challenge in many countries, particularly developing nations. We therefore evaluated the performance of ChatGPT 3.5 in psychiatry using clinical cases, to provide evidence-based information on its implications for enhancing mental health and well-being.
Methods: In this experimental study, ChatGPT 3.5 was used to initiate conversations and collect responses to 100 clinical case vignettes in psychiatry, representing 100 different psychiatric illnesses. The initial ChatGPT 3.5 responses were recorded and assessed by expert faculty from the Department of Psychiatry. Responses were evaluated against the questions posed at the end of each case, whose aims were divided into 10 categories. Grading was completed by taking the mean of the scores provided by the evaluators, and the grades were presented in graphs and tables.
Results: ChatGPT 3.5 fared extremely well in psychiatry, receiving "Grade A" ratings in 61 of 100 cases, "Grade B" ratings in 31, and "Grade C" ratings in 8. The majority of queries concerned management strategies, followed by diagnosis, differential diagnosis, assessment, investigation, counselling, clinical reasoning, ethical reasoning, prognosis, and request acceptance. ChatGPT 3.5 performed especially well in generating management strategies and diagnoses for different psychiatric conditions. No responses were graded "D", indicating no errors in diagnosis or in responses for clinical care; only a few discrepancies and missing details were noted in the responses that received a "Grade C".
Conclusion: It is evident from our study that ChatGPT 3.5 has appreciable knowledge and interpretation skills in psychiatry. ChatGPT 3.5 thus has the potential to transform the field of medicine, and we emphasize its utility in psychiatry through our findings. However, for any AI model to be successful, reliability, validation of information, and proper guidelines and implementation frameworks are necessary.
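The grading procedure described in the Methods (evaluator scores averaged per vignette and mapped to a letter grade) can be sketched as follows. The numeric scale, cut-offs, and scores are assumptions for illustration only, since the abstract reports the resulting letter grades but not the underlying scale.

from statistics import mean

# Hypothetical evaluator scores per vignette on an assumed 1-4 scale
# (the abstract does not specify the scale; values are illustrative).
scores = {
    "case_001": [4, 4, 3],
    "case_002": [2, 3, 2],
    "case_003": [4, 3, 4],
}

def letter_grade(avg):
    # Map a mean score to a letter grade using assumed cut-offs.
    for threshold, grade in ((3.5, "A"), (2.5, "B"), (1.5, "C")):
        if avg >= threshold:
            return grade
    return "D"

for case, vals in scores.items():
    avg = mean(vals)
    print(f"{case}: mean score {avg:.2f} -> Grade {letter_grade(avg)}")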
Affiliation(s)
- Russell Franco D'Souza: International Institute of Organisational Psychological Medicine, 71 Cleeland Street, Dandenong, Melbourne, Victoria 3175, Australia
- Shabbir Amanullah: Division of Geriatric Psychiatry, Queen's University, 752 King Street West, Postal Bag 603, Kingston, ON K7L 7X3, Canada
- Mary Mathew: Department of Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Tiger Circle Road, Madhav Nagar, Manipal, Karnataka 576104, India
- Krishna Mohan Surapaneni: Department of Biochemistry, Panimalar Medical College Hospital & Research Institute, Varadharajapuram, Poonamallee, Chennai 600123, Tamil Nadu, India; Departments of Medical Education, Molecular Virology, Research, Clinical Skills & Simulation, Panimalar Medical College Hospital & Research Institute, Varadharajapuram, Poonamallee, Chennai 600123, Tamil Nadu, India
12
Thornton J, Tandon R. Does machine-learning-based prediction of suicide risk actually reduce rates of suicide: A critical examination. Asian J Psychiatr 2023;88:103769. [PMID: 37741111] [DOI: 10.1016/j.ajp.2023.103769]
Affiliation(s)
- Joseph Thornton: Department of Psychiatry, University of Florida College of Medicine, Gainesville, FL 32608, USA
- Rajiv Tandon: Department of Psychiatry, Western Michigan University Homer Stryker MD School of Medicine, Kalamazoo, MI 49048, USA