1
McConnon AD, Nash AJ, Roberts JR, Juni SZ, Derenbecker A, Shanahan P, Waters AJ. Incorporating AI Into Military Behavioral Health: A Narrative Review. Mil Med 2025:usaf162. [PMID: 40327321] [DOI: 10.1093/milmed/usaf162]
Abstract
INTRODUCTION: Concerns regarding suicide rates and declining mental health among service members highlight the need for impactful approaches to address the behavioral health needs of U.S. military populations and to improve force readiness. Research in civilian populations has shown that artificial intelligence and machine learning (AI/ML) hold promise for advancing behavioral health care in six domains: Education and Training, Screening and Assessment, Diagnosis, Treatment, Prognosis, and Clinical Documentation and Administrative Tasks.
MATERIALS AND METHODS: We conducted a narrative review of research in U.S. military populations, published between 2019 and 2024, involving AI/ML in behavioral health. Studies were extracted from Embase, PubMed, PsycInfo, and the Defense Technical Information Center. Nine studies were considered appropriate for the review.
RESULTS: Compared with civilian populations, there has been much less research in U.S. military populations on the use of AI/ML in behavioral health. The selected studies show that ML holds promise for screening and assessment, such as predicting negative mental health outcomes in military populations. ML has also been applied to diagnosis and prognosis, with positive initial results. More research is needed to validate the results of the studies reviewed.
CONCLUSIONS: There is potential for AI/ML to be applied more extensively to military behavioral health, including education/training, treatment, and clinical documentation/administrative tasks. The article describes challenges for further integration of AI into military behavioral health from the perspectives of service members, providers, and system-level infrastructure.
Affiliation(s)
- Ann D McConnon
- Department of Medical and Clinical Psychology, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, United States
- Airyn J Nash
- Department of Medical and Clinical Psychology, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, United States
- John Ray Roberts
- Department of Medical and Clinical Psychology, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, United States
- Shmuel Z Juni
- Department of Medical and Clinical Psychology, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, United States
- Ashley Derenbecker
- Department of Medical and Clinical Psychology, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, United States
- Patrice Shanahan
- Department of Medical and Clinical Psychology, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, United States
- Andrew J Waters
- Department of Medical and Clinical Psychology, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, United States
2
Bouguettaya A, Team V, Stuart EM, Aboujaoude E. AI-driven report-generation tools in mental healthcare: A review of commercial tools. Gen Hosp Psychiatry 2025; 94:150-158. [PMID: 40088857] [DOI: 10.1016/j.genhosppsych.2025.02.018]
Abstract
Artificial intelligence (AI) systems are increasingly being integrated in clinical care, including for AI-powered note-writing. We aimed to develop and apply a scale for assessing mental health electronic health records (EHRs) that use large language models (LLMs) for note-writing, focusing on their features, security, and ethics. The assessment involved analyzing product information and directly querying vendors about their systems. On their websites, the majority of vendors provided comprehensive information on data protection, privacy measures, multi-platform availability, patient access features, software update history, and Meaningful Use compliance. Most products clearly indicated the LLM's capabilities in creating customized reports or functioning as a co-pilot. However, critical information was often absent, including details on LLM training methodologies, the specific LLM used, bias correction techniques, and methods for evaluating the evidence base. The lack of transparency regarding LLM specifics and bias mitigation strategies raises concerns about the ethical implementation and reliability of these systems in clinical practice. While LLM-enhanced EHRs show promise in alleviating the documentation burden for mental health professionals, there is a pressing need for greater transparency and standardization in reporting LLM-related information. We propose recommendations for the future development and implementation of these systems to ensure they meet the highest standards of security, ethics, and clinical care.
Affiliation(s)
- Ayoub Bouguettaya
- Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, United States; School of Nursing and Midwifery, Monash University, Melbourne, Victoria, Australia
- Victoria Team
- School of Nursing and Midwifery, Monash University, Melbourne, Victoria, Australia
- Elizabeth M Stuart
- Jonathan Jaques Children's Cancer Institute, Miller Children's & Women's Hospital Long Beach, Long Beach, CA, United States
- Elias Aboujaoude
- Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, United States; Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
3
Cecil J, Kleine AK, Lermer E, Gaube S. Mental health practitioners' perceptions and adoption intentions of AI-enabled technologies: an international mixed-methods study. BMC Health Serv Res 2025; 25:556. [PMID: 40241059] [PMCID: PMC12001504] [DOI: 10.1186/s12913-025-12715-8]
Abstract
BACKGROUND: As mental health disorders continue to surge, exceeding the capacity of available therapeutic resources, technologies enabled by artificial intelligence (AI) offer promising solutions for supporting and delivering patient care. However, there is limited research on mental health practitioners' understanding, familiarity, and adoption intentions regarding these AI technologies. We therefore examined the extent to which practitioners' characteristics are associated with their intentions to learn about and use AI technologies in four application domains (diagnostics, treatment, feedback, and practice management). These characteristics include medical AI readiness and its subdimensions, AI anxiety and its subdimensions, technology self-efficacy, affinity for technology interaction, and professional identification.
METHODS: Mixed-methods data from N = 392 German and US practitioners, encompassing psychotherapists (in training), psychiatrists, and clinical psychologists, were analyzed. A deductive thematic approach was employed to evaluate practitioners' understanding of and familiarity with AI technologies. Additionally, structural equation modeling (SEM) was used to examine the relationship between practitioners' characteristics and their adoption intentions for different technologies.
RESULTS: Qualitative analysis revealed a substantial gap in practitioners' familiarity with AI applications in mental healthcare. While some practitioner characteristics were associated only with specific AI application areas (e.g., cognitive readiness with learning intentions for feedback tools), learning intention, ethical knowledge, and affinity for technology interaction were relevant across all four application areas, underscoring their importance in the adoption of AI technologies in mental healthcare.
CONCLUSION: This pre-registered study underscores the importance of recognizing the interplay of these diverse factors when designing training opportunities and, consequently, for a streamlined implementation of AI-enabled technologies in mental healthcare.
Affiliation(s)
- Julia Cecil
- Department of Psychology, LMU Center for Leadership and People Management, LMU Munich, Geschwister-Scholl-Platz 1, Munich, 80539, Germany
- Anne-Kathrin Kleine
- Department of Psychology, LMU Center for Leadership and People Management, LMU Munich, Geschwister-Scholl-Platz 1, Munich, 80539, Germany
- Eva Lermer
- Department of Psychology, LMU Center for Leadership and People Management, LMU Munich, Geschwister-Scholl-Platz 1, Munich, 80539, Germany
- Department of Business Psychology, Technical University of Applied Sciences Augsburg, An der Hochschule 1, Augsburg, 86161, Germany
- Susanne Gaube
- UCL Global Business School for Health, University College London, 7 Sidings St, London, E20 2AE, UK
4
Cruz-Gonzalez P, He AWJ, Lam EP, Ng IMC, Li MW, Hou R, Chan JNM, Sahni Y, Vinas Guasch N, Miller T, Lau BWM, Sánchez Vidaña DI. Artificial intelligence in mental health care: a systematic review of diagnosis, monitoring, and intervention applications. Psychol Med 2025; 55:e18. [PMID: 39911020] [PMCID: PMC12017374] [DOI: 10.1017/s0033291724003295]
Abstract
Artificial intelligence (AI) has recently been applied across mental health conditions and healthcare domains. This systematic review presents the application of AI in mental health in the domains of diagnosis, monitoring, and intervention. A database search (CCTR, CINAHL, PsycINFO, PubMed, and Scopus) was conducted from inception to February 2024, and a total of 85 relevant studies were included according to pre-established inclusion criteria. The AI methods most frequently used were support vector machines and random forests for diagnosis, machine learning for monitoring, and AI chatbots for intervention. AI tools appeared to be accurate in detecting, classifying, and predicting the risk of mental health conditions, as well as in predicting treatment response and monitoring the ongoing prognosis of mental health disorders. Future directions should focus on developing more diverse and robust datasets and on enhancing the transparency and interpretability of AI models to improve clinical practice.
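As a toy illustration of the two methods this review found most common for diagnosis, the sketch below trains a support vector machine and a random forest on synthetic "questionnaire" features. The data, feature meanings, and labels are invented for illustration and are not drawn from any reviewed study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic screening data: 200 "respondents", six hypothetical subscores.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy "case vs. control" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# The two classifiers the review reports as most frequently used for diagnosis.
models = {"SVM": SVC(kernel="rbf"),
          "random forest": RandomForestClassifier(random_state=0)}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name} held-out accuracy: {model.score(X_te, y_te):.2f}")
```

In real diagnostic studies the features would be clinical, behavioral, or imaging-derived variables, and evaluation would rely on cross-validation and clinically meaningful metrics (sensitivity, specificity, AUC) rather than raw accuracy.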
Affiliation(s)
- Pablo Cruz-Gonzalez
- Rehabilitation Research Institute of Singapore, Nanyang Technological University, Singapore, Singapore
- Aaron Wan-Jia He
- School of Public Health, LKS Faculty of Medicine, The University of Hong Kong, Hong Kong, Hong Kong
- Elly PoPo Lam
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Ingrid Man Ching Ng
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Mandy Wingman Li
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Rangchun Hou
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Jackie Ngai-Man Chan
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Yuvraj Sahni
- Department of Building Environment and Energy Engineering, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Nestor Vinas Guasch
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Tiev Miller
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Benson Wui-Man Lau
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Mental Health Research Center, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Dalinda Isabel Sánchez Vidaña
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Mental Health Research Center, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
5
Hwang Y, Wu Y. The influence of generative artificial intelligence on creative cognition of design students: a chain mediation model of self-efficacy and anxiety. Front Psychol 2025; 15:1455015. [PMID: 39931512] [PMCID: PMC11808137] [DOI: 10.3389/fpsyg.2024.1455015]
Abstract
Introduction: This study investigated the role of generative artificial intelligence (AI) in enhancing the creative cognition of design students, examining the mediating effects of self-efficacy and anxiety reduction.
Methods: A quantitative approach was employed, collecting data through online surveys from 121 design students at universities in southern China. The study used scales for AI knowledge and perception, self-efficacy, anxiety, and creative cognition, adapted from previous studies and rated on 5-point Likert scales. Data analysis was conducted using SPSS 24.0 for exploratory factor analysis and PROCESS v3.5 for mediation analysis.
Results: The findings confirmed that AI positively affected students' innovative thinking (β = 0.610, p < 0.001). Self-efficacy (standardized β = 0.256, 95% CI [0.140, 0.418], p < 0.001) and anxiety reduction (standardized β = 0.093, 95% CI [0.018, 0.195], p < 0.05) positively mediated the relationship between generative AI and creative cognition. Additionally, a serial mediation effect through self-efficacy and anxiety reduction was observed (standardized β = 0.053, 95% CI [0.012, 0.114], p < 0.05).
Discussion: Our empirical analysis demonstrates that AI positively affects design students' innovative thinking, with self-efficacy and anxiety reduction serving as significant mediators. These findings provide valuable insights for educators and policymakers, suggesting that AI-integrated design curricula can foster creative cognition, promote academic achievement, and enhance designer capabilities. Understanding AI's impact on students' creative processes is crucial for developing effective teaching strategies in today's evolving educational landscape.
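The serial ("chain") mediation the authors test with the PROCESS macro, generative AI use → self-efficacy → anxiety reduction → creative cognition, can be approximated with ordinary least squares and a bootstrap confidence interval. Everything below is simulated data with made-up effect sizes, shown only to make the model structure concrete:

```python
import numpy as np

# Simulated data; the variable names mirror the study's constructs only loosely.
rng = np.random.default_rng(42)
n = 500
ai = rng.normal(size=n)                             # perceived generative-AI use
eff = 0.5 * ai + rng.normal(size=n)                 # mediator 1: self-efficacy
anx = 0.4 * eff + rng.normal(size=n)                # mediator 2: anxiety reduction
creat = 0.3 * anx + 0.2 * ai + rng.normal(size=n)   # outcome: creative cognition

def ols(y, *xs):
    """Return the slope coefficients of y regressed on xs (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

def serial_indirect(idx):
    a1 = ols(eff[idx], ai[idx])[0]                        # AI -> self-efficacy
    d21 = ols(anx[idx], eff[idx], ai[idx])[0]             # self-efficacy -> anxiety reduction
    b2 = ols(creat[idx], anx[idx], eff[idx], ai[idx])[0]  # anxiety reduction -> creativity
    return a1 * d21 * b2                                  # serial indirect effect

# Percentile bootstrap CI, as PROCESS does for indirect effects.
boots = [serial_indirect(rng.integers(0, n, n)) for _ in range(1000)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"serial indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

With these simulated effect sizes the CI excludes zero, which is the criterion the study uses to conclude that serial mediation is present.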
Affiliation(s)
- Yi Wu
- School of International Communication and Arts, Hainan University, Haikou, China
6
Levkovich I. Is Artificial Intelligence the Next Co-Pilot for Primary Care in Diagnosing and Recommending Treatments for Depression? Med Sci (Basel) 2025; 13:8. [PMID: 39846703] [PMCID: PMC11755475] [DOI: 10.3390/medsci13010008]
Abstract
Depression poses significant challenges to global healthcare systems and impacts the quality of life of individuals and their family members. Recent advancements in artificial intelligence (AI) have had a transformative impact on the diagnosis and treatment of depression, with the potential to significantly enhance clinical decision-making and improve patient outcomes in healthcare settings. AI-powered tools can analyze extensive patient data, including medical records, genetic information, and behavioral patterns, to identify early warning signs of depression, thereby enhancing diagnostic accuracy. By recognizing subtle indicators that traditional assessments may overlook, these tools enable healthcare providers to make timely and precise diagnostic decisions, which is crucial in preventing the onset or escalation of depressive episodes. In terms of treatment, AI algorithms can assist in personalizing therapeutic interventions by predicting the effectiveness of various approaches for individual patients based on their unique characteristics and medical history, including recommending tailored treatment plans that account for the patient's specific symptoms. Such personalized strategies aim to optimize therapeutic outcomes and improve the overall efficiency of healthcare. This theoretical review synthesizes current evidence on AI applications in primary care depression management, offering a comprehensive analysis of both diagnostic and treatment-personalization capabilities. Alongside these advancements, we also address conflicting findings in the field and the biases that impose important limitations.
Affiliation(s)
- Inbar Levkovich
- Faculty of Education, Tel-Hai Academic College, Upper Galilee 2208, Israel
7
Wilhelm C, Steckelberg A, Rebitschek FG. Benefits and harms associated with the use of AI-related algorithmic decision-making systems by healthcare professionals: a systematic review. Lancet Reg Health Eur 2025; 48:101145. [PMID: 39687669] [PMCID: PMC11648885] [DOI: 10.1016/j.lanepe.2024.101145]
Abstract
Background: Despite notable advancements in artificial intelligence (AI) that enable complex systems to perform certain tasks more accurately than medical experts, the impact on patient-relevant outcomes remains uncertain. To address this gap, this systematic review assesses the benefits and harms associated with AI-related algorithmic decision-making (ADM) systems used by healthcare professionals, compared to standard care.
Methods: In accordance with the PRISMA guidelines, we included interventional and observational studies published as peer-reviewed full-text articles that met the following criteria: human patients; interventions involving algorithmic decision-making systems developed with and/or utilizing machine learning (ML); and outcomes describing patient-relevant benefits and harms that directly affect health and quality of life, such as mortality and morbidity. Studies that did not undergo preregistration, lacked a standard-of-care control, or pertained to systems that assist in the execution of actions (e.g., in robotics) were excluded. We searched MEDLINE, EMBASE, IEEE Xplore, and Google Scholar for studies published in the past decade up to 31 March 2024. We assessed risk of bias using Cochrane's RoB 2 and ROBINS-I tools, and reporting transparency with CONSORT-AI and TRIPOD-AI. Two researchers independently managed the processes and resolved conflicts through discussion. This review has been registered with PROSPERO (CRD42023412156) and the study protocol has been published.
Findings: Out of 2,582 records identified after deduplication, 18 randomized controlled trials (RCTs) and one cohort study met the inclusion criteria, covering specialties such as psychiatry, oncology, and internal medicine. Collectively, the studies included a median of 243 patients (IQR 124-828), with a median of 50.5% female participants (range 12.5-79.0, IQR 43.6-53.6) across intervention and control groups. Four studies were classified as having low risk of bias, seven showed some concerns, and another seven were assessed as having high or serious risk of bias. Reporting transparency varied considerably: six studies showed high compliance, four moderate, and five low compliance with CONSORT-AI or TRIPOD-AI. Twelve studies (63%) reported patient-relevant benefits. Of those with low risk of bias, interventions reduced length of stay in hospital and intensive care unit (10.3 vs. 13.0 days, p = 0.042; 6.3 vs. 8.4 days, p = 0.030), in-hospital mortality (9.0% vs. 21.3%, p = 0.018), and depression symptoms in non-complex cases (45.1% vs. 52.3%, p = 0.03). However, harms were frequently underreported, with only eight studies (42%) documenting adverse events. No study reported an increase in adverse events as a result of the interventions.
Interpretation: The current evidence on AI-related ADM systems provides limited insights into patient-relevant outcomes. Our findings underscore the essential need for rigorous evaluations of clinical benefits, reinforced compliance with methodological standards, and balanced consideration of both benefits and harms to ensure meaningful integration into healthcare practice.
Funding: This study did not receive any funding.
Affiliation(s)
- Christoph Wilhelm
- International Graduate Academy (InGrA), Institute of Health and Nursing Science, Medical Faculty, Martin Luther University Halle-Wittenberg, Magdeburger Str. 8, Halle (Saale) 06112, Germany
- Harding Center for Risk Literacy, Faculty of Health Sciences Brandenburg, University of Potsdam, Virchowstr. 2, Potsdam 14482, Germany
- Anke Steckelberg
- Institute of Health and Nursing Science, Medical Faculty, Martin Luther University Halle-Wittenberg, Magdeburger Str. 8, Halle (Saale) 06112, Germany
- Felix G. Rebitschek
- Harding Center for Risk Literacy, Faculty of Health Sciences Brandenburg, University of Potsdam, Virchowstr. 2, Potsdam 14482, Germany
- Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
8
Bahar L, Rego SA, Sadeh-Sharvit S. Detecting climate anxiety in therapy through natural language processing. Sci Rep 2024; 14:25976. [PMID: 39472482] [PMCID: PMC11522639] [DOI: 10.1038/s41598-024-75269-5]
Abstract
A well-documented consequence of global warming is increased psychological distress and climate anxiety, but data gaps limit action. While climate anxiety garners attention, its expression in therapy remains unexplored. Natural language processing (NLP) models can identify climate discussions in therapy, aiding therapists and informing training. This study analyzed 32,542 therapy sessions provided by 849 therapists to 7,916 clients in U.S. behavioral health programs between July 2020 and December 2022, yielding 1,722,273 labeled therapist-client micro-dialogues. Climate- and weather-related topics constituted a mere 0.3% of the sessions. Clients exhibiting higher levels of depressive or anxiety symptoms were less likely to discuss weather and climate than those with mild or no symptoms. These findings suggest a gap between the documented mental health impact of climate change and its representation in psychotherapy: although global warming is known to affect mental health, these issues are not yet adequately addressed in therapy. NLP models can provide valuable feedback to therapists and assist in identifying key moments and conversational topics to inform training and improve the effectiveness of therapy sessions.
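The study's classifier is not public, but the labeling task it performs can be illustrated with a deliberately simple keyword matcher over mock micro-dialogues (a real system would use a trained NLP model, and the example utterances below are invented):

```python
import re

# Toy lexicon of climate- and weather-related terms; case-insensitive.
CLIMATE_TERMS = re.compile(
    r"\b(climate|global warming|wildfires?|hurricanes?|floods?|flooding|droughts?)\b",
    re.I,
)

def is_climate_related(utterance: str) -> bool:
    """Flag an utterance that mentions a climate- or weather-related topic."""
    return bool(CLIMATE_TERMS.search(utterance))

dialogues = [
    "I've been sleeping badly since the wildfires started.",
    "Work has been stressful this month.",
    "The flooding back home keeps me up at night.",
    "My partner and I argued again.",
]
flagged = [d for d in dialogues if is_climate_related(d)]
rate = len(flagged) / len(dialogues)
print(f"climate-related share: {rate:.1%}")
```

Applied at scale, this kind of per-utterance labeling is what lets the study report that climate and weather topics appeared in only 0.3% of sessions.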
Affiliation(s)
- Simon A Rego
- Montefiore Medical Center/Albert Einstein College of Medicine, Bronx, NY, USA
- Shiri Sadeh-Sharvit
- Eleos Health, Needham, MA, USA
- Center for m2Health, Palo Alto University, 1791 Arastradero Rd, 94304, Palo Alto, CA, USA
9
Das KP, Gavade P. A review on the efficacy of artificial intelligence for managing anxiety disorders. Front Artif Intell 2024; 7:1435895. [PMID: 39479229] [PMCID: PMC11523650] [DOI: 10.3389/frai.2024.1435895]
Abstract
Anxiety disorders are psychiatric conditions characterized by prolonged and generalized anxiety in response to various events or situations, and they are currently regarded as the most widespread psychiatric disorders globally. Medication and different types of psychotherapy are the primary therapeutic modalities in clinical practice, and combining the two approaches is known to yield more significant benefits than medication alone. Nevertheless, there is a lack of resources and limited availability of psychotherapy options in underdeveloped areas. Psychotherapy methods encompass relaxation techniques, controlled breathing exercises, visualization exercises, controlled exposure exercises, and cognitive interventions such as challenging negative thoughts. These methods are vital in the treatment of anxiety disorders, but executing them proficiently can be demanding. Moreover, individuals with distinct anxiety disorders are prescribed medications that may cause withdrawal symptoms in some instances. There is also inadequate availability of face-to-face psychotherapy and a restricted capacity to predict and monitor the health, behavioral, and environmental aspects of individuals with anxiety disorders during the initial phases. In recent years, there has been notable progress in developing and utilizing artificial intelligence (AI) based applications and environments to improve the precision and sensitivity of diagnosing and treating various categories of anxiety disorders. This study therefore aims to establish the efficacy of AI-enabled environments in addressing existing challenges in managing anxiety disorders and reducing reliance on medication, and to investigate the potential advantages, issues, and opportunities of integrating AI-assisted healthcare for anxiety disorders and enabling personalized therapy.
Affiliation(s)
- K. P. Das
- Department of Computer Science, Christ University, Bengaluru, India
- P. Gavade
- Independent Practitioner, San Francisco, CA, United States
10
Singh S, Gambill JL, Attalla M, Fatima R, Gill AR, Siddiqui HF. Evaluating the Clinical Validity and Reliability of Artificial Intelligence-Enabled Diagnostic Tools in Neuropsychiatric Disorders. Cureus 2024; 16:e71651. [PMID: 39553014] [PMCID: PMC11567685] [DOI: 10.7759/cureus.71651]
Abstract
Neuropsychiatric disorders (NPDs) pose a substantial burden on the healthcare system. The major challenge in diagnosing NPDs is the subjectivity of physician assessment, which can lead to inaccurate and delayed diagnoses. Recent studies suggest that integrating artificial intelligence (AI) into neuropsychiatry could revolutionize the field by precisely and promptly diagnosing complex neurological and mental health disorders and providing individualized management strategies. In this narrative review, the authors examine the current status of AI tools in assessing neuropsychiatric disorders and evaluate their validity and reliability in the existing literature. The article explores the use of machine learning on various datasets, including MRI scans, EEG, facial expressions, social media posts, texts, and laboratory samples, for the accurate diagnosis of neuropsychiatric conditions. Recent trials in various neuropsychiatric disorders, and the future scope they suggest for the utility and application of AI, are also discussed. Overall, machine learning has proven feasible and applicable in neuropsychiatry, and it is time for research to translate into clinical settings for favorable patient outcomes. Future trials should focus on presenting higher-quality evidence to support broader adoption and on establishing guidelines that help healthcare providers maintain standards.
Affiliation(s)
- Satneet Singh
- Psychiatry, Hampshire and Isle of Wight Healthcare NHS Foundation Trust, Southampton, GBR
- Mary Attalla
- Medicine, Saba University School of Medicine, The Bottom, NLD
- Rida Fatima
- Mental Health, Cwm Taf Morgannwg University Health Board, Pontyclun, GBR
- Amna R Gill
- Psychiatry, HSE (Health Service Executive) Ireland, Dublin, IRL
- Humza F Siddiqui
- Internal Medicine, Jinnah Postgraduate Medical Centre, Karachi, PAK
11
Joseph AP, Babu A. Transference and the psychological interplay in AI-enhanced mental healthcare. Front Psychiatry 2024; 15:1460469. [PMID: 39224481] [PMCID: PMC11366565] [DOI: 10.3389/fpsyt.2024.1460469]
Affiliation(s)
- Akhil P. Joseph
- School of Social Work, Marian College Kuttikkanam Autonomous, Kuttikkanam, India
- Department of Sociology & Social Work, Christ (Deemed to be University), Bengaluru, India
- Anithamol Babu
- School of Social Work, Marian College Kuttikkanam Autonomous, Kuttikkanam, India
- School of Social Work, Tata Institute of Social Sciences, Jalukbari, India
12
Alhuwaydi AM. Exploring the Role of Artificial Intelligence in Mental Healthcare: Current Trends and Future Directions - A Narrative Review for a Comprehensive Insight. Risk Manag Healthc Policy 2024; 17:1339-1348. [PMID: 38799612] [PMCID: PMC11127648] [DOI: 10.2147/rmhp.s461562]
Abstract
Mental health is an essential component of the health and well-being of individuals and communities, and it is critical to the social and economic development of any country. Mental healthcare is currently in an era of health sector transformation, with emerging technologies such as artificial intelligence (AI) reshaping the screening, diagnosis, and treatment of psychiatric illness. This narrative review discusses the current landscape and role of AI in mental healthcare, including screening, diagnosis, and treatment, and highlights the key challenges, limitations, and prospects of AI in providing mental healthcare based on the existing literature. The literature search for this narrative review drew on PubMed, the Saudi Digital Library (SDL), Google Scholar, Web of Science, and IEEE Xplore, and we included only English-language articles published in the last five years. Keywords were used in combination with the Boolean operators "AND" and "OR": "Artificial intelligence", "Machine learning", "Deep learning", "Early diagnosis", "Treatment", "Interventions", "Ethical consideration", and "Mental healthcare". Our review revealed that, equipped with predictive analytics capabilities, AI can improve treatment planning by predicting an individual's response to various interventions. Predictive analytics, which uses historical data to formulate preventative interventions, aligns with the move toward individualized and preventive mental healthcare. In the screening and diagnostic domains, AI subsets such as machine learning and deep learning have been shown to analyze various mental health datasets and predict the patterns associated with various mental health problems. However, few studies have evaluated collaboration between healthcare professionals and AI in delivering mental healthcare, even though these sensitive problems require empathy, human connection, and holistic, personalized, and multidisciplinary approaches. Ethical issues, cybersecurity, a lack of diversity in data analytics, cultural sensitivity, and language barriers remain concerns for implementing this futuristic approach in mental healthcare. Future comparative trials with larger sample sizes and datasets are therefore warranted to evaluate different AI models used in mental healthcare across regions and to fill the existing knowledge gaps.
Affiliation(s)
- Ahmed M Alhuwaydi
- Department of Internal Medicine, Division of Psychiatry, College of Medicine, Jouf University, Sakaka, Saudi Arabia
13
Spinrad A, Taylor CB, Ruzek JI, Jefroykin S, Friedlander T, Feleke I, Lev-Ari H, Szapiro N, Sadeh-Sharvit S. Action recommendations review in community-based therapy and depression and anxiety outcomes: a machine learning approach. BMC Psychiatry 2024;24:133. [PMID: 38365635] [PMCID: PMC10870574] [DOI: 10.1186/s12888-024-05570-0] [Received: 08/28/2023] [Accepted: 01/30/2024]
Abstract
BACKGROUND While the positive impact of homework completion on symptom alleviation is well-established, the pivotal role of therapists in reviewing these assignments has been under-investigated. This study examined therapists' practice of assigning and reviewing action recommendations in therapy sessions, and how it correlates with patients' depression and anxiety outcomes. METHODS We analyzed 2,444 therapy sessions from community-based behavioral health programs. Machine learning models and natural language processing techniques were deployed to discern action recommendations and their subsequent reviews. The extent of the review was quantified by measuring the proportion of session dialogues reviewing action recommendations, a metric we refer to as "review percentage". Using Generalized Estimating Equations modeling, we evaluated the correlation between this metric and changes in clients' depression and anxiety scores. RESULTS Our models achieved 76% precision in capturing action recommendations and 71.1% precision in capturing reviews of them. Using these models, we found that therapists typically provided clients with one to eight action recommendations per session to complete outside of therapy. However, only half of the sessions included a review of previously assigned action recommendations. We identified a significant interaction between the initial depression score and the review percentage (p = 0.045). When adjusting for this relationship, the review percentage was positively and significantly associated with a reduction in depression score (p = 0.032). This suggests that more frequent review of action recommendations in therapy relates to greater improvement in depression symptoms. Further analyses highlighted this association for mild depression (p = 0.024), but not for anxiety or moderate to severe depression.
CONCLUSIONS A positive association was observed between therapists' review of previous sessions' action recommendations and improved treatment outcomes among clients with mild depression, highlighting the possible advantages of consistently revisiting therapeutic homework in real-world therapy settings. Results underscore the importance of developing effective strategies to help therapists maintain continuity between therapy sessions, potentially enhancing the impact of therapy.
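The session-level "review percentage" metric described above can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the paper defines the metric (proportion of session dialogue reviewing action recommendations) but its turn labels come from proprietary NLP models, so the labels and function name here are hypothetical.

```python
# Hypothetical per-turn labels; in the study, comparable labels are produced
# by NLP models trained to detect action recommendations and reviews of them.
REVIEW = "review_of_action_recommendation"
OTHER = "other"

def review_percentage(turn_labels):
    """'Review percentage': the proportion of a session's dialogue turns
    that review previously assigned action recommendations."""
    if not turn_labels:
        return 0.0
    reviewing = sum(1 for label in turn_labels if label == REVIEW)
    return reviewing / len(turn_labels)

# Toy session of 8 turns, 2 of which review prior action recommendations.
session = [OTHER, OTHER, REVIEW, OTHER, REVIEW, OTHER, OTHER, OTHER]
print(review_percentage(session))  # 0.25
```

In the study, such per-session values were then related to changes in depression and anxiety scores via Generalized Estimating Equations, which account for repeated sessions nested within clients.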
Affiliation(s)
- Amit Spinrad
- Eleos Health, 117 Kendrick Street, Suite 300, Needham, MA, 02494, USA
- C Barr Taylor
- Center for m2Health, Palo Alto University, Palo Alto, CA, USA
- Department of Psychiatry, Stanford Medical Center, Stanford, CA, USA
- Josef I Ruzek
- Center for m2Health, Palo Alto University, Palo Alto, CA, USA
- Department of Psychiatry, Stanford Medical Center, Stanford, CA, USA
- Samuel Jefroykin
- Eleos Health, 117 Kendrick Street, Suite 300, Needham, MA, 02494, USA
- Tamar Friedlander
- Eleos Health, 117 Kendrick Street, Suite 300, Needham, MA, 02494, USA
- Israela Feleke
- Eleos Health, 117 Kendrick Street, Suite 300, Needham, MA, 02494, USA
- Hila Lev-Ari
- Eleos Health, 117 Kendrick Street, Suite 300, Needham, MA, 02494, USA
- Natalia Szapiro
- Eleos Health, 117 Kendrick Street, Suite 300, Needham, MA, 02494, USA
- Shiri Sadeh-Sharvit
- Eleos Health, 117 Kendrick Street, Suite 300, Needham, MA, 02494, USA
- Center for m2Health, Palo Alto University, Palo Alto, CA, USA