51. Rogan J, Bucci S, Firth J. Health Care Professionals' Views on the Use of Passive Sensing, AI, and Machine Learning in Mental Health Care: Systematic Review With Meta-Synthesis. JMIR Ment Health 2024;11:e49577. PMID: 38261403; PMCID: PMC10848143; DOI: 10.2196/49577.
Abstract
BACKGROUND Mental health difficulties are highly prevalent worldwide. Passive sensing technologies and applied artificial intelligence (AI) methods can provide an innovative means of supporting the management of mental health problems and enhancing the quality of care. However, the views of stakeholders are important in understanding the potential barriers to and facilitators of their implementation. OBJECTIVE This study aims to review, critically appraise, and synthesize qualitative findings relating to the views of mental health care professionals on the use of passive sensing and AI in mental health care. METHODS A systematic search of qualitative studies was performed using 4 databases. A meta-synthesis approach was used, whereby studies were analyzed using an inductive thematic analysis approach within a critical realist epistemological framework. RESULTS Overall, 10 studies met the eligibility criteria. The 3 main themes were uses of passive sensing and AI in clinical practice, barriers to and facilitators of use in practice, and consequences for service users. A total of 5 subthemes were identified: barriers, facilitators, empowerment, risk to well-being, and data privacy and protection issues. CONCLUSIONS Although clinicians are open-minded about the use of passive sensing and AI in mental health care, important factors to consider are service user well-being, clinician workloads, and therapeutic relationships. Service users and clinicians must be involved in the development of digital technologies and systems to ensure ease of use. The development of, and training in, clear policies and guidelines on the use of passive sensing and AI in mental health care, including risk management and data security procedures, will also be key to facilitating clinician engagement. Mechanisms for clinicians and service users to provide feedback on how passive sensing and AI are being received in practice should also be considered.
TRIAL REGISTRATION PROSPERO International Prospective Register of Systematic Reviews CRD42022331698; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=331698.
Affiliation(s)
- Jessica Rogan
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Sciences, The University of Manchester, Manchester, United Kingdom
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
- Sandra Bucci
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Sciences, The University of Manchester, Manchester, United Kingdom
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
- Joseph Firth
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Sciences, The University of Manchester, Manchester, United Kingdom
52. Singh V, Sarkar S, Gaur V, Grover S, Singh OP. Clinical Practice Guidelines on using artificial intelligence and gadgets for mental health and well-being. Indian J Psychiatry 2024;66:S414-S419. PMID: 38445270; PMCID: PMC10911327; DOI: 10.4103/indianjpsychiatry.indianjpsychiatry_926_23.
Affiliation(s)
- Vipul Singh
- Department of Psychiatry, Government Medical College, Kannauj, Uttar Pradesh, India
- Sharmila Sarkar
- Department of Psychiatry, Calcutta National Medical College, Kolkata, West Bengal, India
- Vikas Gaur
- Department of Psychiatry, Jaipur National University Institute for Medical Sciences and Research Centre, Jaipur, Rajasthan, India
- Sandeep Grover
- Department of Psychiatry, Post Graduate Institute of Medical Education and Research, Chandigarh, India
- Om Prakash Singh
- Department of Psychiatry, Midnapore Medical College, Midnapore, West Bengal, India
53. Irmak-Yazicioglu MB, Arslan A. Navigating the Intersection of Technology and Depression Precision Medicine. Adv Exp Med Biol 2024;1456:401-426. PMID: 39261440; DOI: 10.1007/978-981-97-4402-2_20.
Abstract
This chapter primarily focuses on the progress in depression precision medicine with specific emphasis on the integrative approaches that include artificial intelligence and other data, tools, and technologies. After the description of the concept of precision medicine and a comparative introduction to depression precision medicine with cancer and epilepsy, new avenues of depression precision medicine derived from integrated artificial intelligence and other sources will be presented. Additionally, less advanced areas, such as comorbidity between depression and cancer, will be examined.
Collapse
Affiliation(s)
- Ayla Arslan
- Department of Molecular Biology and Genetics, Üsküdar University, İstanbul, Türkiye.
54. Zhang M, Scandiffio J, Younus S, Jeyakumar T, Karsan I, Charow R, Salhia M, Wiljer D. The Adoption of AI in Mental Health Care-Perspectives From Mental Health Professionals: Qualitative Descriptive Study. JMIR Form Res 2023;7:e47847. PMID: 38060307; PMCID: PMC10739240; DOI: 10.2196/47847.
Abstract
BACKGROUND Artificial intelligence (AI) is transforming the mental health care environment. AI tools are increasingly accessed by clients and service users. Mental health professionals must be prepared not only to use AI but also to have conversations about it when delivering care. Despite the potential for AI to enable more efficient, reliable, and higher-quality care delivery, there is a persistent gap among mental health professionals in the adoption of AI. OBJECTIVE A needs assessment was conducted among mental health professionals to (1) understand the learning needs of the workforce and their attitudes toward AI and (2) inform the development of AI education curricula and knowledge translation products. METHODS A qualitative descriptive approach was taken to explore the needs of mental health professionals regarding their adoption of AI through semistructured interviews. To achieve maximum variation in sampling, mental health professionals (eg, psychiatrists, mental health nurses, educators, scientists, and social workers) in various settings across Ontario (eg, urban and rural, public and private sector, and clinical and research) were recruited. RESULTS A total of 20 individuals were recruited. Participants included practitioners (9/20, 45% social workers and 1/20, 5% mental health nurses), educator scientists (5/20, 25% with dual roles as professors/lecturers and researchers), and practitioner scientists (3/20, 15% with dual roles as researchers and psychiatrists and 2/20, 10% with dual roles as researchers and mental health nurses). Four major themes emerged: (1) fostering practice change and building self-efficacy to integrate AI into patient care; (2) promoting system-level change to accelerate the adoption of AI in mental health; (3) addressing the importance of organizational readiness as a catalyst for AI adoption; and (4) ensuring that mental health professionals have the education, knowledge, and skills to harness AI in optimizing patient care.
CONCLUSIONS AI technologies are starting to emerge in mental health care. Although many digital tools, web-based services, and mobile apps are designed using AI algorithms, mental health professionals have generally been slower in the adoption of AI. As indicated by this study's findings, the implications are 3-fold. At the individual level, digital professionals must see the value in digitally compassionate tools that retain a humanistic approach to care. For mental health professionals, resistance toward AI adoption must be acknowledged through educational initiatives to raise awareness about the relevance, practicality, and benefits of AI. At the organizational level, digital professionals and leaders must collaborate on governance and funding structures to promote employee buy-in. At the societal level, digital and mental health professionals should collaborate in the creation of formal AI training programs specific to mental health to address knowledge gaps. This study promotes the design of relevant and sustainable education programs to support the adoption of AI within the mental health care sphere.
Affiliation(s)
- Tharshini Jeyakumar
- University Health Network, Toronto, ON, Canada
- Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
- Rebecca Charow
- University Health Network, Toronto, ON, Canada
- Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
- Mohammad Salhia
- Rotman School of Management, University of Toronto, Toronto, ON, Canada
- David Wiljer
- University Health Network, Toronto, ON, Canada
- Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
- Department of Medicine, University of Toronto, Toronto, ON, Canada
55. Pozuelo JR, Moffett BD, Davis M, Stein A, Cohen H, Craske MG, Maritze M, Makhubela P, Nabulumba C, Sikoti D, Kahn K, Sodi T, van Heerden A, O'Mahen HA. User-Centered Design of a Gamified Mental Health App for Adolescents in Sub-Saharan Africa: Multicycle Usability Testing Study. JMIR Form Res 2023;7:e51423. PMID: 38032691; PMCID: PMC10722378; DOI: 10.2196/51423.
Abstract
BACKGROUND There is an urgent need for scalable psychological treatments to address adolescent depression in low-resource settings. Digital mental health interventions have many potential advantages, but few have been specifically designed for or rigorously evaluated with adolescents in sub-Saharan Africa. OBJECTIVE This study had 2 main objectives. The first was to describe the user-centered development of a smartphone app that delivers behavioral activation (BA) to treat depression among adolescents in rural South Africa and Uganda. The second was to summarize the findings from multicycle usability testing. METHODS An iterative user-centered agile design approach was used to co-design the app to ensure that it was engaging, culturally relevant, and usable for the target populations. An array of qualitative methods, including focus group discussions, in-depth individual interviews, participatory workshops, usability testing, and extensive expert consultation, was used to iteratively refine the app throughout each phase of development. RESULTS A total of 160 adolescents from rural South Africa and Uganda were involved in the development process. The app was built to be consistent with the principles of BA and supported by brief weekly phone calls from peer mentors who would help users overcome barriers to engagement. Drawing on the findings of the formative work, we applied a narrative game format to develop the Kuamsha app. This approach taught the principles of BA using storytelling techniques and game design elements. The stories were developed collaboratively with adolescents from the study sites and included decision points that allowed users to shape the narrative, character personalization, in-app points, and notifications. Each story consists of 6 modules ("episodes") played in sequential order, and each covers different BA skills. Between modules, users were encouraged to work on weekly activities and report on their progress and mood as they completed these activities. The results of the multicycle usability testing showed that the Kuamsha app was acceptable in terms of usability and engagement. CONCLUSIONS The Kuamsha app uniquely delivered BA for adolescent depression via an interactive narrative game format tailored to the South African and Ugandan contexts. Further studies are currently underway to examine the intervention's feasibility, acceptability, and efficacy in reducing depressive symptoms.
Affiliation(s)
- Julia R Pozuelo
- Department of Global Health and Social Medicine, Harvard Medical School, Harvard University, Boston, MA, United States
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Bianca D Moffett
- MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Alan Stein
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Africa Health Research Institute, KwaZulu Natal, South Africa
- Halley Cohen
- Lincoln College, University of Oxford, Oxford, United Kingdom
- Michelle G Craske
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, Los Angeles, CA, United States
- Meriam Maritze
- MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Princess Makhubela
- MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Kathleen Kahn
- MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Umeå Centre for Global Health Research, Division of Epidemiology and Global Health, Department of Public Health and Clinical Medicine, Umeå University, Umeå, Sweden
- Tholene Sodi
- SAMRC-DSI/NRF-UL SARChI Research Chair in Mental Health and Society, University of Limpopo, Limpopo, South Africa
- Alastair van Heerden
- Center for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- SAMRC/Wits Developmental Pathways for Health Research Unit, Department of Paediatrics, School of Clinical Medicine, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Heather A O'Mahen
- Mood Disorders Centre, Department of Psychology, University of Exeter, Exeter, United Kingdom
56. Wilhelmy S, Giupponi G, Groß D, Eisendle K, Conca A. A shift in psychiatry through AI? Ethical challenges. Ann Gen Psychiatry 2023;22:43. PMID: 37919759; PMCID: PMC10623776; DOI: 10.1186/s12991-023-00476-9.
Abstract
The digital transformation has made its way into many areas of society, including medicine. While AI-based systems are widespread in medical disciplines, their use in psychiatry is progressing more slowly. However, they promise to revolutionize psychiatric practice in terms of prevention options, diagnostics, or even therapy. Psychiatry is in the midst of this digital transformation, so the question is no longer "whether" to use technology, but "how" we can use it to achieve goals of progress or improvement. The aim of this article is to argue that this revolution brings not only new opportunities but also new ethical challenges for psychiatry, especially with regard to safety, responsibility, autonomy, and transparency. As an example, we address the doctor-patient relationship in psychiatry, where digitization is also leading to ethically relevant changes. Ethical reflection on the use of AI systems offers the opportunity to accompany these changes carefully in order to take advantage of the benefits they bring. The focus should therefore always be on balancing what is technically possible with what is ethically necessary.
Affiliation(s)
- Saskia Wilhelmy
- Institute for History, Theory and Ethics in Medicine, University Hospital, RWTH Aachen University, Wendlingweg 2, 52074, Aachen, Germany
- Giancarlo Giupponi
- Academic Teaching Department of Psychiatry, Central Hospital, Sanitary Agency of South Tyrol, Via Lorenz Böhler 5, 39100, Bolzano, Italy
- Dominik Groß
- Institute for History, Theory and Ethics in Medicine, University Hospital, RWTH Aachen University, Wendlingweg 2, 52074, Aachen, Germany
- Klaus Eisendle
- Institute of General Practice and Public Health, Provincial College for Health Professions Claudiana, Lorenz-Böhler-Straße 13, 39100, Bolzano, Italy
- Andreas Conca
- Academic Teaching Department of Psychiatry, Central Hospital, Sanitary Agency of South Tyrol, Via Lorenz Böhler 5, 39100, Bolzano, Italy
57. Alanzi T, Alotaibi R, Alajmi R, Bukhamsin Z, Fadaq K, AlGhamdi N, Bu Khamsin N, Alzahrani L, Abdullah R, Alsayer R, Al Muarfaj AM, Alanzi N. Barriers and Facilitators of Artificial Intelligence in Family Medicine: An Empirical Study With Physicians in Saudi Arabia. Cureus 2023;15:e49419. PMID: 38149160; PMCID: PMC10750222; DOI: 10.7759/cureus.49419.
Abstract
BACKGROUND Artificial intelligence (AI) is a novel technology that has been widely acknowledged for its potential to improve process efficiency across industries. However, because of its novelty, its barriers and facilitators in healthcare are not completely understood. STUDY PURPOSE The purpose of this study is to explore the intricate landscape of AI use in family medicine, aiming to uncover the factors that either hinder or enable its successful adoption. METHODS A cross-sectional survey design is adopted in this study. The questionnaire included 10 factors (performance expectancy, effort expectancy, social influence, facilitating conditions, behavioral intention, trust, perceived privacy risk, personal innovativeness, ethical concerns, and facilitators) affecting the acceptance of AI. A total of 157 family physicians participated in the online survey. RESULTS Effort expectancy (μ = 3.85) and facilitating conditions (μ = 3.77) were identified as strong influencing factors. Access to data (μ = 4.33), increased computing power (μ = 3.92), and telemedicine (μ = 3.78) were identified as major facilitators; regulatory support (μ = 2.29) and interoperability standards (μ = 2.71) were identified as barriers, along with privacy and ethical concerns. Younger individuals tend to have more positive attitudes and expectations toward AI-enabled assistants compared to older participants (p < .05). Perceived privacy risk is negatively correlated with all factors. CONCLUSION Although there are various barriers and concerns regarding the use of AI in healthcare, the preference for AI use in healthcare, especially family medicine, is increasing.
Affiliation(s)
- Turki Alanzi
- Department of Health Information Management and Technology, College of Public Health, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Raghad Alotaibi
- Department of Family Medicine, King Fahad Medical City, Riyadh, SAU
- Rahaf Alajmi
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Zainab Bukhamsin
- College of Clinical Pharmacy, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Khadija Fadaq
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Nouf AlGhamdi
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Ruya Abdullah
- Faculty of Medicine, Ibn Sina National College, Jeddah, SAU
- Razan Alsayer
- College of Medicine, Northern Border University, Arar, SAU
- Afrah M Al Muarfaj
- Department of Health Affairs, General Directorate of Health Affairs in Assir Region, Ministry of Health, Abha, SAU
- Nouf Alanzi
- Department of Clinical Laboratory Sciences, College of Applied Medical Sciences, Jouf University, Sakakah, SAU
58. Yu J, Shen N, Conway S, Hiebert M, Lai-Zhao B, McCann M, Mehta RR, Miranda M, Putterman C, Santisteban JA, Thomson N, Young C, Chiuccariello L, Hunter K, Hill S. A holistic approach to integrating patient, family, and lived experience voices in the development of the BrainHealth Databank: a digital learning health system to enable artificial intelligence in the clinic. Front Health Serv 2023;3:1198195. PMID: 37927443; PMCID: PMC10625404; DOI: 10.3389/frhs.2023.1198195.
Abstract
Artificial intelligence, machine learning, and digital health innovations have tremendous potential to advance patient-centred, data-driven mental healthcare. To enable the clinical application of such innovations, the Krembil Centre for Neuroinformatics at the Centre for Addiction and Mental Health, Canada's largest mental health hospital, embarked on a journey to co-create a digital learning health system called the BrainHealth Databank (BHDB). Working with clinicians, scientists, and administrators alongside patients, families, and persons with lived experience (PFLE), this hospital-wide team has adopted a systems approach that integrates clinical and research data and practices to improve care and accelerate research. PFLE engagement was intentional and initiated at the conception stage of the BHDB to help ensure the initiative would achieve its goal of understanding the community's needs while improving patient care and experience. The BHDB team implemented an evolving, dynamic strategy to support continuous and active PFLE engagement in all aspects of the BHDB that directly affect patients and families. We describe PFLE consultation, co-design, and partnership in various BHDB activities and projects. In all three examples, we discuss the factors contributing to successful PFLE engagement, share lessons learned, and highlight areas for growth and improvement. By sharing how the BHDB navigated and fostered PFLE engagement, we hope to motivate and inspire the health informatics community to collectively chart their paths in PFLE engagement to support advancements in digital health and artificial intelligence.
Affiliation(s)
- Joanna Yu
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, ON, Canada
- Health and Technology, Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Nelson Shen
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, ON, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
- AMS Healthcare, Toronto, ON, Canada
- Susan Conway
- Centre for Addictions and Mental Health, Toronto, ON, Canada
- Melissa Hiebert
- Centre for Addictions and Mental Health, Toronto, ON, Canada
- Benson Lai-Zhao
- Centre for Addictions and Mental Health, Toronto, ON, Canada
- Miriam McCann
- Centre for Addictions and Mental Health, Toronto, ON, Canada
- Rohan R. Mehta
- Centre for Addictions and Mental Health, Toronto, ON, Canada
- Morena Miranda
- Centre for Addictions and Mental Health, Toronto, ON, Canada
- Connie Putterman
- Centre for Addictions and Mental Health, Toronto, ON, Canada
- CanChild, Hamilton, ON, Canada
- CHILD-BRIGHT Network, Montreal, QC, Canada
- Kids Brain Health Network, Burnaby, BC, Canada
- Province of Ontario Neurodevelopmental (POND) Network, Toronto, ON, Canada
- Jose Arturo Santisteban
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, ON, Canada
- Nicole Thomson
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
- Courtney Young
- Centre for Addictions and Mental Health, Toronto, ON, Canada
- Kimberly Hunter
- Centre for Addictions and Mental Health, Toronto, ON, Canada
- Sean Hill
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, ON, Canada
- Health and Technology, Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Department of Psychiatry, University of Toronto, Toronto, ON, Canada
59. Nashwan AJ, Gharib S, Alhadidi M, El-Ashry AM, Alamgir A, Al-Hassan M, Khedr MA, Dawood S, Abufarsakh B. Harnessing Artificial Intelligence: Strategies for Mental Health Nurses in Optimizing Psychiatric Patient Care. Issues Ment Health Nurs 2023;44:1020-1034. PMID: 37850937; DOI: 10.1080/01612840.2023.2263579.
Abstract
This narrative review explores the transformative impact of artificial intelligence (AI) on mental health nursing, particularly in enhancing psychiatric patient care. AI technologies present new strategies for early detection, risk assessment, and improving treatment adherence in mental health. They also facilitate remote patient monitoring, bridge geographical gaps, and support clinical decision-making. The evolution of virtual mental health assistants and AI-enhanced therapeutic interventions is also discussed. These technological advancements reshape nurse-patient interactions while ensuring personalized, efficient, and high-quality care. The review also addresses the ethical and responsible use of AI in mental health nursing, emphasizing patient privacy, data security, and the balance between human interaction and AI tools. As AI applications in mental health care continue to evolve, this review encourages continued innovation while advocating for responsible implementation, thereby optimally leveraging the potential of AI in mental health nursing.
Affiliation(s)
- Abdulqadir J Nashwan
- Nursing Department, Hamad Medical Corporation, Doha, Qatar
- Department of Public Health, College of Health Sciences, QU Health, Qatar University, Doha, Qatar
- Suzan Gharib
- Nursing Department, Al-Khaldi Hospital, Amman, Jordan
- Majdi Alhadidi
- Psychiatric & Mental Health Nursing, Faculty of Nursing, Al-Zaytoonah University of Jordan, Amman, Jordan
- Shaimaa Dawood
- Faculty of Nursing, Alexandria University, Alexandria, Egypt
60. Jin KW, Li Q, Xie Y, Xiao G. Artificial intelligence in mental healthcare: an overview and future perspectives. Br J Radiol 2023;96:20230213. PMID: 37698582; PMCID: PMC10546438; DOI: 10.1259/bjr.20230213.
Abstract
Artificial intelligence is disrupting the field of mental healthcare through applications in computational psychiatry, which leverages quantitative techniques to inform our understanding, detection, and treatment of mental illnesses. This paper provides an overview of artificial intelligence technologies in modern mental healthcare and surveys recent advances made by researchers, focusing on the nascent field of digital psychiatry. We also consider the ethical implications of artificial intelligence playing a greater role in mental healthcare.
Affiliation(s)
- Qiwei Li
- Department of Mathematical Sciences, The University of Texas at Dallas, Richardson, Texas, United States
61. Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, Aldairem A, Alrashed M, Bin Saleh K, Badreldin HA, Al Yami MS, Al Harbi S, Albekairy AM. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ 2023;23:689. PMID: 37740191; PMCID: PMC10517477; DOI: 10.1186/s12909-023-04698-z.
Abstract
INTRODUCTION Healthcare systems are complex and challenging for all stakeholders, but artificial intelligence (AI) has transformed various fields, including healthcare, with the potential to improve patient care and quality of life. Rapid AI advancements can revolutionize healthcare by integrating it into clinical practice. Reporting AI's role in clinical practice is crucial for successful implementation by equipping healthcare providers with essential knowledge and tools. RESEARCH SIGNIFICANCE This review article provides a comprehensive and up-to-date overview of the current state of AI in clinical practice, including its potential applications in disease diagnosis, treatment recommendations, and patient engagement. It also discusses the associated challenges, covering ethical and legal considerations and the need for human expertise. By doing so, it enhances understanding of AI's significance in healthcare and supports healthcare organizations in effectively adopting AI technologies. MATERIALS AND METHODS The current investigation analyzed the use of AI in the healthcare system with a comprehensive review of relevant indexed literature, such as PubMed/Medline, Scopus, and EMBASE, with no time constraints but limited to articles published in English. The focused question explores the impact of applying AI in healthcare settings and the potential outcomes of this application. RESULTS Integrating AI into healthcare holds excellent potential for improving disease diagnosis, treatment selection, and clinical laboratory testing. AI tools can leverage large datasets and identify patterns to surpass human performance in several healthcare aspects. AI offers increased accuracy, reduced costs, and time savings while minimizing human errors. It can revolutionize personalized medicine, optimize medication dosages, enhance population health management, establish guidelines, provide virtual health assistants, support mental health care, improve patient education, and influence patient-physician trust. CONCLUSION AI can be used to diagnose diseases, develop personalized treatment plans, and assist clinicians with decision-making. Rather than simply automating tasks, AI is about developing technologies that can enhance patient care across healthcare settings. However, challenges related to data privacy, bias, and the need for human expertise must be addressed for the responsible and effective implementation of AI in healthcare.
Affiliation(s)
- Shuroug A Alowais: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Sahar S Alghamdi: King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia; Department of Pharmaceutical Sciences, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Nada Alsuhebany: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Tariq Alqahtani: King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia; Department of Pharmaceutical Sciences, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Abdulrahman I Alshaya: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Sumaya N Almohareb: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Atheer Aldairem: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Mohammed Alrashed: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Khalid Bin Saleh: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Hisham A Badreldin: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Majed S Al Yami: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Shmeylan Al Harbi: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Abdulkareem M Albekairy: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
62. Espejo G, Reiner W, Wenzinger M. Exploring the Role of Artificial Intelligence in Mental Healthcare: Progress, Pitfalls, and Promises. Cureus 2023; 15:e44748. PMID: 37809254; PMCID: PMC10556257; DOI: 10.7759/cureus.44748.
Abstract
The rise of artificial intelligence (AI) heralds a significant revolution in healthcare, particularly in mental health. AI's potential spans diagnostic algorithms, data analysis from diverse sources, and real-time patient monitoring. It is essential for clinicians to remain informed about AI's progress and limitations. The inherent complexity of mental disorders, limited objective data, and retrospective studies pose challenges to the application of AI. Privacy concerns, bias, and the risk of AI replacing human care also loom. Regulatory oversight and physician involvement are needed for equitable AI implementation. AI integration and use in psychotherapy and other services are on the horizon. Patient trust, feasibility, clinical efficacy, and clinician acceptance are prerequisites. In the future, governing bodies must decide on AI ownership, governance, and integration approaches. While AI can enhance clinical decision-making and efficiency, it might also exacerbate moral dilemmas, autonomy loss, and issues regarding the scope of practice. Striking a balance between AI's strengths and limitations involves utilizing AI as a validated clinical supplement under medical supervision, necessitating active clinician involvement in AI research, ethics, and regulation. AI's trajectory must align with optimizing mental health treatment and upholding compassionate care.
Affiliation(s)
- Gemma Espejo: Psychiatry and Behavioral Sciences, University of California, Irvine School of Medicine, Irvine, USA
- Wade Reiner: Psychiatry, University of Washington, Seattle, USA
63. Hadar-Shoval D, Elyoseph Z, Lvovsky M. The plasticity of ChatGPT's mentalizing abilities: personalization for personality structures. Front Psychiatry 2023; 14:1234397. PMID: 37720897; PMCID: PMC10503434; DOI: 10.3389/fpsyt.2023.1234397.
Abstract
This study evaluated the potential of ChatGPT, a large language model, to generate mentalizing-like abilities tailored to a specific personality structure and/or psychopathology. Mentalization is the ability to understand and interpret one's own and others' mental states, including thoughts, feelings, and intentions. Borderline Personality Disorder (BPD) and Schizoid Personality Disorder (SPD) are characterized by distinct patterns of emotional regulation: individuals with BPD tend to experience intense and unstable emotions, while individuals with SPD tend to experience flattened or detached emotions. Using ChatGPT's free version 23.3 and the Levels of Emotional Awareness Scale (LEAS), we assessed the extent to which its responses akin to emotional awareness (EA) were customized to the distinctive personality structures characterized by BPD and SPD. ChatGPT accurately described the emotional reactions of individuals with BPD as more intense, complex, and rich than those of individuals with SPD. This finding suggests that ChatGPT can generate mentalizing-like responses consistent with a range of psychopathologies, in line with clinical and theoretical knowledge. However, the study also raises concerns that stigmas or biases related to mental health diagnoses may affect the validity and usefulness of chatbot-based clinical interventions. We emphasize the need for responsible development and deployment of chatbot-based interventions in mental health that consider diverse theoretical frameworks.
Affiliation(s)
- Dorit Hadar-Shoval: Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Zohar Elyoseph: Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel; Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom; Educational Psychology Department, Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Maya Lvovsky: Educational Psychology Department, Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
64. Elyoseph Z, Levkovich I. Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment. Front Psychiatry 2023; 14:1213141. PMID: 37593450; PMCID: PMC10427505; DOI: 10.3389/fpsyt.2023.1213141.
Abstract
ChatGPT, an artificial intelligence language model developed by OpenAI, holds the potential for contributing to the field of mental health. Nevertheless, although ChatGPT theoretically shows promise, its clinical abilities in suicide prevention, a significant mental health concern, have yet to be demonstrated. To address this knowledge gap, this study aims to compare ChatGPT's assessments of mental health indicators to those of mental health professionals in a hypothetical case study that focuses on suicide risk assessment. Specifically, ChatGPT was asked to evaluate a text vignette describing a hypothetical patient with varying levels of perceived burdensomeness and thwarted belongingness. The ChatGPT assessments were compared to the norms of mental health professionals. The results indicated that ChatGPT rated the risk of suicide attempts lower than did the mental health professionals in all conditions. Furthermore, ChatGPT rated mental resilience lower than the norms in most conditions. These results imply that gatekeepers, patients or even mental health professionals who rely on ChatGPT for evaluating suicidal risk or as a complementary tool to improve decision-making may receive an inaccurate assessment that underestimates the actual suicide risk.
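The comparison design described above (a vignette varied across levels of perceived burdensomeness and thwarted belongingness, with model ratings checked against professional norms) can be sketched in a few lines. All condition labels, scale values, and ratings below are invented for illustration; the study's actual vignettes, scales, and norms differ.

```python
# Hypothetical sketch of comparing a model's suicide-risk ratings against
# professional norms across a 2x2 vignette design. All numbers are invented.

conditions = [
    ("low burdensomeness", "low thwarted belongingness"),
    ("low burdensomeness", "high thwarted belongingness"),
    ("high burdensomeness", "low thwarted belongingness"),
    ("high burdensomeness", "high thwarted belongingness"),
]

# Invented example ratings on an arbitrary 1-9 scale.
professional_norms = dict(zip(conditions, [3.0, 4.5, 5.0, 7.5]))
model_ratings = dict(zip(conditions, [2.0, 3.0, 3.5, 5.0]))

def underestimation(norms, ratings):
    """Per-condition gap between professional norms and model ratings."""
    return {c: norms[c] - ratings[c] for c in norms}

gaps = underestimation(professional_norms, model_ratings)
# A positive gap in every condition would mirror the paper's finding that
# the model rated risk lower than clinicians in all conditions.
assert all(g > 0 for g in gaps.values())
```

The per-condition gap makes the direction of the error explicit, which matters clinically: uniform underestimation is a different failure mode than noisy disagreement.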
Affiliation(s)
- Zohar Elyoseph: Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel; Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
- Inbar Levkovich: Faculty of Graduate Studies, Oranim Academic College, Kiryat Tiv'on, Israel
65. Levis M, Levy J, Dufort V, Russ CJ, Shiner B. Dynamic suicide topic modelling: Deriving population-specific, psychosocial and time-sensitive suicide risk variables from Electronic Health Record psychotherapy notes. Clin Psychol Psychother 2023; 30:795-810. PMID: 36797651; PMCID: PMC11172400; DOI: 10.1002/cpp.2842.
Abstract
In the machine learning subfield of natural language processing, a topic model is a type of unsupervised method used to uncover abstract topics within a corpus of text. Dynamic topic modelling (DTM) captures change in these topics over time. This retrospective study deployed DTM on a corpus of electronic health record psychotherapy notes to examine whether it helps distinguish closely matched patients who did and did not die by suicide. The cohort consisted of United States Department of Veterans Affairs (VA) patients diagnosed with Posttraumatic Stress Disorder (PTSD) between 2004 and 2013. Each case (a patient who died by suicide during the year following diagnosis) was matched with five controls (patients who remained alive) who shared psychotherapists and had similar suicide risk based on the VA's suicide prediction algorithm. The cohort was restricted to patients who received psychotherapy for 9+ months after initial PTSD diagnosis (cases = 77; controls = 362). For cases, psychotherapy notes from diagnosis until death were examined; for controls, psychotherapy notes from diagnosis until the matched case's death date were examined. A Python-based DTM algorithm was utilized. Derived topics identified population-specific themes, including PTSD, psychotherapy, medication, communication, and relationships. Control topics changed significantly more over time than case topics, and topic differences highlighted engagement, expressivity, and therapeutic alliance. This study strengthens the groundwork for deriving population-specific, psychosocial, and time-sensitive suicide risk variables.
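The study's central contrast, control-group topics changing more over time than case-group topics, can be illustrated with a simple drift measure: the cosine distance between a topic's word distribution in consecutive time slices. A real DTM fits these distributions jointly from the notes (gensim's `LdaSeqModel` is one Python implementation); the toy distributions below are invented purely for illustration.

```python
# Toy sketch of topic drift over time, not the paper's actual pipeline.
from math import sqrt

def cosine_distance(p, q):
    """1 - cosine similarity between two word-probability vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = sqrt(sum(a * a for a in p)) * sqrt(sum(b * b for b in q))
    return 1.0 - dot / norm

def total_drift(slices):
    """Sum of distances between consecutive time slices of one topic."""
    return sum(cosine_distance(a, b) for a, b in zip(slices, slices[1:]))

# Invented topic-word distributions over a 4-word vocabulary at 3 time points.
control_topic = [[0.7, 0.1, 0.1, 0.1], [0.4, 0.3, 0.2, 0.1], [0.2, 0.2, 0.3, 0.3]]
case_topic = [[0.7, 0.1, 0.1, 0.1], [0.68, 0.12, 0.1, 0.1], [0.66, 0.12, 0.12, 0.1]]

# Mirrors the reported pattern: control topics drift more than case topics.
assert total_drift(control_topic) > total_drift(case_topic)
```

Summing consecutive-slice distances is one plausible way to turn "change over time" into a single comparable number per topic.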
Affiliation(s)
- Maxwell Levis: White River Junction VA Medical Center, Hartford, Vermont, USA; Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Joshua Levy: Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Vincent Dufort: White River Junction VA Medical Center, Hartford, Vermont, USA
- Carey J. Russ: White River Junction VA Medical Center, Hartford, Vermont, USA; Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Brian Shiner: White River Junction VA Medical Center, Hartford, Vermont, USA; Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA; National Center for PTSD Executive Division, Hartford, Vermont, USA
66. Zhou DF, Jin Y, Chen Y. [The application scenarios study on the intervention of cognitive decline in elderly population using metaverse technology]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi 2023; 40:573-581. PMID: 37380399; PMCID: PMC10307614; DOI: 10.7507/1001-5515.202208092. (Article in Chinese)
Abstract
China is facing the peak of an ageing population, and demand for intelligent healthcare services for the elderly is increasing. The metaverse, as a new internet social communication space, has shown great potential for application. This paper focuses on the application of the metaverse in medicine to the intervention of cognitive decline in the elderly population. It analyzes the problems in assessing and intervening in cognitive decline in this group, introduces the basic data required to construct the metaverse in medicine, and demonstrates that elderly users can conduct self-monitoring and experience immersive self-healing and health care through metaverse-in-medicine technology. We further propose that the metaverse in medicine has clear advantages in prediction and diagnosis, prevention and rehabilitation, and in assisting patients with cognitive decline, while also noting the risks of its application. Metaverse-in-medicine technology addresses the problem of non-face-to-face social communication for elderly users, which may help to reconstruct the social medical system and service mode for the elderly population.
Affiliation(s)
- Zhou DF (周德富): Jiangsu Province Support Software Engineering R&D Center for Modern Information Technology Application in Enterprise, Suzhou Vocational University, Suzhou, Jiangsu 215104, P. R. China
- Jin Y (金益): Jiangsu Province Support Software Engineering R&D Center for Modern Information Technology Application in Enterprise, Suzhou Vocational University, Suzhou, Jiangsu 215104, P. R. China
- Chen Y (陈瑛): Jiangsu Province Support Software Engineering R&D Center for Modern Information Technology Application in Enterprise, Suzhou Vocational University, Suzhou, Jiangsu 215104, P. R. China
67. Leung YW, Ng S, Duan L, Lam C, Chan K, Gancarz M, Rennie H, Trachtenberg L, Chan KP, Adikari A, Fang L, Gratzer D, Hirst G, Wong J, Esplen MJ. Therapist Feedback and Implications on Adoption of an Artificial Intelligence-Based Co-Facilitator for Online Cancer Support Groups: Mixed Methods Single-Arm Usability Study. JMIR Cancer 2023; 9:e40113. PMID: 37294610; PMCID: PMC10334721; DOI: 10.2196/40113.
Abstract
BACKGROUND The recent onset of the COVID-19 pandemic and its social distancing requirements created increased demand for virtual support programs. Advances in artificial intelligence (AI) may offer novel solutions to management challenges such as the lack of emotional connection within virtual group interventions. Using typed text from online support groups, AI can help identify the potential risk of mental health concerns, alert group facilitator(s), and automatically recommend tailored resources while monitoring patient outcomes. OBJECTIVE The aim of this mixed methods, single-arm study was to evaluate the feasibility, acceptability, validity, and reliability of an AI-based co-facilitator (AICF) among CancerChatCanada therapists and participants, monitoring online support group participants' distress through real-time analysis of texts posted during the support group sessions. Specifically, AICF (1) generated participant profiles with discussion topic summaries and emotion trajectories for each session, (2) identified participants at risk for increased emotional distress and alerted the therapist for follow-up, and (3) automatically suggested tailored recommendations based on participant needs. Online support group participants consisted of patients with various types of cancer, and the therapists were clinically trained social workers. METHODS Our study reports on the mixed methods evaluation of AICF, including therapists' opinions as well as quantitative measures. AICF's ability to detect distress was evaluated against the patient's real-time emoji check-in, the Linguistic Inquiry and Word Count software, and the Impact of Event Scale-Revised. RESULTS Although the quantitative results showed only some validity of AICF's ability to detect distress, the qualitative results showed that AICF was able to detect real-time issues amenable to treatment, allowing therapists to be more proactive in supporting every group member on an individual basis. However, therapists were concerned about the ethical liability of AICF's distress detection function. CONCLUSIONS Future work will look into wearable sensors and facial cues via videoconferencing to overcome the barriers associated with text-based online support groups. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) RR2-10.2196/21453.
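The kind of real-time text screening AICF performs can be sketched at its simplest: score each posted message against a distress lexicon and flag the participant for therapist follow-up when a message crosses a threshold. The lexicon, weights, and threshold below are invented for illustration; AICF's actual pipeline relies on trained emotion models and validated instruments (e.g., LIWC categories), not this toy word list.

```python
# Hypothetical lexicon-based distress screen for support-group messages.
# Words, weights, and threshold are invented for illustration only.

DISTRESS_LEXICON = {"hopeless": 3, "alone": 2, "scared": 2, "tired": 1, "pain": 2}
ALERT_THRESHOLD = 4

def distress_score(message: str) -> int:
    """Sum lexicon weights for distress words appearing in the message."""
    words = message.lower().split()
    return sum(DISTRESS_LEXICON.get(w, 0) for w in words)

def should_alert(messages: list[str]) -> bool:
    """Flag for therapist follow-up if any message crosses the threshold."""
    return any(distress_score(m) >= ALERT_THRESHOLD for m in messages)

session = [
    "the group discussion was helpful this week",
    "i feel hopeless and alone since the last scan",
]
assert should_alert(session)
```

Even this toy version surfaces the design question the therapists raised: once the system can flag distress, someone must own the response to the flag.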
Affiliation(s)
- Yvonne W Leung: de Souza Institute, University Health Network, Toronto, ON, Canada; Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada; College of Professional Studies, Northeastern University, Toronto, ON, Canada
- Steve Ng: de Souza Institute, University Health Network, Toronto, ON, Canada
- Lauren Duan: de Souza Institute, University Health Network, Toronto, ON, Canada
- Claire Lam: de Souza Institute, University Health Network, Toronto, ON, Canada
- Kenith Chan: Department of Psychology, University of Toronto, Toronto, ON, Canada
- Mathew Gancarz: de Souza Institute, University Health Network, Toronto, ON, Canada
- Heather Rennie: de Souza Institute, University Health Network, Toronto, ON, Canada; BC Cancer Agency, Vancouver, BC, Canada
- Lianne Trachtenberg: de Souza Institute, University Health Network, Toronto, ON, Canada; Centre for Psychology and Emotional Health, Toronto, ON, Canada
- Kai P Chan: de Souza Institute, University Health Network, Toronto, ON, Canada
- Achini Adikari: Centre for Data Analytics and Cognition, La Trobe University, Melbourne, Australia
- Lin Fang: Factor-Inwentash Faculty of Social Work, University of Toronto, Toronto, ON, Canada
- David Gratzer: Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada; Centre for Addiction and Mental Health, Toronto, ON, Canada
- Graeme Hirst: Department of Computer Science, University of Toronto, Toronto, ON, Canada
- Jiahui Wong: de Souza Institute, University Health Network, Toronto, ON, Canada; Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Mary Jane Esplen: Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
68. Elyoseph Z, Hadar-Shoval D, Asraf K, Lvovsky M. ChatGPT outperforms humans in emotional awareness evaluations. Front Psychol 2023; 14:1199058. PMID: 37303897; PMCID: PMC10254409; DOI: 10.3389/fpsyg.2023.1199058.
Abstract
The artificial intelligence chatbot ChatGPT has gained widespread attention for its ability to perform natural language processing tasks and has the fastest-growing user base in history. Although ChatGPT has successfully generated theoretical information in multiple fields, its ability to identify and describe emotions is still unknown. Emotional awareness (EA), the ability to conceptualize one's own and others' emotions, is considered a transdiagnostic mechanism for psychopathology. This study utilized the Levels of Emotional Awareness Scale (LEAS) as an objective, performance-based test to analyze ChatGPT's responses to twenty scenarios and compared its EA performance with general population norms reported by a previous study. A second examination was performed one month later to measure EA improvement over time. Finally, two independent licensed psychologists evaluated the fit-to-context of ChatGPT's EA responses. In the first examination, ChatGPT demonstrated significantly higher performance than the general population on all LEAS scales (Z score = 2.84). In the second examination, ChatGPT's performance improved significantly, almost reaching the maximum possible LEAS score (Z score = 4.26). Its accuracy levels were also extremely high (9.7/10). The study demonstrated that ChatGPT can generate appropriate EA responses and that its performance may improve significantly over time. The findings have theoretical and clinical implications: ChatGPT could be used as part of cognitive training for clinical populations with EA impairments, and its EA-like abilities may facilitate psychiatric diagnosis and assessment and be used to enhance emotional language. Further research is warranted to better understand the potential benefits and risks of ChatGPT and to refine it to promote mental health.
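The Z scores reported above reduce to a standard calculation: how far the model's total LEAS score sits from the population mean, measured in population standard deviations. The mean, SD, and score below are invented placeholders, not the study's actual norms.

```python
# Z score of a single test score against population norms.
# The numbers used here are hypothetical, not the study's LEAS norms.

def z_score(score: float, pop_mean: float, pop_sd: float) -> float:
    """Standardized distance of `score` from the population mean."""
    return (score - pop_mean) / pop_sd

# Hypothetical example: population mean 73, SD 5, model total score 89.
z = z_score(89, 73, 5)
assert round(z, 2) == 3.2  # i.e., 3.2 standard deviations above the mean
```

A Z above roughly 1.96 is the conventional two-tailed 5% threshold, which is why values like 2.84 and 4.26 read as significantly above the population norm.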
Affiliation(s)
- Zohar Elyoseph: Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel; Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, England
- Dorit Hadar-Shoval: Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Kfir Asraf: Psychology Department, Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Maya Lvovsky: Psychology Department, Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
69. Andrew J, Rudra M, Eunice J, Belfin RV. Artificial intelligence in adolescents mental health disorder diagnosis, prognosis, and treatment. Front Public Health 2023; 11:1110088. PMID: 37064712; PMCID: PMC10102508; DOI: 10.3389/fpubh.2023.1110088.
Affiliation(s)
- J. Andrew
- Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- *Correspondence: J. Andrew
| | - Madhuria Rudra
- Electronics and Communication Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
| | - Jennifer Eunice
- Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India
| | - R. V. Belfin
- BRIC, School of Medicine, University of North Carolina, Chapel Hill, NC, United States
| |
Collapse
70.
Abstract
People with psychotic disorders can show marked interindividual variations in the onset of illness, responses to treatment and relapse, but they receive broadly similar clinical care. Precision psychiatry is an approach that aims to stratify people with a given disorder according to different clinical outcomes and tailor treatment to their individual needs. At present, interindividual differences in outcomes of psychotic disorders are difficult to predict on the basis of clinical assessment alone. Therefore, current research in psychosis seeks to build models that predict outcomes by integrating clinical information with a range of biological measures. Here, we review recent progress in the application of precision psychiatry to psychotic disorders and consider the challenges associated with implementing this approach in clinical practice.
71. Morrow E, Zidaru T, Ross F, Mason C, Patel KD, Ream M, Stockley R. Artificial intelligence technologies and compassion in healthcare: A systematic scoping review. Front Psychol 2023; 13:971044. PMID: 36733854; PMCID: PMC9887144; DOI: 10.3389/fpsyg.2022.971044.
Abstract
Background Advances in artificial intelligence (AI) technologies, together with the availability of big data in society, creates uncertainties about how these developments will affect healthcare systems worldwide. Compassion is essential for high-quality healthcare and research shows how prosocial caring behaviors benefit human health and societies. However, the possible association between AI technologies and compassion is under conceptualized and underexplored. Objectives The aim of this scoping review is to provide a comprehensive depth and a balanced perspective of the emerging topic of AI technologies and compassion, to inform future research and practice. The review questions were: How is compassion discussed in relation to AI technologies in healthcare? How are AI technologies being used to enhance compassion in healthcare? What are the gaps in current knowledge and unexplored potential? What are the key areas where AI technologies could support compassion in healthcare? Materials and methods A systematic scoping review following five steps of Joanna Briggs Institute methodology. Presentation of the scoping review conforms with PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews). Eligibility criteria were defined according to 3 concept constructs (AI technologies, compassion, healthcare) developed from the literature and informed by medical subject headings (MeSH) and key words for the electronic searches. Sources of evidence were Web of Science and PubMed databases, articles published in English language 2011-2022. Articles were screened by title/abstract using inclusion/exclusion criteria. Data extracted (author, date of publication, type of article, aim/context of healthcare, key relevant findings, country) was charted using data tables. Thematic analysis used an inductive-deductive approach to generate code categories from the review questions and the data. 
A multidisciplinary team assessed themes for resonance and relevance to research and practice. Results Searches identified 3,124 articles. A total of 197 were included after screening. The number of articles has increased over 10 years (2011, n = 1 to 2021, n = 47 and from Jan-Aug 2022 n = 35 articles). Overarching themes related to the review questions were: (1) Developments and debates (7 themes) Concerns about AI ethics, healthcare jobs, and loss of empathy; Human-centered design of AI technologies for healthcare; Optimistic speculation AI technologies will address care gaps; Interrogation of what it means to be human and to care; Recognition of future potential for patient monitoring, virtual proximity, and access to healthcare; Calls for curricula development and healthcare professional education; Implementation of AI applications to enhance health and wellbeing of the healthcare workforce. (2) How AI technologies enhance compassion (10 themes) Empathetic awareness; Empathetic response and relational behavior; Communication skills; Health coaching; Therapeutic interventions; Moral development learning; Clinical knowledge and clinical assessment; Healthcare quality assessment; Therapeutic bond and therapeutic alliance; Providing health information and advice. (3) Gaps in knowledge (4 themes) Educational effectiveness of AI-assisted learning; Patient diversity and AI technologies; Implementation of AI technologies in education and practice settings; Safety and clinical effectiveness of AI technologies. (4) Key areas for development (3 themes) Enriching education, learning and clinical practice; Extending healing spaces; Enhancing healing relationships. Conclusion There is an association between AI technologies and compassion in healthcare and interest in this association has grown internationally over the last decade. 
In a range of healthcare contexts, AI technologies are being used to enhance empathetic awareness; empathetic response and relational behavior; communication skills; health coaching; therapeutic interventions; moral development learning; clinical knowledge and clinical assessment; healthcare quality assessment; therapeutic bond and therapeutic alliance; and to provide health information and advice. The findings inform a reconceptualization of compassion as a human-AI system of intelligent caring comprising six elements: (1) Awareness of suffering (e.g., pain, distress, risk, disadvantage); (2) Understanding the suffering (significance, context, rights, responsibilities, etc.); (3) Connecting with the suffering (e.g., verbal, physical, signs and symbols); (4) Making a judgment about the suffering (the need to act); (5) Responding with an intention to alleviate the suffering; (6) Attention to the effect and outcomes of the response. These elements can operate at an individual (human or machine) and collective systems level (healthcare organizations or systems) as a cyclical system to alleviate different types of suffering. New and novel approaches to human-AI intelligent caring could enrich education, learning, and clinical practice; extend healing spaces; and enhance healing relationships. Implications In a complex adaptive system such as healthcare, human-AI intelligent caring will need to be implemented, not as an ideology, but through strategic choices, incentives, regulation, professional education, and training, as well as through joined-up thinking about human-AI intelligent caring. Research funders can encourage research and development into the topic of AI technologies and compassion as a system of human-AI intelligent caring. Educators, technologists, and health professionals can inform themselves about the system of human-AI intelligent caring.
Affiliation(s)
- Teodor Zidaru
- Department of Anthropology, London School of Economics and Political Science, London, United Kingdom
- Fiona Ross
- Faculty of Health, Science, Social Care and Education, Kingston University London, London, United Kingdom
- Cindy Mason
- Artificial Intelligence Researcher (Independent), Palo Alto, CA, United States
- Melissa Ream
- Kent Surrey Sussex Academic Health Science Network (AHSN) and the National AHSN Network Artificial Intelligence (AI) Initiative, Surrey, United Kingdom
- Rich Stockley
- Head of Research and Engagement, Surrey Heartlands Health and Care Partnership, Surrey, United Kingdom
72
Cao XJ, Liu XQ. Artificial intelligence-assisted psychosis risk screening in adolescents: Practices and challenges. World J Psychiatry 2022; 12:1287-1297. [PMID: 36389087 PMCID: PMC9641379 DOI: 10.5498/wjp.v12.i10.1287] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 08/09/2022] [Accepted: 09/22/2022] [Indexed: 02/05/2023] Open
Abstract
Artificial intelligence-based technologies are gradually being applied to psychiatric research and practice. This paper reviews the primary literature concerning artificial intelligence-assisted psychosis risk screening in adolescents. In terms of the practice of psychosis risk screening, the application of two artificial intelligence-assisted screening methods, chatbots and large-scale social media data analysis, is summarized in detail. Regarding the challenges of psychiatric risk screening, ethical issues constitute the first challenge of psychiatric risk screening through artificial intelligence; such screening must comply with the four biomedical ethical principles of respect for autonomy, nonmaleficence, beneficence, and impartiality so that the development of artificial intelligence can meet the moral and ethical requirements of human beings. By reviewing the pertinent literature concerning current artificial intelligence-assisted adolescent psychosis risk screening, we propose that, provided ethical requirements are met, three directions are worth considering in the future development of artificial intelligence-assisted psychosis risk screening in adolescents: nonperceptual real-time artificial intelligence-assisted screening, further reducing the cost of artificial intelligence-assisted screening, and improving the ease of use of artificial intelligence-assisted screening techniques and tools.
Affiliation(s)
- Xiao-Jie Cao
- Graduate School of Education, Peking University, Beijing 100871, China
- Xin-Qiao Liu
- School of Education, Tianjin University, Tianjin 300350, China
73
Giansanti D. The Regulation of Artificial Intelligence in Digital Radiology in the Scientific Literature: A Narrative Review of Reviews. Healthcare (Basel) 2022; 10:1824. [PMID: 36292270 PMCID: PMC9601605 DOI: 10.3390/healthcare10101824] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 09/14/2022] [Accepted: 09/20/2022] [Indexed: 09/05/2023] Open
Abstract
Today, there is growing interest in artificial intelligence (AI) in the field of digital radiology (DR), due in part to the push that the pandemic has applied to this sector. Many studies are devoted to the challenges of integration in the health domain. One of the most important challenges is that of regulation. This study conducted a narrative review of reviews on the international approach to the regulation of AI in DR. The design of the study was based on: (I) an overview of Scopus and PubMed; and (II) a qualification and eligibility process based on a standardized checklist and a scoring system. The results highlight an international approach to the regulation of these systems, classified as "software as a medical device" (SaMD), arranged into: ethical issues, the international regulatory framework, and bottlenecks of the legal issues. Several recommendations emerge from the analysis. They are all based on fundamental pillars: (a) the need to overcome a differentiated approach between countries; (b) the need for greater transparency and publicity of information, both for SaMDs as a whole and for the algorithms and test patterns; (c) the need for an interdisciplinary approach that avoids bias (including demographic bias) in algorithms and test data; and (d) the need to address the limits and gaps of a scientific literature that does not yet cover the international approach.
74
Ott T, Dabrock P. Transparent human – (non-) transparent technology? The Janus-faced call for transparency in AI-based health care technologies. Front Genet 2022; 13:902960. [PMID: 36072654 PMCID: PMC9444183 DOI: 10.3389/fgene.2022.902960] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Accepted: 07/15/2022] [Indexed: 11/13/2022] Open
Abstract
The use of Artificial Intelligence and Big Data in health care opens up new opportunities for the measurement of the human. Their application aims not only at gathering more and better data points but also at doing so less invasively. With this change in health care towards its extension to almost all areas of life and its increasing invisibility and opacity, new questions of transparency arise. While the complex human-machine interactions involved in deploying and using AI tend to become non-transparent, the use of these technologies makes the patient seemingly transparent. Papers on the ethical implementation of AI plead for transparency but neglect the factor of the “transparent patient” as intertwined with AI. Transparency in this regard appears to be Janus-faced: The precondition for receiving help - e.g., treatment advice regarding one's own health - is to become transparent for the digitized health care system. That is, for instance, to donate data and become visible to the AI and its operators. The paper reflects on this entanglement of transparent patients and (non-) transparent technology. It argues that transparency regarding both AI and humans is not an ethical principle per se but an infraethical concept. Further, it is not a sufficient basis for avoiding harm and human dignity violations. Rather, transparency must be enriched by intelligibility following Judith Butler’s use of the term. Intelligibility is understood as an epistemological presupposition for recognition and the ensuing humane treatment. Finally, the paper highlights ways to testify to intelligibility in dealing with AI in health care ex ante, ex post, and continuously.
75
Zarate D, Stavropoulos V, Ball M, de Sena Collier G, Jacobson NC. Exploring the digital footprint of depression: a PRISMA systematic literature review of the empirical evidence. BMC Psychiatry 2022; 22:421. [PMID: 35733121 PMCID: PMC9214685 DOI: 10.1186/s12888-022-04013-y] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/20/2021] [Accepted: 05/17/2022] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND This PRISMA systematic literature review examined the use of digital data collection methods (including ecological momentary assessment [EMA], experience sampling method [ESM], digital biomarkers, passive sensing, mobile sensing, ambulatory assessment, and time-series analysis), with an emphasis on digital phenotyping (DP), to study depression. DP is defined as the use of digital data to profile health information objectively. AIMS Four distinct yet interrelated goals underpin this study: (a) to identify empirical research examining the use of DP to study depression; (b) to describe the different methods and technology employed; (c) to integrate the evidence regarding the efficacy of digital data in the examination, diagnosis, and monitoring of depression; and (d) to clarify DP definitions and digital mental health records terminology. RESULTS Overall, 118 studies were assessed as eligible. Considering the terms employed, "EMA", "ESM", and "DP" were the most predominant. A variety of DP data sources were reported, including voice, language, keyboard typing kinematics, mobile phone calls and texts, geocoded activity, actigraphy sensor-related recordings (i.e., steps, sleep, circadian rhythm), and self-reported app information. Reviewed studies employed subjectively and objectively recorded digital data in combination with interviews and psychometric scales. CONCLUSIONS Findings suggest links between a person's digital records and depression. Future research recommendations include (a) deriving consensus regarding the DP definition and (b) expanding the literature to consider a person's broader contextual and developmental circumstances in relation to their digital data/records.
Affiliation(s)
- Daniel Zarate
- Institute for Health and Sport, Victoria University, Melbourne, Australia.
- Vasileios Stavropoulos
- Institute for Health and Sport, Victoria University, Melbourne, Australia; Department of Psychology, University of Athens, Athens, Greece
- Michelle Ball
- Institute for Health and Sport, Victoria University, Melbourne, Australia
- Gabriel de Sena Collier
- Institute for Health and Sport, Victoria University, Melbourne, Australia
- Nicholas C. Jacobson
- Center for Technology and Behavioral Health, Geisel School of Medicine, Dartmouth College, Hanover, USA; Department of Biomedical Data Science, Geisel School of Medicine, Dartmouth College, Hanover, USA; Department of Psychiatry, Geisel School of Medicine, Dartmouth College, Hanover, USA; Quantitative Biomedical Sciences Program, Dartmouth College, Hanover, USA
76
Liu H. Applications of Artificial Intelligence to Popularize Legal Knowledge and Publicize the Impact on Adolescents' Mental Health Status. Front Psychiatry 2022; 13:902456. [PMID: 35722558 PMCID: PMC9199859 DOI: 10.3389/fpsyt.2022.902456] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Accepted: 04/21/2022] [Indexed: 11/13/2022] Open
Abstract
Artificial intelligence (AI) advancements have radically altered human production and daily living. AI's rapid rise facilitates the growth of China's citizens and, at the same time, a lack of intelligence has led to several concerns regarding regulations and laws. Current investigations of AI applied to legal knowledge do not show consistent benefits in predicting adolescents' psychological status, performance, etc. The study's primary purpose is to examine the influence of AI on the legal profession and adolescent mental health using a novel cognitive fuzzy K-nearest neighbor (CF-KNN) approach. Initially, the legal education datasets are gathered and standardized in the pre-processing stage through a normalization technique to remove unwanted noise and outliers. Once the normalized data are transformed into numerical features, they can be analyzed using a variational autoencoder (VAE). Multi-gradient ant colony optimization (MG-ACO) is applied to select a suitable subset of the features. Tree C4.5 (T-C4.5) and fitness-based logistic regression analysis (F-LRA) techniques assess adolescents' mental health conditions. Finally, the performance of the proposed approach is examined and compared with classical techniques to demonstrate its effectiveness. Findings are depicted as charts using MATLAB.
Affiliation(s)
- Hao Liu
- School of Law, Chongqing University, Chongqing, China
77
Abstract
Human-computer interaction (HCI) has contributed to the design and development of efficient, user-friendly, cost-effective, and adaptable digital mental health solutions. However, HCI has not been well integrated into these technological developments, resulting in quality and safety concerns. Digital platforms and artificial intelligence (AI) have strong potential to improve prediction, identification, coordination, and treatment by mental health care and suicide prevention services. AI drives web-based and smartphone apps; mostly, it is used for self-help and guided cognitive behavioral therapy (CBT) for anxiety and depression. Interactive AI may help with real-time screening and treatment in outdated, strained, or under-resourced mental healthcare systems. The barriers to using AI in mental healthcare include accessibility, efficacy, reliability, usability, safety, security, ethics, suitable education and training, and socio-cultural adaptability. Apps, real-time machine learning algorithms, immersive technologies, and digital phenotyping are notable prospects. Generally, there is a need for faster and better human factors in combination with machine interaction and automation, higher levels of effectiveness evaluation, and the application of blended, hybrid, or stepped care in an adjunct approach. HCI modeling may assist in the design and development of usable applications; help to effectively recognize, acknowledge, and address the inequities of mental health care and suicide prevention; and assist in the digital therapeutic alliance.
78
Nilsen P, Svedberg P, Nygren J, Frideros M, Johansson J, Schueller S. Accelerating the impact of artificial intelligence in mental healthcare through implementation science. IMPLEMENTATION RESEARCH AND PRACTICE 2022; 3:26334895221112033. [PMID: 37091110 PMCID: PMC9924259 DOI: 10.1177/26334895221112033] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Background The implementation of artificial intelligence (AI) in mental healthcare offers a potential solution to some of the problems associated with the availability, attractiveness, and accessibility of mental healthcare services. However, there are many knowledge gaps regarding how to implement and best use AI to add value to mental healthcare services, providers, and consumers. The aim of this paper is to identify challenges and opportunities for AI use in mental healthcare and to describe key insights from implementation science of potential relevance to understand and facilitate AI implementation in mental healthcare. Methods The paper is based on a selective review of articles concerning AI in mental healthcare and implementation science. Results Research in implementation science has established the importance of considering and planning for implementation from the start, the progression of implementation through different stages, and the appreciation of determinants at multiple levels. Determinant frameworks and implementation theories have been developed to understand and explain how different determinants impact on implementation. AI research should explore the relevance of these determinants for AI implementation. Implementation strategies to support AI implementation must address determinants specific to AI implementation in mental health. There might also be a need to develop new theoretical approaches or augment and recontextualize existing ones. Implementation outcomes may have to be adapted to be relevant in an AI implementation context. Conclusion Knowledge derived from implementation science could provide an important starting point for research on implementation of AI in mental healthcare. This field has generated many insights and provides a broad range of theories, frameworks, and concepts that are likely relevant for this research. 
However, when taking advantage of the existing knowledge base, it is important to also be explorative and study AI implementation in health and mental healthcare as a new phenomenon in its own right, since implementing AI may differ in various ways from implementing evidence-based practices in terms of which implementation determinants, strategies, and outcomes are most relevant. Plain Language Summary: The implementation of artificial intelligence (AI) in mental healthcare offers a potential solution to some of the problems associated with the availability, attractiveness, and accessibility of mental healthcare services. However, there are many knowledge gaps concerning how to implement and best use AI to add value to mental healthcare services, providers, and consumers. This paper is based on a selective review of articles concerning AI in mental healthcare and implementation science, with the aim of identifying challenges and opportunities for the use of AI in mental healthcare and describing key insights from implementation science of potential relevance to understanding and facilitating AI implementation in mental healthcare. AI offers opportunities for identifying the patients most in need of care or the interventions that might be most appropriate for a given population or individual. AI also offers opportunities for supporting a more reliable diagnosis of psychiatric disorders and ongoing monitoring and tailoring during the course of treatment. However, AI implementation challenges exist at organizational/policy, individual, and technical levels, making it relevant to draw on implementation science knowledge for understanding and facilitating implementation of AI in mental healthcare. Knowledge derived from implementation science could provide an important starting point for research on AI implementation in mental healthcare. 
This field has generated many insights and provides a broad range of theories, frameworks, and concepts that are likely relevant for this research.
Affiliation(s)
- Petra Svedberg
- Halmstad University School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Jens Nygren
- Halmstad University School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Stephen Schueller
- Psychological Science, University of California Irvine, Irvine, CA, USA
79
Ćosić K, Popović S, Šarlija M, Kesedžić I, Gambiraža M, Dropuljić B, Mijić I, Henigsberg N, Jovanovic T. AI-Based Prediction and Prevention of Psychological and Behavioral Changes in Ex-COVID-19 Patients. Front Psychol 2021; 12:782866. [PMID: 35027902 PMCID: PMC8751545 DOI: 10.3389/fpsyg.2021.782866] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 12/02/2021] [Indexed: 12/30/2022] Open
Abstract
The COVID-19 pandemic has adverse consequences on human psychology and behavior long after initial recovery from the virus. These COVID-19 health sequelae, if undetected and left untreated, may lead to more enduring mental health problems and put vulnerable individuals at risk of developing more serious psychopathologies. Therefore, an early distinction of such vulnerable individuals from those who are more resilient is important in order to undertake timely preventive interventions. The main aim of this article is to present a comprehensive multimodal conceptual approach for addressing these potential psychological and behavioral mental health changes using state-of-the-art tools and means of artificial intelligence (AI). Mental health COVID-19 recovery programs at post-COVID clinics based on AI prediction and prevention strategies may significantly improve the global mental health of ex-COVID-19 patients. Most COVID-19 recovery programs currently involve specialists such as pulmonologists, cardiologists, and neurologists, but there is a lack of psychiatric care. The focus of this article is on new tools which can enhance the limited resources and capabilities of psychiatrists in coping with the upcoming challenges related to widespread mental health disorders. Patients affected by COVID-19 are more vulnerable to psychological and behavioral changes than non-COVID populations and therefore deserve careful clinical psychological screening in post-COVID clinics. However, despite significant advances in research, the pace of progress in the prevention of psychiatric disorders in these patients is still insufficient. Current approaches for the diagnosis of psychiatric disorders largely rely on clinical rating scales, as well as self-rating questionnaires, that are inadequate for comprehensive assessment of ex-COVID-19 patients' susceptibility to mental health deterioration. 
These limitations can presumably be overcome by applying state-of-the-art AI-based tools to the diagnosis, prevention, and treatment of psychiatric disorders in the acute phase of the disease, to prevent more chronic psychiatric consequences.
Affiliation(s)
- Krešimir Ćosić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Siniša Popović
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Marko Šarlija
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Ivan Kesedžić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Mate Gambiraža
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Branimir Dropuljić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Igor Mijić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Neven Henigsberg
- Croatian Institute for Brain Research, University of Zagreb School of Medicine, Zagreb, Croatia
- Tanja Jovanovic
- Department of Psychiatry and Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
80
Roth CB, Papassotiropoulos A, Brühl AB, Lang UE, Huber CG. Psychiatry in the Digital Age: A Blessing or a Curse? INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:8302. [PMID: 34444055 PMCID: PMC8391902 DOI: 10.3390/ijerph18168302] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Revised: 07/31/2021] [Accepted: 08/03/2021] [Indexed: 12/23/2022]
Abstract
Social distancing and the shortage of healthcare professionals during the COVID-19 pandemic, the impact of population aging on the healthcare system, as well as the rapid pace of digital innovation are catalyzing the development and implementation of new technologies and digital services in psychiatry. Is this transformation a blessing or a curse for psychiatry? To answer this question, we conducted a literature review covering a broad range of new technologies and eHealth services, including telepsychiatry; computer-, internet-, and app-based cognitive behavioral therapy; virtual reality; digital applied games; a digital medicine system; omics; neuroimaging; machine learning; precision psychiatry; clinical decision support; electronic health records; physician charting; digital language translators; and online mental health resources for patients. We found that eHealth services provide effective, scalable, and cost-efficient options for the treatment of people with limited or no access to mental health care. This review highlights innovative technologies spearheading the way to more effective and safer treatments. We identified artificially intelligent tools that relieve physicians from routine tasks, allowing them to focus on collaborative doctor-patient relationships. The transformation of traditional clinics into digital ones is outlined, and the challenges associated with the successful deployment of digitalization in psychiatry are highlighted.
Affiliation(s)
- Carl B. Roth
- University Psychiatric Clinics Basel, Clinic for Adults, University of Basel, Wilhelm Klein-Strasse 27, CH-4002 Basel, Switzerland
- Andreas Papassotiropoulos
- University Psychiatric Clinics Basel, Clinic for Adults, University of Basel, Wilhelm Klein-Strasse 27, CH-4002 Basel, Switzerland
- Transfaculty Research Platform Molecular and Cognitive Neurosciences, University of Basel, Birmannsgasse 8, CH-4055 Basel, Switzerland
- Division of Molecular Neuroscience, Department of Psychology, University of Basel, Birmannsgasse 8, CH-4055 Basel, Switzerland
- Biozentrum, Life Sciences Training Facility, University of Basel, Klingelbergstrasse 50/70, CH-4056 Basel, Switzerland
- Annette B. Brühl
- University Psychiatric Clinics Basel, Clinic for Adults, University of Basel, Wilhelm Klein-Strasse 27, CH-4002 Basel, Switzerland
- Undine E. Lang
- University Psychiatric Clinics Basel, Clinic for Adults, University of Basel, Wilhelm Klein-Strasse 27, CH-4002 Basel, Switzerland
- Christian G. Huber
- University Psychiatric Clinics Basel, Clinic for Adults, University of Basel, Wilhelm Klein-Strasse 27, CH-4002 Basel, Switzerland
81
Resnik P, De Choudhury M, Musacchio Schafer K, Coppersmith G. Bibliometric Studies and the Discipline of Social Media Mental Health Research. Comment on "Machine Learning for Mental Health in Social Media: Bibliometric Study". J Med Internet Res 2021; 23:e28990. [PMID: 34137722 PMCID: PMC8277321 DOI: 10.2196/28990] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Accepted: 05/13/2021] [Indexed: 12/14/2022] Open
Affiliation(s)
- Philip Resnik
- Department of Linguistics and Institute for Advanced Computer Studies, University of Maryland, College Park, MD, United States
- Munmun De Choudhury
- School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, United States
82
Renn BN, Schurr M, Zaslavsky O, Pratap A. Artificial Intelligence: An Interprofessional Perspective on Implications for Geriatric Mental Health Research and Care. Front Psychiatry 2021; 12:734909. [PMID: 34867524 PMCID: PMC8634654 DOI: 10.3389/fpsyt.2021.734909] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Accepted: 10/07/2021] [Indexed: 11/26/2022] Open
Abstract
Artificial intelligence (AI) in healthcare aims to learn patterns in large multimodal datasets within and across individuals. These patterns may either improve understanding of current clinical status or predict a future outcome. AI holds the potential to revolutionize geriatric mental health care and research by supporting diagnosis, treatment, and clinical decision-making. However, much of this momentum is driven by data scientists, computer scientists, and engineers, and runs the risk of being disconnected from pragmatic issues in clinical practice. This interprofessional perspective bridges the experiences of clinical scientists and data scientists. We provide a brief overview of AI with a main focus on possible applications and challenges of using AI-based approaches for research and clinical care in geriatric mental health. We suggest that future AI applications in geriatric mental health take account of the pragmatic considerations of clinical practice and the methodological differences between data science and clinical science, and address issues of ethics, privacy, and trust.
Affiliation(s)
- Brenna N Renn
- Department of Psychology, University of Nevada, Las Vegas, NV, United States
- Matthew Schurr
- Department of Psychology, University of Nevada, Las Vegas, NV, United States
- Oleg Zaslavsky
- Department of Biobehavioral Nursing and Health Informatics, University of Washington, Seattle, WA, United States
- Abhishek Pratap
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, ON, Canada; Vector Institute for Artificial Intelligence, Toronto, ON, Canada; Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, United States; Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom