1
Lim B, Seth I, Maxwell M, Cuomo R, Ross RJ, Rozen WM. Evaluating the Efficacy of Large Language Models in Generating Medical Documentation: A Comparative Study of ChatGPT-4, ChatGPT-4o, and Claude. Aesthetic Plast Surg 2025. [PMID: 40229614] [DOI: 10.1007/s00266-025-04842-8]
Abstract
BACKGROUND Large language models (LLMs) have demonstrated transformative potential in health care. They can enhance clinical and academic medicine by facilitating accurate diagnoses, interpreting laboratory results, and automating documentation processes. This study evaluates the efficacy of LLMs in generating surgical operation reports and discharge summaries, focusing on accuracy, efficiency, and quality. METHODS This study assessed the effectiveness of three leading LLMs (ChatGPT-4, ChatGPT-4o, and Claude) using six prompts, analyzing their responses for readability and output quality as validated by plastic surgeons. Readability was measured with the Flesch-Kincaid Grade Level, Flesch Reading Ease score, and Coleman-Liau Index, while reliability was evaluated using the DISCERN score. A paired two-tailed t-test (significance threshold p<0.05) compared these metrics, and the time taken to generate operation reports and discharge summaries, against the authors' own results. RESULTS Table 3 shows statistically significant differences in readability between ChatGPT-4o and Claude across all metrics, while ChatGPT-4 and Claude differ significantly on the Flesch Reading Ease and Coleman-Liau indices. Table 6 reveals extremely low p-values across BL, IS, and MM for all models, with Claude consistently outperforming both ChatGPT-4 and ChatGPT-4o. Additionally, Claude generated documents the fastest, completing tasks in approximately 10 to 14 s. These results suggest that Claude not only excels in readability but also demonstrates superior reliability and speed, making it an efficient choice for practical applications. CONCLUSION The study highlights the importance of selecting appropriate LLMs for clinical use. Integrating these LLMs can streamline healthcare documentation, improve efficiency, and enhance patient outcomes through clearer communication and more accurate medical reports.
LEVEL OF EVIDENCE V This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
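The readability indices used in this study (and in entry 3 below) follow standard published formulas and can be computed from simple text counts. A minimal sketch, not the study's own code; the counts in the example are illustrative placeholders:

```python
def flesch_reading_ease(words, sentences, syllables):
    # Flesch Reading Ease: higher scores mean easier text (60-70 ~ plain English).
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # Flesch-Kincaid Grade Level: approximate US school grade required.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def coleman_liau_index(letters, words, sentences):
    # Coleman-Liau relies on letter counts rather than syllable counts.
    avg_letters = letters / words * 100      # letters per 100 words
    avg_sentences = sentences / words * 100  # sentences per 100 words
    return 0.0588 * avg_letters - 0.296 * avg_sentences - 15.8

# Example: a hypothetical 5-sentence, 100-word passage with 150 syllables
# and 450 letters.
print(round(flesch_reading_ease(100, 5, 150), 3))
print(round(flesch_kincaid_grade(100, 5, 150), 2))
print(round(coleman_liau_index(450, 100, 5), 2))
```

In practice the word, sentence, and syllable counts come from a tokenizer; syllable counting in particular is heuristic, which is why published tools can disagree slightly on the same text.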
Affiliation(s)
- Bryan Lim
- Department of Plastic and Reconstructive Surgery, Frankston Hospital, Peninsula Health, Frankston, VIC, Australia.
- Peninsula Clinical School, Central Clinical School, Faculty of Medicine, Monash University, Frankston, VIC, Australia.
- Ishith Seth
- Department of Plastic and Reconstructive Surgery, Frankston Hospital, Peninsula Health, Frankston, VIC, Australia
- Peninsula Clinical School, Central Clinical School, Faculty of Medicine, Monash University, Frankston, VIC, Australia
- Molly Maxwell
- Department of Plastic and Reconstructive Surgery, Frankston Hospital, Peninsula Health, Frankston, VIC, Australia
- Roberto Cuomo
- Department of Plastic and Reconstructive Surgery, University of Siena, Siena, Italy
- Richard J Ross
- Department of Plastic and Reconstructive Surgery, Frankston Hospital, Peninsula Health, Frankston, VIC, Australia
- Warren M Rozen
- Department of Plastic and Reconstructive Surgery, Frankston Hospital, Peninsula Health, Frankston, VIC, Australia
- Peninsula Clinical School, Central Clinical School, Faculty of Medicine, Monash University, Frankston, VIC, Australia
2
Kropf M. Trust as a Solution to Human Vulnerability: Ethical Considerations on Trust in Care Robots. Nurs Philos 2025; 26:e70020. [PMID: 40068131] [PMCID: PMC11896634] [DOI: 10.1111/nup.70020]
Abstract
In the care sector, professionals face numerous challenges, such as a lack of resources, overloaded wards, physical and psychological strain, stressful situations with patients, and cooperation with medical professionals. Care robots are therefore increasingly being used to provide relief or to test new forms of interaction. However, this also raises the question of trust in these technical companions and the potential vulnerability to which their users then expose themselves. This article offers an ethical analysis of the two concepts of trust and vulnerability in the context of care robotics. The first step is to examine what can be understood by vulnerability, focusing specifically on Misztal's three proposed types (relationships, future anticipation, past experiences). This strategy is often used as a starting point by authors and also seems relevant for the connection to the concept of trust. In a second step, these three types of human vulnerability are examined on the basis of a technical concept of trust. It is shown that (1) relationships, and thus also interdependence, can create additional options, (2) the anticipation problem with regard to the actions of others also makes responsibility transferable, and (3) an explication of freedom is also associated with potential traumatic experiences. The final step brings together the previous considerations and makes clear that trust in a care robot need not only be associated with vulnerability; vulnerability can also potentially be reduced, transferred, and overcome.
Affiliation(s)
- Mario Kropf
- Institute of Moral Theology, Faculty of Catholic Theology, University of Graz, Graz, Austria
3
Lim B, Lirios G, Sakalkale A, Satheakeerthy S, Hayes D, Yeung JMC. Assessing the efficacy of artificial intelligence to provide peri-operative information for patients with a stoma. ANZ J Surg 2025; 95:464-496. [PMID: 39620607] [DOI: 10.1111/ans.19337]
Abstract
BACKGROUND Stomas present significant lifestyle and psychological challenges for patients, requiring comprehensive education and support. Current educational methods have limitations in offering relevant information to the patient, highlighting a potential role for artificial intelligence (AI). This study examined the utility of AI in enhancing stoma therapy management following colorectal surgery. MATERIAL AND METHODS We compared the efficacy of four prominent large language models (LLMs): OpenAI's ChatGPT-3.5 and ChatGPT-4, Google's Gemini, and Bing's CoPilot, against a series of metrics to evaluate their suitability as supplementary clinical tools. Through qualitative and quantitative analyses, including readability scores (Flesch-Kincaid, Flesch Reading Ease, and Coleman-Liau Index) and reliability assessments (Likert scale, DISCERN score, and QAMAI tool), the study aimed to assess the appropriateness of LLM-generated advice for patients managing stomas. RESULTS There were varying degrees of readability and reliability across the evaluated models, with CoPilot and ChatGPT-4 demonstrating superior performance on several key metrics such as readability and comprehensiveness. However, the study underscores that LLM technology remains at an early stage for clinical applications. All responses required a high-school to college reading level to comprehend comfortably. While the LLMs addressed users' questions directly, their failure to incorporate patient-specific factors such as past medical history produced broad, generic responses rather than tailored advice. CONCLUSION The complexity of individual patient conditions can challenge AI systems. The use of LLMs in clinical settings holds promise for improving patient education and stoma management support, but requires careful consideration of the models' capabilities and the context of their use.
Affiliation(s)
- Bryan Lim
- Department of Colorectal Surgery, Western Health, Melbourne, Australia
- Gabriel Lirios
- Department of Colorectal Surgery, Western Health, Melbourne, Australia
- Aditya Sakalkale
- Department of Surgery, Western Precinct, University of Melbourne, Melbourne, Australia
- Diana Hayes
- Department of Colorectal Surgery, Western Health, Melbourne, Australia
- Justin M C Yeung
- Department of Colorectal Surgery, Western Health, Melbourne, Australia
- Department of Surgery, Western Precinct, University of Melbourne, Melbourne, Australia
4
Mwaka ES, Bazzeketa D, Mirembe J, Emoru RD, Twimukye A, Kivumbi A. Barriers to and enhancement of the utilization of digital mental health interventions in low-resource settings: Perceptions of young people in Uganda. Digit Health 2025; 11:20552076251321698. [PMID: 39963503] [PMCID: PMC11831655] [DOI: 10.1177/20552076251321698]
Abstract
Introduction Digital mental health (DMH) enhances access to healthcare, particularly in low- and middle-income countries where investment in mental healthcare is low. However, utilization among young people (YP) is low. This study aimed to explore YP's perceptions of the barriers to using DMH interventions in low-resource settings. Methods A qualitative descriptive approach was used. Six face-to-face focus group discussions were conducted with 50 YP from nine universities in Uganda. The median age was 24 years (range 21-25 years), and respondents were drawn from diverse academic programmes, the majority being medical students (54%). A thematic approach was used to interpret the results. Results Three themes were identified: perceptions of using DMH services, perceived barriers to utilization, and suggestions for enhancing DMH for YP in low-resource settings. Most respondents had a positive attitude towards DMH. The perceived barriers to utilization of DMH included fear of stigma, affordability, inequitable access, privacy and confidentiality concerns, and app-related challenges. Access to and use of DMH can be enhanced through public engagement, creating awareness, enhanced training, and access to affordable DMH interventions. Conclusion DMH was deemed important in extending healthcare to YP, particularly in health systems where traditional mental health services are not readily available. However, several factors hinder equitable access to DMH in low-resource settings. There is a need for long-term investment in digital health technologies.
Affiliation(s)
- Erisa Sabakaki Mwaka
- School of Biomedical Sciences, College of Health Sciences, Makerere University, Kampala, Uganda
- Datsun Bazzeketa
- Faculty of Science and Technology, International University of East Africa, Kampala, Uganda
- Faculty of Science and Computing, Ndejje University, Kampala, Uganda
- Joy Mirembe
- School of Biomedical Sciences, College of Health Sciences, Makerere University, Kampala, Uganda
- Reagan D. Emoru
- School of Biomedical Sciences, College of Health Sciences, Makerere University, Kampala, Uganda
- Adelline Twimukye
- Infectious Diseases Institute, Makerere University, Kampala, Central, Uganda
- Apollo Kivumbi
- School of Biomedical Sciences, College of Health Sciences, Makerere University, Kampala, Uganda
- Dornsife School of Public Health, Drexel University, Philadelphia, PA, USA
5
Tavory T. Regulating AI in Mental Health: Ethics of Care Perspective. JMIR Ment Health 2024; 11:e58493. [PMID: 39298759] [PMCID: PMC11450345] [DOI: 10.2196/58493]
Abstract
This article contends that the responsible artificial intelligence (AI) approach, the dominant ethics approach underpinning most regulatory and ethical guidance, falls short because it overlooks the impact of AI on human relationships. Focusing only on responsible AI principles reinforces a narrow concept of the accountability and responsibility of companies developing AI. This article proposes that applying the ethics of care approach to AI regulation can offer a more comprehensive regulatory and ethical framework that addresses AI's impact on human relationships. This dual approach is essential for the effective regulation of AI in the domain of mental health care. The article delves into the emergence of the new "therapeutic" area facilitated by AI-based bots, which operate without a therapist. The article highlights the difficulties involved, mainly the absence of a defined duty of care toward users, and shows how implementing the ethics of care can establish clear responsibilities for developers. It also sheds light on the potential for emotional manipulation and the risks involved. In conclusion, the article proposes a series of considerations grounded in the ethics of care for the developmental process of AI-powered therapeutic tools.
Affiliation(s)
- Tamar Tavory
- Faculty of Law, Bar Ilan University, Ramat Gan, Israel
- The Samueli Initiative for Responsible AI in Medicine, Tel Aviv University, Tel Aviv, Israel
6
Liedo B, Van Grunsven J, Marin L. Emotional Labor and the Problem of Exploitation in Roboticized Care Practices: Enriching the Framework of Care Centred Value Sensitive Design. Sci Eng Ethics 2024; 30:42. [PMID: 39259354] [PMCID: PMC11390761] [DOI: 10.1007/s11948-024-00511-2]
Abstract
Care ethics has been advanced as a suitable framework for evaluating the ethical significance of assistive robotics. One of the most prominent care ethical contributions to the ethical assessment of assistive robots comes through the work of Aimee Van Wynsberghe, who has developed the Care-Centred Value-Sensitive Design framework (CCVSD) in order to incorporate care values into the design of assistive robots. Building upon the care ethics work of Joan Tronto, CCVSD has been able to highlight a number of ways in which care practices can undergo significant ethical transformations upon the introduction of assistive robots. In this paper, we too build upon the work of Tronto in an effort to enrich the CCVSD framework. Combining insights from Tronto's work with the sociological concept of emotional labor, we argue that CCVSD remains underdeveloped with respect to the impact robots may have on the emotional labor required by paid care workers. Emotional labor consists of the managing of emotions and of emotional bonding, both of which signify a demanding yet potentially fulfilling dimension of paid care work. Because of the conditions in which care labor is performed nowadays, emotional labor is also susceptible to exploitation. While CCVSD can acknowledge some manifestations of unrecognized emotional labor in care delivery, it remains limited in capturing the structural conditions that fuel this vulnerability to exploitation. We propose that the idea of privileged irresponsibility, coined by Tronto, helps to understand how the exploitation of emotional labor can be prone to happen in roboticized care practices.
Affiliation(s)
- Belén Liedo
- Institute of Philosophy, Spanish National Research Council, Madrid, Spain.
- Janna Van Grunsven
- Ethics and Philosophy of Technology Section, TU Delft, Delft, The Netherlands
- Lavinia Marin
- Ethics and Philosophy of Technology Section, TU Delft, Delft, The Netherlands
7
Muyskens K, Ma Y, Dunn M. Can an AI-carebot be filial? Reflections from Confucian ethics. Nurs Ethics 2024; 31:999-1009. [PMID: 38472138] [DOI: 10.1177/09697330241238332]
Abstract
This article discusses the application of artificially intelligent robots within eldercare and explores a series of ethical considerations, including the challenges that AI (Artificial Intelligence) technology poses to traditional Chinese Confucian filial piety. From the perspective of Confucian ethics, the paper argues that robots cannot adequately fulfill duties of care. Due to their detachment from personal relationships and interactions, the "emotions" of AI robots are merely performative reactions in different situations, rather than actual emotional abilities. No matter how "humanized" robots become, it is difficult to establish genuine empathy and a meaningful relationship with them for this reason. Even so, we acknowledge that AI robots are a significant tool in managing the demands of elder care and the growth of care poverty, and as such, we attempt to outline some parameters within which care robotics could be acceptable within a Confucian ethical system. Finally, the paper discusses the social impact and ethical considerations brought on by the interaction between humans and machines. It is observed that the relationship between humans and technology has always had both utopian and dystopian aspects, and robotic elder care is no exception. AI caregiver robots will likely become a part of elder care, and the transformation of these robots from "service providers" to "companions" seems inevitable. In light of this, the application of AI-augmented robotic elder care will also eventually change our understanding of interpersonal relationships and traditional requirements of filial piety.
8
Palmier C, Rigaud AS, Ogawa T, Wieching R, Dacunha S, Barbarossa F, Stara V, Bevilacqua R, Pino M. Identification of Ethical Issues and Practice Recommendations Regarding the Use of Robotic Coaching Solutions for Older Adults: Narrative Review. J Med Internet Res 2024; 26:e48126. [PMID: 38888953] [PMCID: PMC11220435] [DOI: 10.2196/48126]
Abstract
BACKGROUND Technological advances in robotics, artificial intelligence, cognitive algorithms, and internet-based coaches have contributed to the development of devices capable of responding to some of the challenges resulting from demographic aging. Numerous studies have explored the use of robotic coaching solutions (RCSs) for supporting healthy behaviors in older adults and have shown their benefits regarding the quality of life and functional independence of older adults at home. However, the use of RCSs by individuals who are potentially vulnerable raises many ethical questions. Establishing an ethical framework to guide the development, use, and evaluation practices regarding RCSs for older adults seems highly pertinent. OBJECTIVE The objective of this paper was to highlight the ethical issues related to the use of RCSs for health care purposes among older adults and to draft recommendations for researchers and health care professionals interested in using RCSs for older adults. METHODS We conducted a narrative review of the literature to identify publications including an analysis of the ethical dimension and recommendations regarding the use of RCSs for older adults. We used a qualitative analysis methodology inspired by a Health Technology Assessment model. We included all article types such as theoretical papers, research studies, and reviews dealing with ethical issues or recommendations for the implementation of these RCSs in a general population, particularly among older adults, in the health care sector and published after 2011 in either English or French. The review was performed between August and December 2021 using the PubMed, CINAHL, Embase, Scopus, Web of Science, IEEE Xplore, SpringerLink, and PsycINFO databases. Selected publications were analyzed using the European Network of Health Technology Assessment Core Model (version 3.0) around 5 ethical topics: benefit-harm balance, autonomy, privacy, justice and equity, and legislation.
RESULTS In the 25 publications analyzed, the most cited ethical concerns were the risk of accidents, lack of reliability, loss of control, risk of deception, risk of social isolation, data confidentiality, and liability in case of safety problems. Recommendations included collecting the opinion of target users, collecting their consent, and training professionals in the use of RCSs. Proper data management, anonymization, and encryption appeared to be essential to protect RCS users' personal data. CONCLUSIONS Our analysis supports the interest in using RCSs for older adults because of their potential contribution to individuals' quality of life and well-being. This analysis highlights many ethical issues linked to the use of RCSs for health-related goals. Future studies should consider the organizational consequences of the implementation of RCSs and the influence of cultural and socioeconomic specificities of the context of experimentation. We suggest implementing a scalable ethical and regulatory framework to accompany the development and implementation of RCSs for various aspects related to the technology, individual, or legal aspects.
Affiliation(s)
- Cécilia Palmier
- Maladie d'Alzheimer, Université de Paris, Paris, France
- Service de Gériatrie 1 & 2, Hôpital Broca, Assistance Publique - Hôpitaux de Paris, Paris, France
- Anne-Sophie Rigaud
- Maladie d'Alzheimer, Université de Paris, Paris, France
- Service de Gériatrie 1 & 2, Hôpital Broca, Assistance Publique - Hôpitaux de Paris, Paris, France
- Toshimi Ogawa
- Smart-Aging Research Center, Tohoku University, Sendai, Japan
- Rainer Wieching
- Institute for New Media & Information Systems, University of Siegen, Siegen, Germany
- Sébastien Dacunha
- Maladie d'Alzheimer, Université de Paris, Paris, France
- Service de Gériatrie 1 & 2, Hôpital Broca, Assistance Publique - Hôpitaux de Paris, Paris, France
- Federico Barbarossa
- Scientific Direction, Istituto Nazionale di Ricovero e Cura per Anziani, Ancona, Italy
- Vera Stara
- Scientific Direction, Istituto Nazionale di Ricovero e Cura per Anziani, Ancona, Italy
- Roberta Bevilacqua
- Scientific Direction, Istituto Nazionale di Ricovero e Cura per Anziani, Ancona, Italy
- Maribel Pino
- Maladie d'Alzheimer, Université de Paris, Paris, France
- Service de Gériatrie 1 & 2, Hôpital Broca, Assistance Publique - Hôpitaux de Paris, Paris, France
9
Siebelink NM, van Dam KN, Lukkien DRM, Boon B, Smits M, van der Poel A. Action Opportunities to Pursue Responsible Digital Care for People With Intellectual Disabilities: Qualitative Study. JMIR Ment Health 2024; 11:e48147. [PMID: 38416547] [PMCID: PMC10938230] [DOI: 10.2196/48147]
Abstract
BACKGROUND Responsible digital care refers to any intentional, systematic effort designed to increase the likelihood that a digital care technology is developed through ethical decision-making, is socially responsible, and is aligned with the values and well-being of those impacted by it. OBJECTIVE We aimed to present examples of action opportunities for (1) designing "technology"; (2) shaping the "context" of use; and (3) adjusting the behavior of "users" to guide responsible digital care for people with intellectual disabilities. METHODS Three cases were considered: (1) design of a web application to support the preparation of meals for groups of people with intellectual disabilities, (2) implementation of an app to help people with intellectual disabilities regulate their stress independently, and (3) implementation of a social robot to stimulate interaction and physical activity among people with intellectual disabilities. Overall, 26 stakeholders participated in 3 multistakeholder workshops (case 1: 10/26, 38%; case 2: 10/26, 38%; case 3: 6/26, 23%) based on the "guidance ethics approach." We identified stakeholders' values through bottom-up exploration of the experienced and expected effects of using each technology, and we formulated action opportunities for these values in the specific context of use. Qualitative data were analyzed thematically. RESULTS Overall, 232 effects, 33 values, and 156 action opportunities were collected. General and case-specific themes were identified. Important stakeholder values included quality of care, autonomy, efficiency, health, enjoyment, reliability, and privacy. Both positive and negative effects could underlie stakeholders' values and influence the development of action opportunities.
Action opportunities comprised the following: (1) technology: development of the technology (eg, user experience and customization), technology input (eg, recipes for meals, intervention options for reducing stress, and activities), and technology output (eg, storage and use of data); (2) context: guidelines, training and support, policy or agreements, and adjusting the physical environment in which the technology is used; and (3) users: integrating the technology into daily care practice, by diminishing (eg, "letting go" to increase the autonomy of people with intellectual disabilities), retaining (eg, face-to-face contact), and adding (eg, evaluation moments) certain behaviors of care professionals. CONCLUSIONS This is the first study to provide insight into responsible digital care for people with intellectual disabilities by means of bottom-up exploration of action opportunities to take account of stakeholders' values in designing technology, shaping the context of use, and adjusting the behavior of users. Although part of the findings may be generalized, case-specific insights and a complementary top-down approach (eg, predefined ethical frameworks) are essential. The findings represent a part of an ethical discourse that requires follow-up to meet the dynamism of stakeholders' values and further develop and implement action opportunities to achieve socially desirable, ethically acceptable, and sustainable digital care that improves the lives of people with intellectual disabilities.
Affiliation(s)
- Kirstin N van Dam
- Academy Het Dorp, Arnhem, Netherlands
- Tranzo, Tilburg School of Social and Behavioral Sciences, Tilburg University, Tilburg, Netherlands
- Dirk R M Lukkien
- Vilans, Utrecht, Netherlands
- Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, Netherlands
- Brigitte Boon
- Academy Het Dorp, Arnhem, Netherlands
- Tranzo, Tilburg School of Social and Behavioral Sciences, Tilburg University, Tilburg, Netherlands
- Siza, Arnhem, Netherlands
10
Li Y, Wu Y, Chen X, Chen H, Kong D, Tang H, Li S. Beyond Human Detection: A Benchmark for Detecting Common Human Posture. Sensors (Basel) 2023; 23:8061. [PMID: 37836891] [PMCID: PMC10574885] [DOI: 10.3390/s23198061]
Abstract
Human detection is the task of locating all instances of human beings present in an image, and it has a wide range of applications across various fields, including search and rescue, surveillance, and autonomous driving. The rapid advancement of computer vision and deep learning technologies has brought significant improvements in human detection. However, more advanced applications like healthcare, human-computer interaction, and scene understanding require information beyond the mere localization of humans: a deeper understanding of human behavior and state is needed to enable effective and safe interaction with humans and the environment. This study presents a comprehensive benchmark, the Common Human Postures (CHP) dataset, aimed at promoting a richer, more informative task beyond mere human detection. The dataset comprises a diverse collection of images featuring individuals in different environments, clothing, and occlusions, performing a wide range of postures and activities. The benchmark aims to advance research on this challenging task by encouraging the design of novel, precise methods tailored to it. The CHP dataset consists of 5250 human images collected from different scenes, annotated with bounding boxes for seven common human poses. Using this well-annotated dataset, we have developed two baseline detectors, CHP-YOLOF and CHP-YOLOX, building upon two identity-preserved human posture detectors: IPH-YOLOF and IPH-YOLOX. We evaluate the performance of these baseline detectors through extensive experiments. The results demonstrate that these baseline detectors effectively detect human postures on the CHP dataset. By releasing the CHP dataset, we aim to facilitate further research on human pose estimation and to attract more researchers to this challenging task.
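Detection benchmarks of this kind typically score predictions by intersection-over-union (IoU) against the annotated bounding boxes. The abstract does not give the paper's evaluation code, but the standard criterion can be sketched as follows, assuming boxes in corner-coordinate format:

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2) corner coordinates, with x1 < x2 and y1 < y2.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Under this criterion, a predicted posture box usually counts as a true positive when its IoU with a same-class ground-truth box exceeds a threshold (commonly 0.5), and per-class precision/recall follow from that matching.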
Affiliation(s)
- Haihua Tang
- Guangxi Key Laboratory of Embedded Technology and Intelligent Information Processing, College of Information Science and Engineering, Guilin University of Technology, Guilin 541006, China
11
Sun J, Dong QX, Wang SW, Zheng YB, Liu XX, Lu TS, Yuan K, Shi J, Hu B, Lu L, Han Y. Artificial intelligence in psychiatry research, diagnosis, and therapy. Asian J Psychiatr 2023; 87:103705. [PMID: 37506575] [DOI: 10.1016/j.ajp.2023.103705]
Abstract
Psychiatric disorders are now responsible for the largest proportion of the global burden of disease, and even more challenges have been seen during the COVID-19 pandemic. Artificial intelligence (AI) is commonly used to facilitate the early detection of disease, understand disease progression, and discover new treatments in the fields of both physical and mental health. The present review provides a broad overview of AI methodology and its applications in data acquisition and processing, feature extraction and characterization, psychiatric disorder classification, potential biomarker detection, real-time monitoring, and interventions in psychiatric disorders. We also comprehensively summarize AI applications with regard to the early warning, diagnosis, prognosis, and treatment of specific psychiatric disorders, including depression, schizophrenia, autism spectrum disorder, attention-deficit/hyperactivity disorder, addiction, sleep disorders, and Alzheimer's disease. The advantages and disadvantages of AI in psychiatry are clarified. We foresee a new wave of research opportunities to facilitate and improve AI technology and its long-term implications in psychiatry during and after the COVID-19 era.
Affiliation(s)
- Jie Sun
- Pain Medicine Center, Peking University Third Hospital, Beijing 100191, China; Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China
- Qun-Xi Dong
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- San-Wang Wang
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China; Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Yong-Bo Zheng
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China; Peking-Tsinghua Center for Life Sciences and PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Xiao-Xing Liu
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China
- Tang-Sheng Lu
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence Research, Peking University, Beijing 100191, China
- Kai Yuan
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China
- Jie Shi
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence Research, Peking University, Beijing 100191, China
- Bin Hu
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Lin Lu
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China; Peking-Tsinghua Center for Life Sciences and PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Ying Han
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence Research, Peking University, Beijing 100191, China
12.
Ragno L, Borboni A, Vannetti F, Amici C, Cusano N. Application of Social Robots in Healthcare: Review on Characteristics, Requirements, Technical Solutions. Sensors (Basel) 2023;23:6820. [PMID: 37571603] [PMCID: PMC10422563] [DOI: 10.3390/s23156820]
Abstract
Cyber-physical or virtual systems or devices that are capable of autonomously interacting with human or non-human agents in real environments are referred to as social robots. The primary areas of application for biomedical technology are nursing homes, hospitals, and private homes for the purpose of providing assistance to the elderly, people with disabilities, children, and medical personnel. This review examines the current state-of-the-art of social robots used in healthcare applications, with a particular emphasis on the technical characteristics and requirements of these different types of systems. Humanoid robots, companion robots, and telepresence robots are the three primary categories of devices that are identified and discussed in this article. The research looks at commercial applications, as well as scientific literature (according to the Scopus Elsevier database), patent analysis (using the Espacenet search engine), and more (searched with the Google search engine). A variety of devices are enumerated and categorized, and their respective specifications are then discussed and organized.
Affiliation(s)
- Luca Ragno
- Department of Mechanical and Industrial Engineering, Università degli Studi di Brescia, Via Branze 38, 25123 Brescia, Italy
- Alberto Borboni
- Department of Mechanical and Industrial Engineering, Università degli Studi di Brescia, Via Branze 38, 25123 Brescia, Italy
- Federica Vannetti
- IRCCS Fondazione Don Carlo Gnocchi, Via di Scandicci 269, 50143 Florence, Italy
- Cinzia Amici
- Department of Mechanical and Industrial Engineering, Università degli Studi di Brescia, Via Branze 38, 25123 Brescia, Italy
- Nicoletta Cusano
- Faculty of Political Science and Sociopsychological Dynamics, Università degli Studi Internazionali, Via Cristoforo Colombo 200, 00147 Rome, Italy
13.
Deng D, Rogers T, Naslund JA. The Role of Moderators in Facilitating and Encouraging Peer-to-Peer Support in an Online Mental Health Community: A Qualitative Exploratory Study. J Technol Behav Sci 2023;8:128-139. [PMID: 36810998] [PMCID: PMC9933803] [DOI: 10.1007/s41347-023-00302-9]
Abstract
Online peer support platforms have gained popularity as a potential way for people struggling with mental health problems to share information and provide support to each other. While these platforms can offer an open space to discuss emotionally difficult issues, unsafe or unmoderated communities can allow potential harm to users by spreading triggering content, misinformation or hostile interactions. The purpose of this study was to explore the role of moderators in these online communities, and how moderators can facilitate peer-to-peer support, while minimizing harms to users and amplifying potential benefits. Moderators of the Togetherall peer support platform were recruited to participate in qualitative interviews. The moderators, referred to as 'Wall Guides', were asked about their day-to-day responsibilities, positive and negative experiences they have witnessed on the platform and the strategies they employ when encountering problems such as lack of engagement or posting of inappropriate content. The data were then analyzed qualitatively using thematic content analysis and consensus codes were deduced and reviewed to reach final results and representative themes. In total, 20 moderators participated in this study, and described their experiences and efforts to follow a consistent and shared protocol for responding to common scenarios in the online community. Many reported the deep connections formed by the online community, the helpful and thoughtful responses that members give each other and the satisfaction of seeing progress in members' recovery. They also reported occasional aggressive, sensitive or inconsiderate comments and posts on the platform. They respond by removing or revising the hurtful post or reaching out to the affected member to maintain the 'house rules'. Lastly, many discussed strategies they use to promote engagement from members within the community and ensure each member is supported through their use of the platform.
This study sheds light on the critical role of moderators of online peer support communities, and their ability to contribute to the potential benefits of digital peer support while minimizing risks to users. The findings reported here accentuate the importance of having well-trained moderators on online peer support platforms and can guide future efforts to effectively train and supervise prospective peer support moderators. Moderators can become an active 'shaping force' and bring a cohesive culture of expressed empathy, sensitivity and care. The delivery of a healthy and safe community contrasts starkly with non-moderated online forums, which can become unhealthy and unsafe as a result.
Affiliation(s)
- Davy Deng
- Harvard Chan School of Public Health, Boston, MA, USA
- John A. Naslund
- Department of Global Health and Social Medicine, Harvard Medical School, Boston, MA, USA
14.
Lukkien DRM, Nap HH, Buimer HP, Peine A, Boon WPC, Ket JCF, Minkman MMN, Moors EHM. Toward Responsible Artificial Intelligence in Long-Term Care: A Scoping Review on Practical Approaches. Gerontologist 2023;63:155-168. [PMID: 34871399] [PMCID: PMC9872770] [DOI: 10.1093/geront/gnab180]
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) is widely positioned to become a key element of intelligent technologies used in long-term care (LTC) for older adults. The increasing relevance and adoption of AI has encouraged debate over the societal and ethical implications of introducing and scaling AI. This scoping review investigates how the design and implementation of AI technologies in LTC are addressed responsibly: so-called responsible innovation (RI). RESEARCH DESIGN AND METHODS We conducted a systematic literature search in 5 electronic databases using concepts related to LTC, AI, and RI. We then performed a descriptive and thematic analysis to map the key concepts, types of evidence, and gaps in the literature. RESULTS After reviewing 3,339 papers, 25 papers were identified that met our inclusion criteria. From this literature, we extracted 3 overarching themes: user-oriented AI innovation; framing AI as a solution to RI issues; and context-sensitivity. Our results provide an overview of measures taken and recommendations provided to address responsible AI innovation in LTC. DISCUSSION AND IMPLICATIONS The review underlines the importance of the context of use when addressing responsible AI innovation in LTC. However, limited empirical evidence actually details how responsible AI innovation is addressed in context. Therefore, we recommend expanding empirical studies on RI at the level of specific AI technologies and their local contexts of use. Also, we call for more specific frameworks for responsible AI innovation in LTC to flexibly guide researchers and innovators. Future frameworks should clearly distinguish between RI processes and outcomes.
Affiliation(s)
- Dirk R M Lukkien
- Vilans Centre of Expertise for Long-Term Care, Utrecht, The Netherlands
- Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, The Netherlands
- Henk Herman Nap
- Vilans Centre of Expertise for Long-Term Care, Utrecht, The Netherlands
- Human Technology Interaction, Eindhoven University of Technology, Eindhoven, The Netherlands
- Hendrik P Buimer
- Vilans Centre of Expertise for Long-Term Care, Utrecht, The Netherlands
- Alexander Peine
- Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, The Netherlands
- Wouter P C Boon
- Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, The Netherlands
- Mirella M N Minkman
- Vilans Centre of Expertise for Long-Term Care, Utrecht, The Netherlands
- TIAS School for Business and Society, Tilburg University, Tilburg, The Netherlands
- Ellen H M Moors
- Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, The Netherlands
15.
From Pluralistic Normative Principles to Autonomous-Agent Rules. Minds Mach (Dordr) 2022. [DOI: 10.1007/s11023-022-09614-w]
Abstract
With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural ('SLEEC') nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications of autonomous agents requires the refinement of normative principles into explicitly formulated practical rules. This paper develops a process for deriving specification rules from a set of high-level norms, thereby bridging the gap between normative principles and operational practice. This enables autonomous agents to select and execute the most normatively favourable action in the intended context premised on a range of underlying relevant normative principles. In the translation and reduction of normative principles to SLEEC rules, we present an iterative process that uncovers normative principles, addresses SLEEC concerns, identifies and resolves SLEEC conflicts, and generates both preliminary and complex normatively-relevant rules, thereby guiding the development of autonomous agents and better positioning them as normatively SLEEC-sensitive or SLEEC-compliant.
16.
Ruf E, Pauli C, Misoch S. Emotionale Reaktionen älterer Menschen gegenüber Sozial Assistiven Robotern [Emotional reactions of older people toward socially assistive robots]. Gruppe. Interaktion. Organisation. (GIO) 2022. [DOI: 10.1007/s11612-022-00641-w]
Abstract
This article in the journal Gruppe. Interaktion. Organisation. (GIO) describes the varied emotional reactions of older people to socially assistive robots (SAR) deployed in different settings. As a result of demographic change, a growing number of people of advanced age require support at home or in institutions. The use of robots for support is seen as one way of meeting these societal challenges. SAR in particular are increasingly being trialled with and deployed for older people. Systematic reviews show the positive potential of SAR for older people with regard to (socio-)psychological and physiological parameters; at the same time, the use of SAR with older people has triggered an intense ethical debate. Users' emotions toward robots are a central focus here, as they represent an important aspect of acceptance and effect. Questions related to emotional attachment to the robot are discussed especially critically. The Institute for Ageing Research (IAF) of the OST Eastern Switzerland University of Applied Sciences has conducted field tests with different SAR across different user groups and areas of application. A secondary analysis registered a broad range of emotional reactions, up to and including attachment, across the various user groups. It was shown that SAR can satisfy users' socio-emotional needs, and that rejection can occur when these needs are not taken into account. Emotional attachments must nevertheless be viewed in a differentiated way, since the use of SAR, particularly with vulnerable people, can also induce new negative feelings despite a functional attachment.
When SAR are used in practice, it is important to assess users' emotions toward the SAR at an early stage and to evaluate them with regard to possible undesirable effects, such as (overly) strong emotional attachment. The exploratory studies presented here make it possible to define exemplary fields of application with positive potential, but also to describe ethically problematic situations so that they can be avoided in the future.
17.
Fronemann N, Pollmann K, Loh W. Should my robot know what's best for me? Human–robot interaction between user experience and ethical design. AI & Society 2022. [DOI: 10.1007/s00146-021-01210-3]
Abstract
To integrate social robots in real-life contexts, it is crucial that they are accepted by the users. Acceptance is not only related to the functionality of the robot but also strongly depends on how the user experiences the interaction. Established design principles from usability and user experience research can be applied to the realm of human–robot interaction, to design robot behavior for the comfort and well-being of the user. Focusing the design on these aspects alone, however, comes with certain ethical challenges, especially regarding the user's privacy and autonomy. Based on an example scenario of human–robot interaction in elder care, this paper discusses how established design principles can be used in social robotic design. It then juxtaposes these with ethical considerations such as privacy and user autonomy. Combining user experience and ethical perspectives, we propose adjustments to the original design principles and canvass our own design recommendations for a positive and ethically acceptable social human–robot interaction design. In doing so, we show that positive user experience and ethical design may be sometimes at odds, but can be reconciled in many cases, if designers are willing to adjust and amend time-tested design principles.
18.
The Ethical Governance for the Vulnerability of Care Robots: Interactive-Distance-Oriented Flexible Design. Sustainability 2022. [DOI: 10.3390/su14042303]
Abstract
The application of caring robots is currently a widely accepted solution to the problem of aging. However, for the elderly groups who live in gregarious residences and share intelligence devices, caring robots will cause intimacy and assistance dilemmas in the relationship between humans and non-human agencies. This is an information-assisted machine setting, with resulting design ethics issues brought about by the binary values of human and machine, body and mind. The “vulnerability” in risk ethics demonstrates that the ethical problems of human institutions stem from the increase of dependence and the obstruction of intimacy, which are essentially caused by the increased degree of ethical risk exposure and the restriction of agency. Based on value-sensitive design, caring ethics and machine ethics, this paper proposes a flexible design with the interaction-distance-oriented concept, and reprograms the ethical design of caring robots with intentional distance, representational distance and interpretive distance as indicators. The main purpose is to advocate a new type of human-machine interaction relationship emphasizing diversity and physical interaction.
19.
Nickelsen NCM, Simonsen Abildgaard J. The entwinement of policy, design and care scripts: Providing alternative choice-dependency situations with care robots. Sociol Health Illn 2022;44:451-468. [PMID: 35092619] [DOI: 10.1111/1467-9566.13434]
Abstract
The use of robots to assist feeding has become important for people with an impaired arm function. Yet, despite large-scale dissemination strategies, it has proven difficult to sustain the use of this technology. This ethnographic study draws on the script approach to discuss the use of robots to assist feeding. The empirical work was done at locations in Denmark and Sweden. Drawing on document studies, interviews, observation of meals and video footage, we discuss (1) policy strategies promoting ideas such as self-reliance; (2) design visions promoting ideas such as empowerment; (3) and three scripts of care: (a) the script of choice, (b) the script of eating alone and (c) the script of eating together. We argue that scripts entwine and give rise to and prevent the use of robots. The study contributes to the script literature and the care robot literature by substantiating that care robots may generate choice-dependency situations for users. Rather than the somewhat overflowing 'self-reliance' and 'empowerment', alternative configurations of choice and dependency emerge, in which some situations fit users better than others. We conclude that although sustaining the use of feeding robots is difficult, in some cases, useful choices arise for both end-users and care providers.
Affiliation(s)
- Johan Simonsen Abildgaard
- The National Research Centre for the Working Environment, Copenhagen, Denmark
- Department of Organization, Copenhagen Business School, Frederiksberg, Denmark
20.
Wies B, Landers C, Ienca M. Digital Mental Health for Young People: A Scoping Review of Ethical Promises and Challenges. Front Digit Health 2021;3:697072. [PMID: 34713173] [PMCID: PMC8521997] [DOI: 10.3389/fdgth.2021.697072]
Abstract
Mental health disorders are complex disorders of the nervous system characterized by a behavioral or mental pattern that causes significant distress or impairment of personal functioning. Mental illness is of particular concern for younger people. The WHO estimates that around 20% of the world's children and adolescents have a mental health condition, a rate that is almost double compared to the general population. One approach toward mitigating the medical and socio-economic effects of mental health disorders is leveraging the power of digital health technology to deploy assistive, preventative, and therapeutic solutions for people in need. We define "digital mental health" as any application of digital health technology for mental health assessment, support, prevention, and treatment. However, there is only limited evidence that digital mental health tools can be successfully implemented in clinical settings. Authors have pointed to a lack of technical and medical standards for digital mental health apps, personalized neurotechnology, and assistive cognitive technology as a possible cause of suboptimal adoption and implementation in the clinical setting. Further, ethical concerns have been raised related to insufficient effectiveness, lack of adequate clinical validation, and user-centered design as well as data privacy vulnerabilities of current digital mental health products. The aim of this paper is to report on a scoping review we conducted to capture and synthesize the growing literature on the promises and ethical challenges of digital mental health for young people aged 0-25. This review seeks to survey the scope and focus of the relevant literature, identify major benefits and opportunities of ethical significance (e.g., reducing suffering and improving well-being), and provide a comprehensive mapping of the emerging ethical challenges. 
Our findings provide a comprehensive synthesis of the current literature and offer a detailed informative basis for any stakeholder involved in the development, deployment, and management of ethically-aligned digital mental health solutions for young people.
Affiliation(s)
- Marcello Ienca
- Department of Health Sciences and Technology, ETH Zurich (Swiss Federal Institute of Technology), Zurich, Switzerland
21.
Gibelli F, Ricci G, Sirignano A, Turrina S, De Leo D. The Increasing Centrality of Robotic Technology in the Context of Nursing Care: Bioethical Implications Analyzed through a Scoping Review Approach. J Healthc Eng 2021;2021:1478025. [PMID: 34493953] [PMCID: PMC8418927] [DOI: 10.1155/2021/1478025]
Abstract
At the dawn of the fourth industrial revolution, the healthcare industry is experiencing a momentous shift in the direction of increasingly pervasive technologization of care. If, up until the 2000s, imagining healthcare provided by robots was a purely futuristic fantasy, today, such a scenario is in fact a concrete reality, especially in some countries, such as Japan, where nursing care is largely delivered by assistive and social robots in both public and private healthcare settings, as well as in home care. This revolution in the context of care, already underway in many countries and destined to take place soon on a global scale, raises obvious ethical issues, related primarily to the progressive dehumanization of healthcare, a process which, moreover, has undergone an important acceleration following the outbreak of the COVID-19 pandemic, which has made it necessary to devise new systems to deliver healthcare services while minimizing interhuman contact. According to leading industry experts, nurses will be the primary users of healthcare robots in the short term. The aim of this study is to provide a general overview, through a scoping review approach, of the most relevant ethical issues that have emerged in the nursing care field in relation to the increasingly decisive role that service robots play in the provision of care. Specifically, through the adoption of the population-concept-context framework, we formulated this broad question: what are the most relevant ethical issues directly impacting clinical practice that arise in nursing care delivered by assistive and social robots? We conducted the review according to the five-step methodology outlined by Arksey and O'Malley. The first two steps, formulating the main research question and carrying out the literature search, were performed based on the population-concept-context (PCC) framework suggested by the Joanna Briggs Institute.
Starting from an initial quota of 2,328 scientific papers, we performed an initial screening through a computer system by eliminating duplicated and non-English language articles. The next step consisted of selection based on a reading of the titles and abstracts, adopting four precise exclusion criteria: articles related to a nonnursing environment, articles dealing with bioethical aspects in a marginal way, articles related to technological devices other than robots, and articles that did not treat the dynamics of human-robot relationships in depth. Of the 2,328 titles and abstracts screened, we included 14. The results of the 14 papers revealed the existence of nonnegligible difficulties in the integration of robotic systems within nursing, leading to a lively search for new theoretical ethical frameworks, in which robots can find a place; concurrent with this exploration are the frantic attempts to identify the best ethical design system applicable to robots who work alongside nurses in hospital wards. In the final part of the paper, we also proposed considerations about the Italian nursing context and the legal implications of nursing care provided by robots in light of the Italian legislative panorama. Regarding future perspectives, this paper offers insights regarding robot engagement strategies within nursing.
Affiliation(s)
- Filippo Gibelli
- Department of Diagnostics and Public Health, Section of Forensic Medicine, University of Verona, Verona, Italy
- Giovanna Ricci
- Section of Legal Medicine, School of Law, University of Camerino, Camerino, Italy
- Ascanio Sirignano
- Section of Legal Medicine, School of Law, University of Camerino, Camerino, Italy
- Stefania Turrina
- Department of Diagnostics and Public Health, Section of Forensic Medicine, University of Verona, Verona, Italy
- Domenico De Leo
- Department of Diagnostics and Public Health, Section of Forensic Medicine, University of Verona, Verona, Italy
22.
Allan K, Oren N, Hutchison J, Martin D. In search of a Goldilocks zone for credible AI. Sci Rep 2021;11:13687. [PMID: 34211064] [PMCID: PMC8249604] [DOI: 10.1038/s41598-021-93109-8]
Abstract
If artificial intelligence (AI) is to help solve individual, societal and global problems, humans should neither underestimate nor overestimate its trustworthiness. Situated in-between these two extremes is an ideal 'Goldilocks' zone of credibility. But what will keep trust in this zone? We hypothesise that this role ultimately falls to the social cognition mechanisms which adaptively regulate conformity between humans. This novel hypothesis predicts that human-like functional biases in conformity should occur during interactions with AI. We examined multiple tests of this prediction using a collaborative remembering paradigm, where participants viewed household scenes for 30 s vs. 2 min, then saw 2-alternative forced-choice decisions about scene content originating either from AI- or human-sources. We manipulated the credibility of different sources (Experiment 1) and, from a single source, the estimated-likelihood (Experiment 2) and objective accuracy (Experiment 3) of specific decisions. As predicted, each manipulation produced functional biases for AI-sources mirroring those found for human-sources. Participants conformed more to higher credibility sources, and higher-likelihood or more objectively accurate decisions, becoming increasingly sensitive to source accuracy when their own capability was reduced. These findings support the hypothesised role of social cognition in regulating AI's influence, raising important implications and new directions for research on human-AI interaction.
Affiliation(s)
- Kevin Allan
- School of Psychology, University of Aberdeen, Aberdeen, AB24 2UB, UK
- Nir Oren
- School of Natural and Computing Sciences, University of Aberdeen, Aberdeen, AB24 2UB, UK
- Jacqui Hutchison
- School of Psychology, University of Aberdeen, Aberdeen, AB24 2UB, UK
- Douglas Martin
- School of Psychology, University of Aberdeen, Aberdeen, AB24 2UB, UK
23.
Howick J, Morley J, Floridi L. An Empathy Imitation Game: Empathy Turing Test for Care- and Chat-bots. Minds Mach (Dordr) 2021. [DOI: 10.1007/s11023-021-09555-w]