1
Yamamoto Y. Suggestive answers strategy in human-chatbot interaction: a route to engaged critical decision making. Front Psychol 2024; 15:1382234. PMID: 38605834; PMCID: PMC11007170; DOI: 10.3389/fpsyg.2024.1382234.
Abstract
In this study, we proposed a novel chatbot interaction strategy based on the suggestive ending of answers. The strategy is inspired by the cliffhanger narrative technique, often used in television series, which ends a story without a resolution to spark the audience's curiosity about what will happen next. Common chatbots provide relevant and comprehensive answers to users' questions. In contrast, a chatbot using our proposed strategy ends its answers with hints that may trigger users' interest. The suggestive ending strategy aims to stimulate users' inquisitiveness for critical decision-making, drawing on the psychological phenomenon whereby people often feel urged to finish tasks they have started but left incomplete. We demonstrated the implications of our strategy in an online user study with 300 participants, who used chatbots to perform three decision-making tasks. We adopted a between-subjects factorial experimental design and compared the following UIs: (1) a plain chatbot, which provides a generated answer when participants ask a question; (2) an expositive chatbot, which provides a generated answer together with short summaries of a positive and a negative person's opinion on that answer; (3) a suggestive chatbot, which provides a generated answer that ends by suggesting that people hold positive and negative opinions on it. We found that users of the suggestive chatbot were inclined to ask the bot more questions, engage in prolonged decision-making and information-seeking actions, and formulate their opinions from various perspectives. These behaviors differed from those of users of the plain and expositive chatbots.
Affiliation(s)
- Yusuke Yamamoto
- School of Data Science, Nagoya City University, Nagoya, Japan
2
Guingrich RE, Graziano MSA. Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction. Front Psychol 2024; 15:1322781. PMID: 38605842; PMCID: PMC11008604; DOI: 10.3389/fpsyg.2024.1322781.
Abstract
The question of whether artificial intelligence (AI) can be considered conscious, and therefore should be evaluated through a moral lens, has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that users can consider AI conscious during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, how people treat AI appears to carry over into how they treat other people, by activating schemas congruent with those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social-actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans, drawing on the literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part of the paper, we detail how the mechanism of schema activation can allow us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, and thereby activating congruent mind schemas during interaction, drives behaviors toward and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is also worth considering, regardless of AI's inherent conscious or moral status.
Affiliation(s)
- Rose E. Guingrich
- Department of Psychology, Princeton University, Princeton, NJ, United States
- Princeton School of Public and International Affairs, Princeton University, Princeton, NJ, United States
- Michael S. A. Graziano
- Department of Psychology, Princeton University, Princeton, NJ, United States
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States
3
Frantzidis CA, Peristeri E, Andreou M, Cristea AI. Editorial: New challenges and future perspectives in cognitive neuroscience. Front Hum Neurosci 2024; 18:1390788. PMID: 38524922; PMCID: PMC10957546; DOI: 10.3389/fnhum.2024.1390788.
Affiliation(s)
- Eleni Peristeri
- Department of English Studies, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Maria Andreou
- Department of Speech and Language Therapy, University of Peloponnese, Kalamata, Greece
4
Li MD, Little BP. Appropriate Reliance on Artificial Intelligence in Radiology Education. J Am Coll Radiol 2023; 20:1126-1130. PMID: 37392983; DOI: 10.1016/j.jacr.2023.04.019.
Abstract
Users of artificial intelligence (AI) can become overreliant on AI, negatively affecting the performance of human-AI teams. For a future in which radiologists use interpretive AI tools routinely in clinical practice, radiology education will need to evolve to provide radiologists with the skills to use AI appropriately and wisely. In this work, we examine how overreliance on AI may develop in radiology trainees and explore how this problem can be mitigated, including through the use of AI-augmented education. Radiology trainees will still need to develop the perceptual skills and mastery of knowledge fundamental to radiology to use AI safely. We propose a framework for radiology trainees to use AI tools with appropriate reliance, drawing on lessons from human-AI interactions research.
Affiliation(s)
- Matthew D Li
- Department of Radiology and Diagnostic Imaging, Faculty of Medicine & Dentistry, University of Alberta, Edmonton, Alberta, Canada.
- Brent P Little
- Mayo Clinic College of Medicine and Science, Department of Radiology, Division of Cardiothoracic Imaging, Mayo Clinic Florida, Florida; Committee Member, ACR Appropriateness Criteria Thoracic Imaging
5
Schreibelmayr S, Moradbakhti L, Mara M. First impressions of a financial AI assistant: differences between high trust and low trust users. Front Artif Intell 2023; 6:1241290. PMID: 37854078; PMCID: PMC10579608; DOI: 10.3389/frai.2023.1241290.
Abstract
Calibrating appropriate trust of non-expert users in artificial intelligence (AI) systems is a challenging yet crucial task. To align subjective levels of trust with the objective trustworthiness of a system, users need information about its strengths and weaknesses. The specific explanations that help individuals avoid over- or under-trust may vary depending on their initial perceptions of the system. In an online study, 127 participants watched a video of a financial AI assistant with varying degrees of decision agency. They generated 358 spontaneous text descriptions of the system and completed standard questionnaires from the Trust in Automation and Technology Acceptance literature (including perceived system competence, understandability, human-likeness, uncanniness, intention of developers, intention to use, and trust). Comparisons between a high trust and a low trust user group revealed significant differences in both open-ended and closed-ended answers. While high trust users characterized the AI assistant as more useful, competent, understandable, and humanlike, low trust users highlighted the system's uncanniness and potential dangers. Manipulating the AI assistant's agency had no influence on trust or intention to use. These findings are relevant for effective communication about AI and trust calibration of users who differ in their initial levels of trust.
Affiliation(s)
- Martina Mara
- Robopsychology Lab, Linz Institute of Technology, Johannes Kepler University Linz, Linz, Austria
6
Tahri Sqalli M, Aslonov B, Gafurov M, Nurmatov S. Humanizing AI in medical training: ethical framework for responsible design. Front Artif Intell 2023; 6:1189914. PMID: 37261331; PMCID: PMC10227566; DOI: 10.3389/frai.2023.1189914.
Abstract
The increasing use of artificial intelligence (AI) in healthcare has raised numerous ethical considerations that call for reflection. Humanizing AI in medical training is crucial to ensure that the design and deployment of its algorithms align with ethical principles and promote equitable healthcare outcomes for both medical trainees and patients. This perspective article provides an ethical framework for responsibly designing AI systems in medical training, drawing on our own past research in electrocardiogram interpretation training and e-health wearable devices. The article proposes five pillars of responsible design: transparency, fairness and justice, safety and wellbeing, accountability, and collaboration. The transparency pillar highlights the crucial role of maintaining the explainability of AI algorithms, while the fairness and justice pillar emphasizes addressing biases in healthcare data and designing models that prioritize equitable medical training outcomes. The safety and wellbeing pillar emphasizes the need to prioritize patient safety and wellbeing in AI model design, whether for training or simulation purposes, and the accountability pillar calls for establishing clear lines of responsibility and liability for AI-derived decisions. Finally, the collaboration pillar emphasizes interdisciplinary collaboration among stakeholders, including physicians, data scientists, patients, and educators. The proposed framework thus provides a practical guide for designing and deploying AI in medicine generally, and in medical training specifically, in a responsible and ethical manner.
Affiliation(s)
- Mohammed Tahri Sqalli
- Department of Economics, School of Foreign Services, Georgetown University in Qatar, Doha, Qatar
- Begali Aslonov
- Department of Control and Computer Engineering, Politecnico di Torino, Turin, Italy
- Mukhammadjon Gafurov
- Department of Business Administration, Carnegie Mellon University in Qatar, Doha, Qatar
- Shokhrukhbek Nurmatov
- Department of Economics, School of Foreign Services, Georgetown University in Qatar, Doha, Qatar
7
Jungwirth D, Haluza D. Artificial Intelligence and Public Health: An Exploratory Study. Int J Environ Res Public Health 2023; 20:4541. PMID: 36901550; PMCID: PMC10002031; DOI: 10.3390/ijerph20054541.
Abstract
Artificial intelligence (AI) has the potential to revolutionize research by automating data analysis, generating new insights, and supporting the discovery of new knowledge. In this exploratory study, we gathered the top 10 contribution areas of AI toward public health. We utilized the "text-davinci-003" model of GPT-3 with OpenAI playground default parameters. The model had been trained on the largest training dataset of any AI, limited to a cut-off date in 2021. This study aimed to test the ability of GPT-3 to advance public health and to explore the feasibility of using AI as a scientific co-author. We asked the AI for structured input, including scientific quotations, and reviewed the responses for plausibility. We found that GPT-3 was able to assemble, summarize, and generate plausible text blocks relevant to public health concerns, elucidating valuable areas of application for itself. However, most quotations were purely invented by GPT-3 and thus invalid. Our research showed that AI can contribute to public health research as a team member. In accordance with authorship guidelines, the AI was ultimately not listed as a co-author, as would be done with a human researcher. We conclude that good scientific practice must also be followed for AI contributions, and that a broad scientific discourse on AI contributions is needed.
8
Mavragani A, Horstmanshof L. Human Decision-making in an Artificial Intelligence-Driven Future in Health: Protocol for Comparative Analysis and Simulation. JMIR Res Protoc 2022; 11:e42353. PMID: 36460486; PMCID: PMC9823572; DOI: 10.2196/42353.
Abstract
BACKGROUND Health care can broadly be divided into two domains: clinical health services and complex health services (ie, nonclinical health services such as health policy and health regulation). Artificial intelligence (AI) is transforming both of these areas. Currently, humans are leaders, managers, and decision makers in complex health services. However, with the rise of AI, the time has come to ask whether humans will continue to have meaningful decision-making roles in this domain. Further, rationality has long dominated this space. What role will intuition play? OBJECTIVE The aim is to establish a protocol of protocols for the proposed research, which seeks to explore whether humans will continue in meaningful decision-making roles in complex health services in an AI-driven future. METHODS This paper describes a set of protocols for the proposed research, which is designed as a 4-step project across two phases, and presents the protocol for each step. The first step is a scoping review to identify and map the human attributes that influence decision-making in complex health services, as reported in the literature. The second step is a scoping review to identify and map the AI attributes that influence decision-making in complex health services, as reported in the literature. The third step is a comparative analysis: a narrative comparison followed by a mathematical comparison of the two sets of attributes (human and AI). This analysis will investigate whether humans have one or more unique attributes that could influence decision-making for the better. The fourth step is a simulation of a nonclinical environment in health regulation and policy into which virtual human and AI decision makers (agents) are introduced. The virtual human and AI agents will be based on the attributes identified in the scoping reviews. The simulation will explore, observe, and document how humans interact with AI, and whether humans are likely to compete, cooperate, or converge with AI. RESULTS The results will be presented in tabular form, in visually intuitive formats, and, in the case of the simulation, in multimedia formats. CONCLUSIONS This paper provides a road map for the proposed research. It also provides an example of a protocol of protocols for methods used in complex health research. While there are established guidelines for a priori protocols for scoping reviews, there is a paucity of guidance on establishing a protocol of protocols. This paper takes the first step toward building a scaffolding for future guidelines in this regard. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/42353.
Affiliation(s)
- Louise Horstmanshof
- Faculty of Health, Southern Cross University, Lismore, New South Wales, Australia
9
Jansson M, Ohtonen P, Alalääkkölä T, Heikkinen J, Mäkiniemi M, Lahtinen S, Lahtela R, Ahonen M, Jämsä S, Liisantti J. Artificial intelligence-enhanced care pathway planning and scheduling system: content validity assessment of required functionalities. BMC Health Serv Res 2022; 22:1513. PMID: 36510176; PMCID: PMC9746075; DOI: 10.1186/s12913-022-08780-y.
Abstract
BACKGROUND Artificial intelligence (AI) and machine learning are transforming the optimization of clinical and patient workflows in healthcare. Research is needed to specify the clinical requirements for AI-enhanced care pathway planning and scheduling systems in order to improve human-AI interaction in machine learning applications. The aim of this study was to assess content validity and to prioritize the most relevant functionalities of an AI-enhanced care pathway planning and scheduling system. METHODS A prospective content validity assessment was conducted in five university hospitals in three different countries using an electronic survey. The content of the survey was formed from clinical requirements, which were formulated into generic statements of required AI functionalities. The relevancy of each statement was evaluated using a content validity index. In addition, weighted ranking points were calculated to prioritize the most relevant functionalities of an AI-enhanced care pathway planning and scheduling system. RESULTS A total of 50 responses were received from clinical professionals in three European countries. The item-level content validity index ranged from 0.42 to 0.96, and 45% of the generic statements were considered good. The highest-ranked functionalities for an AI-enhanced care pathway planning and scheduling system were related to risk assessment, patient profiling, and resources. The highest-ranked functionalities for the user interface were related to the explainability of machine learning models. CONCLUSION This study provided a comprehensive list of functionalities that can be used to design future AI-enhanced solutions and to evaluate the designed solutions against requirements. Overall, the statements concerning AI functionalities were considered only somewhat relevant, which might be due to the low level of organizational readiness for AI in healthcare.
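For readers unfamiliar with the measure used above, the item-level content validity index (I-CVI) is conventionally computed as the proportion of experts who rate an item as relevant (3 or 4 on a 4-point relevance scale). A minimal sketch; the function name and the sample ratings are illustrative, not taken from the study:

```python
def item_cvi(ratings, relevant_threshold=3):
    """Item-level content validity index: the proportion of experts
    rating the item as relevant (>= threshold on a 4-point scale)."""
    relevant = sum(1 for r in ratings if r >= relevant_threshold)
    return relevant / len(ratings)

# Hypothetical ratings from 10 experts on a 4-point relevance scale
print(item_cvi([4, 3, 3, 4, 2, 1, 3, 4, 4, 2]))  # → 0.7
```

Under this convention, the reported range of 0.42 to 0.96 would correspond to statements rated relevant by roughly 42% to 96% of the 50 respondents.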
Affiliation(s)
- Miia Jansson
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Pasi Ohtonen
- Research Unit of Surgery, Anesthesia and Intensive Care, Oulu University Hospital, University of Oulu, Oulu, Finland
- Timo Alalääkkölä
- Testing and Innovations, Oulu University Hospital, Oulu, Finland
- Juuso Heikkinen
- Division of Orthopedic and Trauma Surgery, Department of Surgery, Medical Research Center, Oulu University Hospital, Oulu, Finland
- Minna Mäkiniemi
- Oulu University Hospital, Oulu, Finland
- Sanna Lahtinen
- Department of Anesthesiology, Oulu University Hospital, Oulu, Finland; MRC Oulu, Research Group of Anesthesiology, Oulu, Finland
- Riikka Lahtela
- Department of Anesthesiology, Oulu University Hospital, Oulu, Finland
- Merja Ahonen
- Department of Anesthesiology, Oulu University Hospital, Oulu, Finland; MRC Oulu, Research Group of Anesthesiology, Oulu, Finland
- Sirpa Jämsä
- Sense Organ Diseases Centre, Oulu University Hospital, Oulu, Finland
- Janne Liisantti
- Department of Anesthesiology, Oulu University Hospital, Oulu, Finland; MRC Oulu, Research Group of Anesthesiology, Oulu, Finland
10
Wang R, Fu G, Li J, Pei Y. Diagnosis after zooming in: A multilabel classification model by imitating doctor reading habits to diagnose brain diseases. Med Phys 2022; 49:7054-7070. PMID: 35880443; DOI: 10.1002/mp.15871.
Abstract
PURPOSE Computed tomography (CT) has the advantages of being low cost and noninvasive and is a primary diagnostic method for brain diseases. However, diagnosing CT images accurately and comprehensively is a challenge for junior radiologists, so there is a need for a system that can help doctors diagnose and that provides an explanation of its predictions. Despite the success of deep learning algorithms in medical image analysis, the task of brain disease classification still faces challenges: researchers have paid little attention to the burden of complex manual labeling and to the incompleteness of prediction explanations. More importantly, most studies only measure the performance of the algorithm and do not measure its effectiveness in doctors' actual diagnostic work. METHODS In this paper, we propose a model called DrCT2 that can detect brain diseases without using image-level labels and that provides a more comprehensive explanation at both the slice and sequence levels. The model achieves reliable performance by imitating the reading habits of human experts: targeted scaling of primary images from the full slice scans and observation of suspicious lesions for diagnosis. We evaluated our model on two open-access data sets: CQ500 and the RSNA Intracranial Hemorrhage Detection Challenge. In addition, we defined three tasks to comprehensively evaluate model interpretability by measuring whether the algorithm can select key images with lesions. To verify the algorithm from the perspective of practical application, three junior radiologists were invited to participate in experiments comparing various aspects of diagnosis before and after human-computer cooperation. RESULTS The method achieved F1-scores of 0.9370 on CQ500 and 0.8700 on the RSNA data set. The results show that our model has good interpretability while maintaining good performance. The radiologist evaluation experiments showed that our model can effectively improve diagnostic accuracy and efficiency. CONCLUSIONS We proposed a model that can simultaneously detect multiple brain diseases. The report generated by the model can help doctors avoid missed diagnoses, and it has good clinical application value.
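As a brief aside on the metric reported above, the F1-score is the harmonic mean of precision and recall, computed from true positive, false positive, and false negative counts. A minimal sketch; the counts below are illustrative, not taken from the study:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 90 true positives, 10 false positives, 5 false negatives
print(round(f1_score(90, 10, 5), 4))  # → 0.9231
```

Because F1 balances precision against recall, scores such as 0.9370 indicate that the model misses few lesions while raising few false alarms, which matters for the missed-diagnosis use case the authors emphasize.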
Affiliation(s)
- Ruiqian Wang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Guanghui Fu
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, F-75013, Paris, France
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu, Japan
11
Tahri Sqalli M, Al-Thani D. Evolution of Wearable Devices in Health Coaching: Challenges and Opportunities. Front Digit Health 2021; 2:545646. PMID: 34713031; PMCID: PMC8521831; DOI: 10.3389/fdgth.2020.545646.
Abstract
Wearable devices hold enormous potential to contribute to improved global health. The availability, non-invasiveness, and affordability of these systems make them promising candidates to transform the standard of care for health coaching, and they are now considered versatile coaching systems. Patients who wish to improve their health and well-being turn to wearables to track and quantify their improvement. The timely field of "wearable devices as health coaching enablers" will inevitably see prominent growth in the coming years, stemming from both the computing and medical fields. In this perspective article, we list the challenges as well as the opportunities of this newly born field from an interdisciplinary perspective, focusing on both the computing and healthcare viewpoints. We also chart guidelines for healthcare researchers who are willing to engage with the computing field to harness the benefits of wearable devices.
Affiliation(s)
- Mohammed Tahri Sqalli
- Information and Computing Technology Division, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Dena Al-Thani
- Information and Computing Technology Division, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar