1. Lin HL, Liao LL, Wang YN, Chang LC. Attitude and utilization of ChatGPT among registered nurses: A cross-sectional study. Int Nurs Rev 2025;72:e13012. [PMID: 38979771] [DOI: 10.1111/inr.13012]
Abstract
AIM This study explores the factors influencing attitudes and behaviors toward the use of ChatGPT, based on the Technology Acceptance Model, among registered nurses in Taiwan. BACKGROUND The complexity of medical services and nursing shortages increase workloads. ChatGPT swiftly answers medical questions, provides clinical guidelines, and assists with patient information management, thereby improving nursing efficiency. INTRODUCTION To facilitate the development of effective ChatGPT training programs, it is essential to examine registered nurses' attitudes toward and utilization of ChatGPT across diverse workplace settings. METHODS An anonymous online survey was used to collect data from over 1000 registered nurses recruited through social media platforms between November 2023 and January 2024. Descriptive statistics and multiple linear regression analyses were conducted for data analysis. RESULTS Among respondents, some were unfamiliar with ChatGPT, while others had used it before, with higher usage among males, more highly educated individuals, experienced nurses, and supervisors. Gender and work settings influenced perceived risks, and those familiar with ChatGPT recognized its social impact. Perceived risk and usefulness significantly influenced its adoption. DISCUSSION Nurses' attitudes toward ChatGPT vary with gender, education, experience, and role. Positive perceptions emphasize its usefulness, while concerns about risk affect adoption. The insignificant role of perceived ease of use highlights ChatGPT's user-friendly nature. CONCLUSION Over half of the surveyed nurses had used or were familiar with ChatGPT and showed positive attitudes toward its use. Establishing rigorous guidelines to enhance their interaction with ChatGPT is crucial for future training. IMPLICATIONS FOR NURSING AND HEALTH POLICY Nurse managers should understand registered nurses' attitudes toward ChatGPT and integrate it into in-service education with tailored support and training, including appropriate prompt formulation and advanced decision-making, to prevent misuse.
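As an illustration of the analytic approach described in this abstract (multiple linear regression of Technology Acceptance Model constructs on ChatGPT use), the following sketch shows how such a model could be fitted in Python with statsmodels. The data, variable names, and coefficients are invented placeholders, not the study's material.

```python
# Hypothetical sketch of a TAM-style multiple linear regression;
# simulated data only, not the cited study's dataset or code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300  # placeholder sample size

# Simulated 5-point Likert-style construct scores (assumed variable names)
df = pd.DataFrame({
    "perceived_usefulness": rng.integers(1, 6, n).astype(float),
    "perceived_ease_of_use": rng.integers(1, 6, n).astype(float),
    "perceived_risk": rng.integers(1, 6, n).astype(float),
})

# Simulated outcome: self-reported intention to use ChatGPT
df["usage_intention"] = (
    0.5 * df["perceived_usefulness"]
    - 0.3 * df["perceived_risk"]
    + rng.normal(0.0, 1.0, n)
)

X = sm.add_constant(df[["perceived_usefulness",
                        "perceived_ease_of_use",
                        "perceived_risk"]])
model = sm.OLS(df["usage_intention"], X).fit()
print(model.summary())  # coefficients show each construct's association
```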
Affiliation(s)
- Hui-Ling Lin
- Department of Nursing, Linkou Branch, Chang Gung Memorial Hospital, Taoyuan, Taiwan, ROC
- School of Nursing, College of Medicine, Chang Gung University, Taoyuan, Taiwan, ROC
- School of Nursing, Chang Gung University of Science and Technology, Gui-Shan Town, Taoyuan, Taiwan, ROC
- Taipei Medical University, Taipei, Taiwan
- Li-Ling Liao
- Department of Public Health, College of Health Science, Kaohsiung Medical University, Kaohsiung City, Taiwan
- Department of Medical Research, Kaohsiung Medical University Hospital, Kaohsiung City, Taiwan
- Ya-Ni Wang
- School of Nursing, College of Medicine, Chang Gung University, Taoyuan, Taiwan, ROC
- Li-Chun Chang
- Department of Nursing, Linkou Branch, Chang Gung Memorial Hospital, Taoyuan, Taiwan, ROC
- School of Nursing, College of Medicine, Chang Gung University, Taoyuan, Taiwan, ROC
- School of Nursing, Chang Gung University of Science and Technology, Gui-Shan Town, Taoyuan, Taiwan, ROC
2. Kunze KN. Generative Artificial Intelligence and Musculoskeletal Health Care. HSS J 2025:15563316251335334. [PMID: 40297632] [PMCID: PMC12033169] [DOI: 10.1177/15563316251335334]
Abstract
Generative artificial intelligence (AI) comprises a class of AI models that generate synthetic outputs based on learning acquired from the dataset used to train the model. This means that they can create entirely new outputs that resemble real-world data despite not being explicitly instructed to do so during training. Advances in technological capabilities, computing power, and data availability have given rise to more advanced and versatile generative models, including diffusion models and large language models, that hold promise in healthcare. In musculoskeletal healthcare, generative AI applications may involve the enhancement of images, generation of audio and video, automation of clinical documentation and administrative tasks, use of surgical planning aids, augmentation of treatment decisions, and personalization of patient communication. Limitations of the use of generative AI in healthcare include hallucinations, model bias, ethical considerations during clinical use, knowledge gaps, and lack of transparency. This review introduces critical concepts of generative AI, presents clinical applications relevant to musculoskeletal healthcare that are in development, and highlights limitations preventing deployment in clinical settings.
Affiliation(s)
- Kyle N. Kunze
- Department of Orthopedic Surgery, Hospital for Special Surgery, New York, NY, USA
3. Moskovich L, Rozani V. Health profession students' perceptions of ChatGPT in healthcare and education: insights from a mixed-methods study. BMC Med Educ 2025;25:98. [PMID: 39833868] [PMCID: PMC11748239] [DOI: 10.1186/s12909-025-06702-0]
Abstract
OBJECTIVE The aim of this study was to investigate the perceptions of health profession students regarding ChatGPT use and the potential impact of integrating ChatGPT in healthcare and education. BACKGROUND Artificial intelligence is increasingly utilized in medical education and clinical profession training. However, since its introduction, ChatGPT has remained relatively unexplored in terms of health profession students' acceptance of its use in education and practice. DESIGN This study employed a mixed-methods approach, using a web-based survey. METHODS The study involved a convenience sample recruited through various methods, including Faculty of Medicine announcements, social media, and snowball sampling, during the second semester (March to June 2023). Data were collected using a structured questionnaire with closed-ended questions and three open-ended questions. The final sample comprised 217 undergraduate health profession students, including 73 (33.6%) nursing students, 65 (30.0%) medical students, and 79 (36.4%) occupational therapy, physiotherapy, and speech therapy students. RESULTS Among the surveyed students, 86.2% were familiar with ChatGPT, with generally positive perceptions as reflected by a mean score of 4.04 (SD = 0.62) on a scale of 1 to 5. Positive feedback was particularly noted with respect to ChatGPT's role in information retrieval and summarization. The qualitative data revealed three main themes: experiences with ChatGPT, its impact on the quality of healthcare, and its integration into the curriculum. The findings highlight benefits such as serving as a convenient tool for accessing information, reducing human errors, and fostering innovative learning approaches. However, they also underscore areas of concern, including ethical considerations, challenges in fostering critical thinking, and issues related to verification. The absence of significant differences between the fields of study indicates consistent perceptions across nursing, medicine, and other health profession students. CONCLUSIONS Our findings underscore the necessity for continuous refinement to enhance ChatGPT's accuracy, reliability, and alignment with the diverse educational needs of health professions. These insights not only deepen our understanding of student perceptions of ChatGPT in healthcare education but also have significant implications for the future integration of AI in health profession practice. The study emphasizes the importance of a careful balance between leveraging the benefits of AI tools and addressing ethical and pedagogical concerns.
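As a minimal sketch of the between-group comparison reported here (perception scores compared across nursing, medical, and allied-health students), the code below runs a Kruskal-Wallis test on simulated 1-5 scores. Group sizes mirror the abstract, but all values are invented and this is not the study's analysis.

```python
# Illustrative comparison of perception scores (1-5 scale) across three
# hypothetical student groups; simulated values, not the study's data.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)

nursing = np.clip(rng.normal(4.0, 0.6, 73), 1, 5)
medicine = np.clip(rng.normal(4.1, 0.6, 65), 1, 5)
allied_health = np.clip(rng.normal(4.0, 0.6, 79), 1, 5)

for name, group in [("Nursing", nursing), ("Medicine", medicine),
                    ("Allied health", allied_health)]:
    print(f"{name}: mean = {group.mean():.2f}, SD = {group.std(ddof=1):.2f}")

# Nonparametric test for a difference between the three groups
stat, p = kruskal(nursing, medicine, allied_health)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")
```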
Affiliation(s)
- Lior Moskovich
- Faculty of Medical and Health Sciences, Tel Aviv University, Tel Aviv, Israel
- Violetta Rozani
- Department of Nursing Sciences, Faculty of Medical and Health Sciences, The Stanley Steyer School of Health Professions, Tel Aviv University, Tel Aviv, Israel.
4. Lopez-Gonzalez R, Sanchez-Cordero S, Pujol-Gebellí J, Castellvi J. Evaluation of the Impact of ChatGPT on the Selection of Surgical Technique in Bariatric Surgery. Obes Surg 2025;35:19-24. [PMID: 38760650] [DOI: 10.1007/s11695-024-07279-1]
Abstract
PURPOSE With the growing interest in artificial intelligence (AI) applications in medicine, this study explores ChatGPT's potential to influence surgical technique selection in metabolic and bariatric surgery (MBS), contrasting AI recommendations with established clinical guidelines and expert consensus. MATERIALS AND METHODS In a single-center retrospective analysis, the study involved 161 patients who underwent MBS between January 2022 and December 2023. ChatGPT-4 was used to analyze patient data, including demographics, pathological history, and BMI, to recommend the most suitable surgical technique. These AI recommendations were then compared with the hospital's algorithm-based decisions. RESULTS ChatGPT recommended Roux-en-Y gastric bypass in over half of the cases. However, a significant difference was observed between AI suggestions and the surgical techniques actually applied, with only a 34.16% match rate. Further analysis revealed no significant correlation between ChatGPT recommendations and the established surgical algorithm. CONCLUSION Despite ChatGPT's ability to process and analyze large datasets, its recommendations for MBS techniques do not align closely with those determined by expert surgical teams using an algorithm with a high success rate. Consequently, the study concludes that ChatGPT-4 should not replace expert consultation in selecting MBS techniques.
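To illustrate the kind of comparison reported here (a 34.16% match between ChatGPT's suggestions and the techniques actually chosen), the sketch below computes a raw concordance rate and Cohen's kappa for two lists of categorical recommendations. Labels and values are invented; this is not the study's analysis code.

```python
# Illustrative agreement check between two sets of categorical
# surgical-technique recommendations; hypothetical labels and data.
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder decisions for a handful of fictitious patients
hospital_algorithm = ["SG", "RYGB", "SG", "OAGB", "SG", "RYGB", "OAGB", "SG"]
chatgpt_suggestion = ["RYGB", "RYGB", "RYGB", "SG", "SG", "RYGB", "RYGB", "RYGB"]

match_rate = accuracy_score(hospital_algorithm, chatgpt_suggestion)  # raw concordance
kappa = cohen_kappa_score(hospital_algorithm, chatgpt_suggestion)    # chance-corrected

print(f"Match rate: {match_rate:.2%}")
print(f"Cohen's kappa: {kappa:.2f}")
```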
Affiliation(s)
- Ruth Lopez-Gonzalez
- General and Digestive Surgery, Moises Broggi University Hospital, C Oriol Martorell 12, 08970, Barcelona, Spain.
- Sergi Sanchez-Cordero
- General and Digestive Surgery, Moises Broggi University Hospital, C Oriol Martorell 12, 08970, Barcelona, Spain
- Jordi Pujol-Gebellí
- General and Digestive Surgery, Moises Broggi University Hospital, C Oriol Martorell 12, 08970, Barcelona, Spain
- Jordi Castellvi
- General and Digestive Surgery, Moises Broggi University Hospital, C Oriol Martorell 12, 08970, Barcelona, Spain
5. Ah-Yan C, Boissonnault È, Boudier-Revéret M, Mares C. Impact of artificial intelligence in managing musculoskeletal pathologies in physiatry: a qualitative observational study evaluating the potential use of ChatGPT versus Copilot for patient information and clinical advice on low back pain. J Yeungnam Med Sci 2024;42:11. [PMID: 39610054] [PMCID: PMC11812099] [DOI: 10.12701/jyms.2024.01151]
Abstract
BACKGROUND The self-management of low back pain (LBP) through patient information interventions offers significant benefits in terms of cost, reduced work absenteeism, and overall healthcare utilization. Using a large language model (LLM), such as ChatGPT (OpenAI) or Copilot (Microsoft), could potentially enhance these outcomes further. Thus, it is important to evaluate the LLMs ChatGPT and Copilot in providing medical advice for LBP and to assess the impact of clinical context on the quality of their responses. METHODS This was a qualitative comparative observational study conducted within the Department of Physical Medicine and Rehabilitation of the University of Montreal in Montreal, QC, Canada. ChatGPT and Copilot were used to answer 27 common questions related to LBP, with and without a specific clinical context. The responses were evaluated by physiatrists for validity, safety, and usefulness using a 4-point Likert scale (4 being most favorable). RESULTS Both ChatGPT and Copilot demonstrated good performance across all measures. Validity scores were 3.33 for ChatGPT and 3.18 for Copilot, safety scores were 3.19 for ChatGPT and 3.13 for Copilot, and usefulness scores were 3.60 for ChatGPT and 3.57 for Copilot. The inclusion of clinical context did not significantly change the results. CONCLUSION LLMs such as ChatGPT and Copilot can provide reliable medical advice on LBP, irrespective of the detailed clinical context, supporting their potential to aid in patient self-management.
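A paired comparison of the two models' ratings, as in the evaluation above, can be sketched with per-question Likert scores and a Wilcoxon signed-rank test. The scores below are simulated around the reported means and are purely illustrative, not the study's evaluations.

```python
# Hypothetical paired comparison of per-question ratings (1-4 scale)
# for two LLMs; simulated data, not the study's evaluations.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
n_questions = 27  # number of LBP questions, as in the abstract

chatgpt_scores = np.clip(rng.normal(3.3, 0.4, n_questions), 1, 4)
copilot_scores = np.clip(rng.normal(3.2, 0.4, n_questions), 1, 4)

print("ChatGPT mean:", round(chatgpt_scores.mean(), 2))
print("Copilot mean:", round(copilot_scores.mean(), 2))

# Paired nonparametric test across the 27 questions
stat, p = wilcoxon(chatgpt_scores, copilot_scores)
print(f"Wilcoxon statistic = {stat:.2f}, p = {p:.3f}")
```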
Affiliation(s)
- Christophe Ah-Yan
- Department of Physical Medicine and Rehabilitation, University of Montreal, Montreal, QC, Canada
- Ève Boissonnault
- Department of Physical Medicine and Rehabilitation, Centre Hospitalier de l’Université de Montréal, Montreal, QC, Canada
- Mathieu Boudier-Revéret
- Department of Physical Medicine and Rehabilitation, Centre Hospitalier de l’Université de Montréal, Montreal, QC, Canada
- Christopher Mares
- Department of Physical Medicine and Rehabilitation, Centre Hospitalier de l’Université de Montréal, Montreal, QC, Canada
6. Kisvarday S, Yan A, Yarahuan J, Kats DJ, Ray M, Kim E, Hong P, Spector J, Bickel J, Parsons C, Rabbani N, Hron JD. ChatGPT Use Among Pediatric Health Care Providers: Cross-Sectional Survey Study. JMIR Form Res 2024;8:e56797. [PMID: 39265163] [PMCID: PMC11427860] [DOI: 10.2196/56797]
Abstract
BACKGROUND The public launch of OpenAI's ChatGPT platform generated immediate interest in the use of large language models (LLMs). Health care institutions are now grappling with establishing policies and guidelines for the use of these technologies, yet little is known about how health care providers view LLMs in medical settings. Moreover, there are no studies assessing how pediatric providers are adopting these readily accessible tools. OBJECTIVE The aim of this study was to determine how pediatric providers are currently using LLMs in their work as well as their interest in using a Health Insurance Portability and Accountability Act (HIPAA)-compliant version of ChatGPT in the future. METHODS A survey instrument consisting of structured and unstructured questions was iteratively developed by a team of informaticians from various pediatric specialties. The survey was sent via Research Electronic Data Capture (REDCap) to all Boston Children's Hospital pediatric providers. Participation was voluntary and uncompensated, and all survey responses were anonymous. RESULTS Surveys were completed by 390 pediatric providers. Approximately 50% (197/390) of respondents had used an LLM; of these, almost 75% (142/197) were already using an LLM for nonclinical work and 27% (52/195) for clinical work. Providers detailed the various ways they are currently using an LLM in their clinical and nonclinical work. Only 29% (n=105) of 362 respondents indicated that ChatGPT should be used for patient care in its present state; however, 73.8% (273/368) reported they would use a HIPAA-compliant version of ChatGPT if one were available. Providers' proposed future uses of LLMs in health care are described. CONCLUSIONS Despite significant concerns and barriers to LLM use in health care, pediatric providers are already using LLMs at work. This study will give policy makers needed information about how providers are using LLMs clinically.
Affiliation(s)
- Susannah Kisvarday
- Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States
- Adam Yan
- Division of Hematology/Oncology, The Hospital for Sick Kids, Toronto, ON, Canada
- Department of Pediatrics, The University of Toronto, Toronto, ON, Canada
- Julia Yarahuan
- Children's Healthcare of Atlanta, Atlanta, GA, United States
- School of Medicine, Emory University, Atlanta, GA, United States
- Daniel J Kats
- Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States
- Mondira Ray
- Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States
- Eugene Kim
- Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States
- Peter Hong
- Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
- Department of Pediatrics, Harvard Medical School, Boston, MA, United States
- Jacob Spector
- Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
- Department of Pediatrics, Harvard Medical School, Boston, MA, United States
- Chase Parsons
- Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
- Department of Pediatrics, Harvard Medical School, Boston, MA, United States
- Naveed Rabbani
- Department of Pediatrics, Harvard Medical School, Boston, MA, United States
- Pediatric Physicians' Organization at Children's Hospital, Wellesley, MA, United States
- Computational Health Informatics Program, Boston Children's Hospital, Boston, MA, United States
- Jonathan D Hron
- Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
- Department of Pediatrics, Harvard Medical School, Boston, MA, United States
7. Ajmera P, Nischal N, Ariyaratne S, Botchu B, Bhamidipaty K, Iyengar KP, Ajmera SR, Jenko N, Botchu R. Response to: ChatGPT's limited accuracy in generating anatomical images for medical. Skeletal Radiol 2024;53:1597. [PMID: 38506965] [DOI: 10.1007/s00256-024-04656-w]
Affiliation(s)
- P Ajmera
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- N Nischal
- Department of Radiology, Holy Family Hospital, New Delhi, India
- S Ariyaratne
- Department of Musculoskeletal Radiology, Royal Orthopedic Hospital, Birmingham, UK
- K P Iyengar
- Department of Orthopedics, Southport and Ormskirk Hospital, Mersey and West Lancashire, NHS Trust, Southport, UK
- S R Ajmera
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- N Jenko
- Department of Musculoskeletal Radiology, Royal Orthopedic Hospital, Birmingham, UK
- R Botchu
- Department of Musculoskeletal Radiology, Royal Orthopedic Hospital, Birmingham, UK.
8. Ray PP. Advancing AI in rheumatology: critical reflections and proposals for future research using large language models. Rheumatol Int 2024;44:573-574. [PMID: 37891327] [DOI: 10.1007/s00296-023-05488-y]
Affiliation(s)
- Partha Pratim Ray
- Department of Computer Applications, Sikkim University, 6th Mile, PO-Tadong, Gangtok, 737102, Sikkim, India.
9. Coraci D, Maccarone MC, Regazzo G, Accordi G, Papathanasiou JV, Masiero S. ChatGPT in the development of medical questionnaires. The example of the low back pain. Eur J Transl Myol 2023;33:12114. [PMID: 38112605] [PMCID: PMC10811646] [DOI: 10.4081/ejtm.2023.12114]
Abstract
In the last year, Chat Generative Pre-Trained Transformer (ChatGPT), a web-based software application built on artificial intelligence, has shown high potential in every field of knowledge. In the medical area, its possible applications are the object of many studies with promising results. We performed the current study to investigate the possible usefulness of ChatGPT in assessing low back pain. We asked ChatGPT to generate a questionnaire about this clinical condition, and we compared the resulting questions and scores with those obtained from validated questionnaires: the Oswestry Disability Index, the Quebec Back Pain Disability Scale, the Roland-Morris Disability Questionnaire, and the Numeric Rating Scale for pain. We enrolled 20 subjects with low back pain and found important consistencies among the validated questionnaires. The ChatGPT questionnaire showed an acceptable significant correlation only with the Oswestry Disability Index and the Quebec Back Pain Disability Scale. ChatGPT showed some peculiarities, especially in the assessment of quality of life and of medical consultation and treatments. Our study shows that ChatGPT can help evaluate patients, incorporating multilevel perspectives. However, its power is limited, and further research and validation are required.
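A correlation analysis of the kind described here, relating a ChatGPT-generated questionnaire score to validated instruments, could be set up roughly as follows. The scores are simulated and the variable names are assumptions, not the study's data.

```python
# Illustrative Spearman correlations between a hypothetical ChatGPT-derived
# questionnaire score and validated low back pain scores; simulated data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_subjects = 20  # same sample size as the cited study, but simulated values

odi = rng.uniform(0, 100, n_subjects)                      # Oswestry Disability Index (%)
quebec = 0.8 * odi + rng.normal(0, 10, n_subjects)         # Quebec Back Pain Disability Scale
chatgpt_score = 0.5 * odi + rng.normal(0, 20, n_subjects)  # hypothetical ChatGPT questionnaire

for name, scores in [("ODI", odi), ("Quebec", quebec)]:
    rho, p = spearmanr(chatgpt_score, scores)
    print(f"ChatGPT questionnaire vs {name}: rho = {rho:.2f}, p = {p:.3f}")
```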
Affiliation(s)
- Daniele Coraci
- Department of Neuroscience, Section of Rehabilitation, University of Padova, Padua.
- Gianluca Regazzo
- Department of Neuroscience, Section of Rehabilitation, University of Padova, Padua.
- Giorgia Accordi
- Department of Neuroscience, Section of Rehabilitation, University of Padova, Padua.
- Jannis V Papathanasiou
- Department of Kinesiotherapy, Faculty of Public Health, Medical University of Sofia, Sofia, Bulgaria; Department of Medical Imaging, Allergology and Physiotherapy, Faculty of Dental Medicine, Medical University of Plovdiv, Plovdiv.
- Stefano Masiero
- Department of Neuroscience, Section of Rehabilitation, University of Padova, Padua.