151. Dantas-Torres F. Artificial intelligence, parasites and parasitic diseases. Parasit Vectors 2023; 16:340. [PMID: 37770977] [PMCID: PMC10540454] [DOI: 10.1186/s13071-023-05972-1]
Affiliation(s)
- Filipe Dantas-Torres
- Aggeu Magalhães Institute, Oswaldo Cruz Foundation (Fiocruz), Recife, PE, Brazil

152. Jeyaraman M, Ramasubramanian S, Balaji S, Jeyaraman N, Nallakumarasamy A, Sharma S. ChatGPT in action: Harnessing artificial intelligence potential and addressing ethical challenges in medicine, education, and scientific research. World J Methodol 2023; 13:170-178. [PMID: 37771867] [PMCID: PMC10523250] [DOI: 10.5662/wjm.v13.i4.170]
Abstract
Artificial intelligence (AI) tools, like OpenAI's Chat Generative Pre-trained Transformer (ChatGPT), hold considerable potential in healthcare, academia, and diverse industries. Evidence demonstrates its capability at a medical student level in standardized tests, suggesting utility in medical education, radiology reporting, genetics research, data optimization, and drafting repetitive texts such as discharge summaries. Nevertheless, these tools should augment, not supplant, human expertise. Despite promising applications, ChatGPT confronts limitations, including critical thinking tasks and generating false references, necessitating stringent cross-verification. Ensuing concerns, such as potential misuse, bias, blind trust, and privacy, underscore the need for transparency, accountability, and clear policies. Evaluations of AI-generated content and preservation of academic integrity are critical. With responsible use, AI can significantly improve healthcare, academia, and industry without compromising integrity and research quality. For effective and ethical AI deployment, collaboration amongst AI developers, researchers, educators, and policymakers is vital. The development of domain-specific tools, guidelines, regulations, and the facilitation of public dialogue must underpin these endeavors to responsibly harness AI's potential.
Affiliation(s)
- Madhan Jeyaraman
- Department of Orthopaedics, ACS Medical College and Hospital, Dr MGR Educational and Research Institute, Chennai 600077, Tamil Nadu, India
- Swaminathan Ramasubramanian
- Department of General Medicine, Government Medical College, Omandurar Government Estate, Chennai 600018, Tamil Nadu, India
- Sangeetha Balaji
- Department of General Medicine, Government Medical College, Omandurar Government Estate, Chennai 600018, Tamil Nadu, India
- Naveen Jeyaraman
- Department of Orthopaedics, ACS Medical College and Hospital, Dr MGR Educational and Research Institute, Chennai 600077, Tamil Nadu, India
- Arulkumar Nallakumarasamy
- Department of Orthopaedics, ACS Medical College and Hospital, Dr MGR Educational and Research Institute, Chennai 600077, Tamil Nadu, India
- Shilpa Sharma
- Department of Paediatric Surgery, All India Institute of Medical Sciences, Delhi 110029, New Delhi, India

153. Lareyre F, Nasr B, Chaudhuri A, Di Lorenzo G, Carlier M, Raffort J. Comprehensive Review of Natural Language Processing (NLP) in Vascular Surgery. EJVES Vasc Forum 2023; 60:57-63. [PMID: 37822918] [PMCID: PMC10562666] [DOI: 10.1016/j.ejvsvf.2023.09.002]
Abstract
Objective: The use of Natural Language Processing (NLP) has attracted increased interest in healthcare, with various potential applications including identification and extraction of health information and development of chatbots and virtual assistants. The aim of this comprehensive literature review was to provide an overview of NLP applications in vascular surgery, identify current limitations, and discuss future perspectives in the field.
Data sources: The MEDLINE database was searched in April 2023.
Review methods: The database was searched using a combination of keywords to identify studies reporting the use of NLP and chatbots in three main vascular diseases. Keywords used included Natural Language Processing, chatbot, chatGPT, aortic disease, carotid, peripheral artery disease, vascular, and vascular surgery.
Results: Given the heterogeneity of study design, techniques, and aims, a comprehensive literature review was performed to provide an overview of NLP applications in vascular surgery. By enabling identification and extraction of information on patients with vascular diseases, such technology could help to analyse data from healthcare information systems to provide feedback on current practice and help in optimising patient care. In addition, chatbots and NLP-driven techniques have the potential to be used as virtual assistants for both health professionals and patients.
Conclusion: While Artificial Intelligence and NLP technology could be used to enhance care for patients with vascular diseases, many challenges remain, including the need to define guidelines and clear consensus on how to evaluate and validate these innovations before their implementation into clinical practice.
Affiliation(s)
- Fabien Lareyre
- Department of Vascular Surgery, Hospital of Antibes Juan-les-Pins, France
- Université Côte d'Azur, Inserm, U1065, C3M, Nice, France
- Bahaa Nasr
- Department of Vascular and Endovascular Surgery, Brest University Hospital, Brest, France
- INSERM, UMR 1101, LaTIM, Brest, France
- Arindam Chaudhuri
- Bedfordshire - Milton Keynes Vascular Centre, Bedfordshire Hospitals NHS Foundation Trust, Bedford, UK
- Gilles Di Lorenzo
- Department of Vascular Surgery, Hospital of Antibes Juan-les-Pins, France
- Mathieu Carlier
- Department of Urology, University Hospital of Nice, Nice, France
- Juliette Raffort
- Université Côte d'Azur, Inserm, U1065, C3M, Nice, France
- Institute 3IA Côte d'Azur, Université Côte d'Azur, France
- Clinical Chemistry Laboratory, University Hospital of Nice, France

154. Barrington NM, Gupta N, Musmar B, Doyle D, Panico N, Godbole N, Reardon T, D'Amico RS. A Bibliometric Analysis of the Rise of ChatGPT in Medical Research. Med Sci (Basel) 2023; 11:61. [PMID: 37755165] [PMCID: PMC10535733] [DOI: 10.3390/medsci11030061]
Abstract
The rapid emergence of publicly accessible artificial intelligence platforms such as large language models (LLMs) has led to an equally rapid increase in articles exploring their potential benefits and risks. We performed a bibliometric analysis of ChatGPT literature in medicine and science to better understand publication trends and knowledge gaps. Following title, abstract, and keyword searches of PubMed, Embase, Scopus, and Web of Science databases for ChatGPT articles published in the medical field, articles were screened for inclusion and exclusion criteria. Data were extracted from included articles, with citation counts obtained from PubMed and journal metrics obtained from Clarivate Journal Citation Reports. After screening, 267 articles were included in the study, most of which were editorials or correspondence with an average of 7.5 ± 18.4 citations per publication. Published articles on ChatGPT were authored largely in the United States, India, and China. The topics discussed included use and accuracy of ChatGPT in research, medical education, and patient counseling. Among non-surgical specialties, radiology published the most ChatGPT-related articles, while plastic surgery published the most articles among surgical specialties. The average citation number among the top 20 most-cited articles was 60.1 ± 35.3. Among journals with the most ChatGPT-related publications, there were on average 10 ± 3.7 publications. Our results suggest that managing the inevitable ethical and safety issues that arise with the implementation of LLMs will require further research exploring the capabilities and accuracy of ChatGPT, to generate policies guiding the adoption of artificial intelligence in medicine and science.
Affiliation(s)
- Nikki M. Barrington
- Chicago Medical School, Rosalind Franklin University, North Chicago, IL 60064, USA
- Nithin Gupta
- School of Osteopathic Medicine, Campbell University, Lillington, NC 27546, USA
- Basel Musmar
- Faculty of Medicine and Health Sciences, An-Najah National University, Nablus P.O. Box 7, West Bank, Palestine
- David Doyle
- Central Michigan College of Medicine, Mount Pleasant, MI 48858, USA
- Nicholas Panico
- Lake Erie College of Osteopathic Medicine, Erie, PA 16509, USA
- Nikhil Godbole
- School of Medicine, Tulane University, New Orleans, LA 70112, USA
- Taylor Reardon
- Department of Neurology, Henry Ford Hospital, Detroit, MI 48202, USA
- Randy S. D'Amico
- Department of Neurosurgery, Lenox Hill Hospital, New York, NY 10075, USA

155. Talyshinskii A, Naik N, Hameed BMZ, Zhanbyrbekuly U, Khairli G, Guliev B, Juliebø-Jones P, Tzelves L, Somani BK. Expanding horizons and navigating challenges for enhanced clinical workflows: ChatGPT in urology. Front Surg 2023; 10:1257191. [PMID: 37744723] [PMCID: PMC10512827] [DOI: 10.3389/fsurg.2023.1257191]
Abstract
Purpose of review: ChatGPT has emerged as a potential tool for facilitating doctors' workflows. However, few studies have examined its application in a urological context. Thus, our objective was to analyze the pros and cons of ChatGPT use and how it can be exploited and used by urologists.
Recent findings: ChatGPT can facilitate clinical documentation and note-taking, patient communication and support, medical education, and research. In urology, ChatGPT has shown potential as a virtual healthcare aide for benign prostatic hyperplasia, an educational and prevention tool on prostate cancer, educational support for urological residents, and an assistant in writing urological papers and academic work. However, several concerns about its use are presented, such as the lack of web crawling, the risk of accidental plagiarism, and concerns about patient data privacy.
Summary: The existing limitations point to the need for further improvement of ChatGPT, such as ensuring the privacy of patient data, expanding the learning dataset to include medical databases, and developing guidance on its appropriate use. Urologists can also help by conducting studies to determine the effectiveness of ChatGPT in urology in clinical scenarios and nosologies other than those previously listed.
Affiliation(s)
- Ali Talyshinskii
- Department of Urology, Astana Medical University, Astana, Kazakhstan
- Nithesh Naik
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, India
- Gafur Khairli
- Department of Urology, Astana Medical University, Astana, Kazakhstan
- Bakhman Guliev
- Department of Urology, Mariinsky Hospital, St Petersburg, Russia
- Lazaros Tzelves
- Department of Urology, National and Kapodistrian University of Athens, Sismanogleion Hospital, Athens, Marousi, Greece
- Bhaskar Kumar Somani
- Department of Urology, University Hospital Southampton NHS Trust, Southampton, United Kingdom

156. Brameier DT, Alnasser AA, Carnino JM, Bhashyam AR, von Keudell AG, Weaver MJ. Artificial Intelligence in Orthopaedic Surgery: Can a Large Language Model "Write" a Believable Orthopaedic Journal Article? J Bone Joint Surg Am 2023; 105:1388-1392. [PMID: 37437021] [DOI: 10.2106/jbjs.23.00473]
Abstract
- Natural language processing with large language models is a subdivision of artificial intelligence (AI) that extracts meaning from text with use of linguistic rules, statistics, and machine learning to generate appropriate text responses. Its utilization in medicine and in the field of orthopaedic surgery is rapidly growing.
- Large language models can be utilized in generating scientific manuscript texts of a publishable quality; however, they suffer from AI hallucinations, in which untruths or half-truths are stated with misleading confidence. Their use raises considerable concerns regarding the potential for research misconduct and for hallucinations to insert misinformation into the clinical literature.
- Current editorial processes are insufficient for identifying the involvement of large language models in manuscripts. Academic publishing must adapt to encourage safe use of these tools by establishing clear guidelines for their use, which should be adopted across the orthopaedic literature, and by implementing additional steps in the editorial screening process to identify the use of these tools in submitted manuscripts.
Affiliation(s)
- Devon T Brameier
- Department of Orthopaedic Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Ahmad A Alnasser
- Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Jonathan M Carnino
- Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts
- Abhiram R Bhashyam
- Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Arvind G von Keudell
- Department of Orthopaedic Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Bispebjerg Hospital, University of Copenhagen, Copenhagen, Denmark
- Michael J Weaver
- Department of Orthopaedic Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts

|
157. Jain A. ChatGPT for scientific community: Boon or bane? Med J Armed Forces India 2023; 79:498-499. [PMID: 37719916] [PMCID: PMC10499628] [DOI: 10.1016/j.mjafi.2023.06.009]
Affiliation(s)
- Ankur Jain
- Assistant Professor (Clinical Haematology), Vardhman Mahavir Medical College & Safdarjung Hospital, New Delhi, India

158. Dubin JA, Bains SS, Hameed D, Chen Z, Nace J, Mont MA, Delanois RE. Letter to the Editor "Assessing ChatGPT's Potential: A Critical Analysis and Future Directions in Total Joint Arthroplasty". J Arthroplasty 2023; 38:e21. [PMID: 37573084] [DOI: 10.1016/j.arth.2023.05.059]
Affiliation(s)
- Jeremy A Dubin
- LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland
- Sandeep S Bains
- LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland
- Daniel Hameed
- LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland
- Zhongming Chen
- LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland
- James Nace
- LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland
- Michael A Mont
- LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland
- Ronald E Delanois
- LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland

159. Gravel J, D'Amours-Gravel M, Osmanlliu E. Learning to Fake It: Limited Responses and Fabricated References Provided by ChatGPT for Medical Questions. Mayo Clin Proc Digit Health 2023; 1:226-234. [PMID: 40206627] [PMCID: PMC11975740] [DOI: 10.1016/j.mcpdig.2023.05.004]
Abstract
Objective: To evaluate the quality of the answers and the references provided by ChatGPT for medical questions.
Patients and Methods: Three researchers asked ChatGPT 20 medical questions and prompted it to provide the corresponding references. The responses were evaluated for the quality of content by medical experts using a verbal numeric scale going from 0% to 100%. These experts were the corresponding authors of the 20 articles from which the medical questions were derived. We planned to evaluate 3 references per response for their pertinence, but this was amended on the basis of preliminary results showing that most references provided by ChatGPT were fabricated. This experimental observational study was conducted in February 2023.
Results: ChatGPT provided responses varying between 53 and 244 words long and reported 2 to 7 references per answer. Seventeen of the 20 invited raters provided feedback. The raters reported limited quality of the responses, with a median score of 60% (first and third quartiles: 50% and 85%, respectively). In addition, they identified major (n=5) and minor (n=7) factual errors among the 17 evaluated responses. Of the 59 references evaluated, 41 (69%) were fabricated, although they appeared real. Most fabricated citations used names of authors with previous relevant publications, a title that seemed pertinent, and a credible journal format.
Conclusion: When asked multiple medical questions, ChatGPT provided answers of limited quality for scientific publication. More importantly, ChatGPT provided deceptively real references. Users of ChatGPT should pay particular attention to the references provided before integration into medical manuscripts.
Affiliation(s)
- Jocelyn Gravel
- Department of Pediatric Emergency Medicine, CHU Sainte-Justine, Université de Montréal, Montréal, Québec, Canada
- Esli Osmanlliu
- Division of Pediatric Emergency Medicine, Montreal Children's Hospital, McGill University, Montréal, Québec, Canada

160. Livberber T, Ayvaz S. The impact of Artificial Intelligence in academia: Views of Turkish academics on ChatGPT. Heliyon 2023; 9:e19688. [PMID: 37809772] [PMCID: PMC10558923] [DOI: 10.1016/j.heliyon.2023.e19688]
Abstract
In the past decade, Artificial Intelligence (AI) and machine learning technologies have become increasingly prevalent in the academic world. This growing trend has led to debates about the impact of these technologies on academia. The purpose of this article is to examine the impact of ChatGPT, an AI and machine learning technology, in the academic field and to determine academics' perceptions of it. To achieve this goal, in-depth interviews were conducted with 10 academics, and their views on the subject were analyzed. Academics believe that ChatGPT will play a helpful role as a tool in scientific research and educational processes and can serve as an inspiration for new topics and research areas. Despite these advantages, academics also have ethical concerns, such as plagiarism and misinformation. The study found that ChatGPT is viewed positively as a useful tool in scientific research and education, but ethical concerns such as plagiarism and misinformation need to be addressed.
Affiliation(s)
- Tuba Livberber
- Department of Journalism, Faculty of Communication, Akdeniz University, Antalya, Turkey
- Süheyla Ayvaz
- Department of Advertising, Faculty of Communication, Selcuk University, Konya, Turkey

161. Chinnadurai S, Mahadevan S, Navaneethakrishnan B, Mamadapur M. Decoding Applications of Artificial Intelligence in Rheumatology. Cureus 2023; 15:e46164. [PMID: 37905264] [PMCID: PMC10613315] [DOI: 10.7759/cureus.46164]
Abstract
Artificial intelligence (AI) is not a newcomer in medicine. It has been employed for image analysis, disease diagnosis, drug discovery, and improving overall patient care. ChatGPT (Chat Generative Pre-trained Transformer, Inc., Delaware) has renewed interest and enthusiasm in artificial intelligence. Algorithms, machine learning, deep learning, and data analysis are some of the complex terminologies often encountered when health professionals try to learn AI. In this article, we review the practical applications of artificial intelligence in vernacular language in the fields of medicine and rheumatology in particular. From the standpoint of the everyday physician, we have endeavored to encapsulate the influence of AI on the cutting edge of medical practice and the potential revolutionary shift in the realm of rheumatology.
Affiliation(s)
- Saranya Chinnadurai
- Rheumatology, Sri Ramachandra Institute of Higher Education and Research, Chennai, IND

162. Qu X, Yang JM, Chen T, Zhang W. [Reflections on the Implications of the Developments in ChatGPT for Changes in Medical Education Models]. Sichuan Da Xue Xue Bao Yi Xue Ban 2023; 54:937-940. [PMID: 37866949] [PMCID: PMC10579070] [DOI: 10.12182/20231360302]
Abstract
Ever since its official launch, Chat Generative Pre-Trained Transformer, or ChatGPT, a natural language processing tool driven by artificial intelligence (AI) technology, has attracted much attention from the education community. ChatGPT can play an important role in the field of medical education, with its potential applications ranging from assisting teachers in designing individualized teaching scenarios to enhancing students' practical ability for solving clinical problems and improving teaching and research efficiency. With the developments in technology, it is inevitable that ChatGPT, or other generative AI models, will be thoroughly integrated in more and more medical contexts, which will further enhance the efficiency and quality of medical services and allow doctors to spend more time interacting with patients and implement personalized health management. Herein, we suggested that proactive reflections be made to figure out the best way to cultivate health professionals in the context of New Medical Education, to help more medical professionals enhance their understanding of developments in artificial intelligence, and to make preparations for the challenges that will emerge in the new round of technological revolution. Medical educators should focus on guiding students to make proper use of AI tools in the appropriate context, thereby preventing abuse or overreliance caused by a lack of discriminating ability. Teachers should focus on helping medical students make improvements in clinical reasoning skills, self-directed learning, and clinical practical skills. Teachers should stress the importance for medical students to understand the philosophical implications of the mind-body unity concept, holistic medical thinking, and systematic medical thinking. It is important to enhance medical students' humanistic qualities, cultivate their empathy and communication skills, and continually enhance their ability to meet the requirements of individualized precision diagnosis and treatment so that they will better adapt to the future developments in medicine.
Affiliation(s)
- Xing Qu
- Institute of Hospital Management, West China Hospital, Sichuan University, Chengdu 610041, China
- Jinming Yang
- Institute of Hospital Management, West China Hospital, Sichuan University, Chengdu 610041, China
- Tao Chen
- Institute of Hospital Management, West China Hospital, Sichuan University, Chengdu 610041, China
- Wei Zhang
- Institute of Hospital Management, West China Hospital, Sichuan University, Chengdu 610041, China

163. Ho WLJ, Koussayer B, Sujka J. ChatGPT: Friend or foe in medical writing? An example of how ChatGPT can be utilized in writing case reports. Surg Pract Sci 2023; 14:100185. [PMID: 39845855] [PMCID: PMC11749974] [DOI: 10.1016/j.sipas.2023.100185]
Abstract
ChatGPT is a chatbot built on a natural language processing model which can generate human-like responses to prompts given to it. Despite its lack of domain-specific training, ChatGPT has developed remarkable accuracy in interpreting clinical information. In this article, we aim to assess what role ChatGPT can serve in medical writing. We recruited a first-year medical student with no prior experience in writing case reports to write a case report on a complex surgery with the assistance of ChatGPT. After a thorough evaluation of its responses, we believe that ChatGPT is a powerful medical writing tool that can be used to generate summaries, proofread, and provide valuable medical insight. However, ChatGPT is not a substitute for a study author due to several significant limitations, and should instead be used in conjunction with the author during the writing process. As the impact of natural language processing models such as ChatGPT grows, we suggest that guidelines be established on how to better utilize this technology to improve clinical research rather than outright prohibiting its usage.
Affiliation(s)
- Wai Lone Jonathan Ho
- USF Health Morsani College of Medicine, 560 Channelside Dr, Tampa, FL 33602, United States
- Bilal Koussayer
- USF Health Morsani College of Medicine, 560 Channelside Dr, Tampa, FL 33602, United States
- Joseph Sujka
- USF Department of General Surgery, 2 Tampa General Circle, 7th Floor, Tampa, FL 33606, United States

164. Shaffrey EC, Eftekari SC, Wilke LG, Poore SO. Surgeon or Bot? The Risks of Using Artificial Intelligence in Surgical Journal Publications. Ann Surg Open 2023; 4:e309. [PMID: 37746615] [PMCID: PMC10513298] [DOI: 10.1097/as9.0000000000000309]
Abstract
Mini-Abstract: ChatGPT is an artificial intelligence (AI) technology that has begun to transform academics through its ability to create human-like text. This has raised ethical concerns about its assistance in writing scientific literature. Our aim is to highlight the benefits and risks that this technology may pose to the surgical field.
Affiliation(s)
- Ellen C. Shaffrey
- Division of Plastic Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI
- Sahand C. Eftekari
- Division of Plastic Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI
- Lee G. Wilke
- Department of Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI
- Samuel O. Poore
- Division of Plastic Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI

165. Koga S. The Integration of Large Language Models Such as ChatGPT in Scientific Writing: Harnessing Potential and Addressing Pitfalls. Korean J Radiol 2023; 24:924-925. [PMID: 37634646] [PMCID: PMC10462902] [DOI: 10.3348/kjr.2023.0738]
Affiliation(s)
- Shunsuke Koga
- Department of Pathology and Laboratory Medicine, Hospital of the University of Pennsylvania, Philadelphia, PA, USA

166. Reis LO. ChatGPT for medical applications and urological science. Int Braz J Urol 2023; 49:652-656. [PMID: 37338818] [PMCID: PMC10482461] [DOI: 10.1590/s1677-5538.ibju.2023.0112]
Affiliation(s)
- Leonardo O. Reis
- UroScience and Department of Urology, Faculty of Medical Sciences, Universidade Estadual de Campinas (UNICAMP), Campinas, São Paulo, Brazil
- Department of Immuno-Oncology, Faculty of Life Sciences, Pontifícia Universidade Católica de Campinas (PUC-Campinas), Campinas, São Paulo, Brazil

167
|
Leung TI, de Azevedo Cardoso T, Mavragani A, Eysenbach G. Best Practices for Using AI Tools as an Author, Peer Reviewer, or Editor. J Med Internet Res 2023; 25:e51584. [PMID: 37651164 PMCID: PMC10502596 DOI: 10.2196/51584] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Accepted: 08/28/2023] [Indexed: 09/01/2023] Open
Abstract
The ethics of generative artificial intelligence (AI) use in scientific manuscript content creation has become a serious matter of concern in the scientific publishing community. Generative AI has computationally become capable of elaborating research questions; refining programming code; generating text in scientific language; and generating images, graphics, or figures. However, this technology should be used with caution. In this editorial, we outline the current state of editorial policies on generative AI or chatbot use in authorship, peer review, and editorial processing of scientific and scholarly manuscripts. Additionally, we provide JMIR Publications' editorial policies on these issues. We further detail JMIR Publications' approach to the applications of AI in the editorial process for manuscripts in review in a JMIR Publications journal.
Affiliation(s)
- Tiffany I Leung
- JMIR Publications, Inc, Toronto, ON, Canada
- Department of Internal Medicine (adjunct), Southern Illinois University School of Medicine, Springfield, IL, United States
- Gunther Eysenbach
- JMIR Publications, Inc, Toronto, ON, Canada
- University of Victoria, Victoria, BC, Canada
168
Hulman A, Dollerup OL, Mortensen JF, Fenech ME, Norman K, Støvring H, Hansen TK. ChatGPT- versus human-generated answers to frequently asked questions about diabetes: A Turing test-inspired survey among employees of a Danish diabetes center. PLoS One 2023; 18:e0290773. [PMID: 37651381 PMCID: PMC10470899 DOI: 10.1371/journal.pone.0290773] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Accepted: 08/16/2023] [Indexed: 09/02/2023] Open
Abstract
Large language models have received enormous attention recently, with some studies demonstrating their potential clinical value, despite not being trained specifically for this domain. We aimed to investigate whether ChatGPT, a language model optimized for dialogue, can answer frequently asked questions about diabetes. We conducted a closed e-survey among employees of a large Danish diabetes center. The study design was inspired by the Turing test and non-inferiority trials. Our survey included ten questions with two answers each. One of these was written by a human expert, while the other was generated by ChatGPT. Participants were asked to identify the ChatGPT-generated answer. Data were analyzed at the question level using logistic regression with robust variance estimation, with clustering at the participant level. In secondary analyses, we investigated the effect of participant characteristics on the outcome. A 55% non-inferiority margin was pre-defined based on precision simulations and had been published as part of the study protocol before data collection began. Among 311 invited individuals, 183 participated in the survey (59% response rate). 64% had heard of ChatGPT before, and 19% had tried it. Overall, participants could identify ChatGPT-generated answers 59.5% (95% CI: 57.0, 62.0) of the time, which was outside the non-inferiority zone. Among participant characteristics, previous ChatGPT use had the strongest association with the outcome (odds ratio: 1.52 (1.16, 2.00), p = 0.003). Previous users answered 67.4% (61.7, 72.7) of the questions correctly, versus non-users' 57.6% (54.9, 60.3). Participants could distinguish between ChatGPT-generated and human-written answers somewhat better than flipping a fair coin, which was against our initial hypothesis. Rigorously planned studies are needed to elucidate the risks and benefits of integrating such technologies in routine clinical practice.
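The non-inferiority reasoning above can be sanity-checked with a back-of-the-envelope calculation. A minimal sketch in Python, assuming roughly 183 participants × 10 questions and ignoring the participant-level clustering the authors accounted for, so this only approximates their analysis:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# ~183 participants x 10 questions, ~59.5% of answers identified correctly
n = 183 * 10
correct = round(0.595 * n)

lo, hi = proportion_ci(correct, n)
margin = 0.55  # pre-defined non-inferiority margin from the study protocol

print(f"95% CI: ({lo:.3f}, {hi:.3f})")
print("Identification rate exceeds the 55% margin:", lo > margin)
```

Even this crude interval lies entirely above the 55% margin, consistent with the paper's conclusion that participants did somewhat better than chance.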
Affiliation(s)
- Adam Hulman
- Steno Diabetes Center Aarhus, Aarhus University Hospital, Aarhus, Denmark
- Department of Public Health, Aarhus University, Aarhus, Denmark
- Jesper Friis Mortensen
- Steno Diabetes Center Aarhus, Aarhus University Hospital, Aarhus, Denmark
- Department of Public Health, Aarhus University, Aarhus, Denmark
- Kasper Norman
- Steno Diabetes Center Aarhus, Aarhus University Hospital, Aarhus, Denmark
- Henrik Støvring
- Steno Diabetes Center Aarhus, Aarhus University Hospital, Aarhus, Denmark
- Department of Public Health, University of Southern Denmark, Odense, Denmark
- Troels Krarup Hansen
- Steno Diabetes Center Aarhus, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
169
Watters C, Lemanski MK. Universal skepticism of ChatGPT: a review of early literature on chat generative pre-trained transformer. Front Big Data 2023; 6:1224976. [PMID: 37680954 PMCID: PMC10482048 DOI: 10.3389/fdata.2023.1224976] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Accepted: 07/10/2023] [Indexed: 09/09/2023] Open
Abstract
ChatGPT, a new language model developed by OpenAI, has garnered significant attention in various fields since its release. This literature review provides an overview of early ChatGPT literature across multiple disciplines, exploring its applications, limitations, and ethical considerations. The review encompasses Scopus-indexed publications from November 2022 to April 2023 and includes 156 articles related to ChatGPT. The findings reveal a predominance of negative sentiment across disciplines, though subject-specific attitudes must be considered. The review highlights the implications of ChatGPT in many fields including healthcare, raising concerns about employment opportunities and ethical considerations. While ChatGPT holds promise for improved communication, further research is needed to address its capabilities and limitations. This literature review provides insights into early research on ChatGPT, informing future investigations and practical applications of chatbot technology, as well as development and usage of generative AI.
Affiliation(s)
- Casey Watters
- Faculty of Law, Bond University, Gold Coast, QLD, Australia
170
Leung TI, Sagar A, Shroff S, Henry TL. Can AI Mitigate Bias in Writing Letters of Recommendation? JMIR MEDICAL EDUCATION 2023; 9:e51494. [PMID: 37610808 PMCID: PMC10483302 DOI: 10.2196/51494] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/02/2023] [Revised: 08/08/2023] [Accepted: 08/08/2023] [Indexed: 08/24/2023]
Abstract
Letters of recommendation play a significant role in higher education and career progression, particularly for women and underrepresented groups in medicine and science. Already, there is evidence to suggest that written letters of recommendation contain language that expresses implicit biases, or unconscious biases, and that these biases occur for all recommenders regardless of the recommender's sex. Given that all individuals have implicit biases that may influence language use, there may be opportunities to apply contemporary technologies, such as large language models or other forms of generative artificial intelligence (AI), to augment and potentially reduce implicit biases in the written language of letters of recommendation. In this editorial, we provide a brief overview of existing literature on the manifestations of implicit bias in letters of recommendation, with a focus on academia and medical education. We then highlight potential opportunities and drawbacks of applying this emerging technology in augmenting the focused, professional task of writing letters of recommendation. We also offer best practices for integrating their use into the routine writing of letters of recommendation and conclude with our outlook for the future of generative AI applications in supporting this task.
Affiliation(s)
- Tiffany I Leung
- Department of Internal Medicine (adjunct), Southern Illinois University School of Medicine, Springfield, IL, United States
- JMIR Publications, Toronto, ON, Canada
- Ankita Sagar
- CommonSpirit Health, Chicago, IL, United States
- Creighton University School of Medicine, Omaha, NE, United States
- Swati Shroff
- Division of Internal Medicine, Thomas Jefferson University, Philadelphia, PA, United States
- Tracey L Henry
- Department of Medicine, Emory University School of Medicine, Atlanta, GA, United States
171
Reed MD. Artificial Intelligence-AI-and The Journal of Pediatric Pharmacology and Therapeutics. J Pediatr Pharmacol Ther 2023; 28:284-286. [PMID: 37795286 PMCID: PMC10547040 DOI: 10.5863/1551-6776-28.4.284] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2023] [Accepted: 05/26/2023] [Indexed: 10/06/2023]
Affiliation(s)
- Michael D. Reed
- Editor-in-Chief, The Journal of Pediatric Pharmacology and Therapeutics; Professor Emeritus of Pediatrics, School of Medicine, Case Western Reserve University, Cleveland, OH
172
Anderson KR. CORR Insights®: Characterization and Reach of Orthopaedic Research Posted to Preprint Servers: Are We "Undercooking" Our Science? Clin Orthop Relat Res 2023; 481:1501-1503. [PMID: 37220087 PMCID: PMC10344567 DOI: 10.1097/corr.0000000000002714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Accepted: 05/04/2023] [Indexed: 05/25/2023]
173
Scimeca M, Bonfiglio R. Dignity of science and the use of ChatGPT as a co-author. ESMO Open 2023; 8:101607. [PMID: 37450951 PMCID: PMC10368860 DOI: 10.1016/j.esmoop.2023.101607] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2023] [Accepted: 06/20/2023] [Indexed: 07/18/2023] Open
Affiliation(s)
- M Scimeca
- Department of Experimental Medicine, TOR, University of Rome Tor Vergata, Rome; San Raffaele Telematic University, Rome; Faculty of Medicine, Saint Camillus International University of Health Sciences, Rome, Italy
- R Bonfiglio
- Department of Experimental Medicine, TOR, University of Rome Tor Vergata, Rome.
174
Zheng H, Zhan H. ChatGPT in Scientific Writing: A Cautionary Tale. Am J Med 2023; 136:725-726.e6. [PMID: 36906169 DOI: 10.1016/j.amjmed.2023.02.011] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Accepted: 02/14/2023] [Indexed: 03/13/2023]
Affiliation(s)
- Haoyi Zheng
- Saint Francis Hospital and Heart Center, Roslyn, NY.
- Huichun Zhan
- Department of Medicine, Stony Brook School of Medicine, Stony Brook, NY; Medical Service, Northport VA Medical Center, Northport, NY
175
Bom HSH. Exploring the Opportunities and Challenges of ChatGPT in Academic Writing: a Roundtable Discussion. Nucl Med Mol Imaging 2023; 57:165-167. [PMID: 37483875 PMCID: PMC10359226 DOI: 10.1007/s13139-023-00809-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Revised: 05/09/2023] [Accepted: 05/11/2023] [Indexed: 07/25/2023] Open
Affiliation(s)
- Hee-Seung Henry Bom
- Department of Nuclear Medicine, Chonnam National University Hwasun Hospital, 322 Seoyang-ro, Hwasun-eup, Hwasun-gun, Jeollanam-do, Republic of Korea
176
Jeyaraman M, Balaji S, Jeyaraman N, Yadav S. Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare. Cureus 2023; 15:e43262. [PMID: 37692617 PMCID: PMC10492220 DOI: 10.7759/cureus.43262] [Citation(s) in RCA: 44] [Impact Index Per Article: 22.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/10/2023] [Indexed: 09/12/2023] Open
Abstract
The integration of artificial intelligence (AI) into healthcare promises groundbreaking advancements in patient care, revolutionizing clinical diagnosis, predictive medicine, and decision-making. This transformative technology uses machine learning, natural language processing, and large language models (LLMs) to process and reason like human intelligence. OpenAI's ChatGPT, a sophisticated LLM, holds immense potential in medical practice, research, and education. However, as AI in healthcare gains momentum, it brings forth profound ethical challenges that demand careful consideration. This comprehensive review explores key ethical concerns in the domain, including privacy, transparency, trust, responsibility, bias, and data quality. Protecting patient privacy in data-driven healthcare is crucial, with potential implications for psychological well-being and data sharing. Strategies like homomorphic encryption (HE) and secure multiparty computation (SMPC) are vital to preserving confidentiality. Transparency and trustworthiness of AI systems are essential, particularly in high-risk decision-making scenarios. Explainable AI (XAI) emerges as a critical aspect, ensuring a clear understanding of AI-generated predictions. Cybersecurity becomes a pressing concern as AI's complexity creates vulnerabilities for potential breaches. Determining responsibility in AI-driven outcomes raises important questions, with debates on AI's moral agency and human accountability. Shifting from data ownership to data stewardship enables responsible data management in compliance with regulations. Addressing bias in healthcare data is crucial to avoid AI-driven inequities. Biases present in data collection and algorithm development can perpetuate healthcare disparities. A public-health approach is advocated to address inequalities and promote diversity in AI research and the workforce. 
Maintaining data quality is imperative in AI applications, with convolutional neural networks showing promise in multi-input/mixed data models, offering a comprehensive patient perspective. In this ever-evolving landscape, it is imperative to adopt a multidimensional approach involving policymakers, developers, healthcare practitioners, and patients to mitigate ethical concerns. By understanding and addressing these challenges, we can harness the full potential of AI in healthcare while ensuring ethical and equitable outcomes.
Affiliation(s)
- Madhan Jeyaraman
- Orthopedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Sangeetha Balaji
- Orthopedics, Government Medical College, Omandurar Government Estate, Chennai, IND
- Naveen Jeyaraman
- Orthopedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Sankalp Yadav
- Medicine, Shri Madan Lal Khurana Chest Clinic, New Delhi, IND
177
Lubiana T, Lopes R, Medeiros P, Silva JC, Goncalves ANA, Maracaja-Coutinho V, Nakaya HI. Ten quick tips for harnessing the power of ChatGPT in computational biology. PLoS Comput Biol 2023; 19:e1011319. [PMID: 37561669 PMCID: PMC10414555 DOI: 10.1371/journal.pcbi.1011319] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/12/2023] Open
Affiliation(s)
- Tiago Lubiana
- School of Pharmaceutical Sciences, University of São Paulo, São Paulo, Brazil
- Rafael Lopes
- Department of Epidemiology of Microbial Diseases and Public Health Modeling Unit, Yale School of Public Health, New Haven, Connecticut, United States of America
- Juan Carlo Silva
- School of Pharmaceutical Sciences, University of São Paulo, São Paulo, Brazil
- Vinicius Maracaja-Coutinho
- Advanced Center for Chronic Diseases, Universidad de Chile, Santiago, Chile
- Centro de Modelamiento Molecular, Biofísica y Bioinformática—CM2B2, Facultad de Ciencias Químicas y Farmacéuticas, Universidad de Chile, Santiago, Chile
- ANID Anillo ACT210004 SYSTEMIX, Rancagua, Chile
- Anillo Inflammation in HIV/AIDS—InflammAIDS, Santiago, Chile
- Beagle Bioinformatics, São Paulo, Brasil & Santiago, Chile
- Helder I. Nakaya
- School of Pharmaceutical Sciences, University of São Paulo, São Paulo, Brazil
- Hospital Israelita Albert Einstein, São Paulo, Brazil
178
Salas A, Rivero-Calle I, Martinón-Torres F. Chatting with ChatGPT to learn about safety of COVID-19 vaccines - A perspective. Hum Vaccin Immunother 2023; 19:2235200. [PMID: 37660470 PMCID: PMC10478732 DOI: 10.1080/21645515.2023.2235200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2023] [Accepted: 07/06/2023] [Indexed: 09/05/2023] Open
Abstract
Vaccine hesitancy is among the top 10 threats to global health, according to the World Health Organization (WHO). In this exploration, we delve into ChatGPT's capacity to generate opinions on vaccine hesitancy by interrogating this AI chatbot about the 50 most prevalent counterfeit messages, false and true contraindications, and myths circulating on the internet regarding vaccine safety. Our results indicate that, while the default responses of the present version of ChatGPT may be incomplete, they are generally satisfactory. Although ChatGPT cannot substitute an expert or the scientific evidence itself, this form of AI has the potential to guide users toward information that aligns well with scientific evidence.
Affiliation(s)
- Antonio Salas
- Unidade de Xenética, Instituto de Ciencias Forenses, Facultade de Medicina, Universidade de Santiago de Compostela, and GenPoB Research Group, Instituto de Investigación Sanitaria (IDIS), Hospital Clínico Universitario de Santiago (SERGAS), Santiago de Compostela, Spain
- Genetics, Vaccines and Infections Research Group (GENVIP), Instituto de Investigación Sanitaria de Santiago, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
- Centro de Investigación Biomédica en Red de Enfermedades Respiratorias (CIBER-ES), Madrid, Spain
- Irene Rivero-Calle
- Genetics, Vaccines and Infections Research Group (GENVIP), Instituto de Investigación Sanitaria de Santiago, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
- Centro de Investigación Biomédica en Red de Enfermedades Respiratorias (CIBER-ES), Madrid, Spain
- WHO Collaborating Centre for Vaccine Safety of Santiago de Compostela, Servizo Galego de Saude, Santiago de Compostela, Spain
- Translational Pediatrics and Infectious Diseases, Department of Pediatrics, Hospital Clínico Universitario de Santiago de Compostela, Santiago de Compostela, Spain
- Federico Martinón-Torres
- Genetics, Vaccines and Infections Research Group (GENVIP), Instituto de Investigación Sanitaria de Santiago, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
- Centro de Investigación Biomédica en Red de Enfermedades Respiratorias (CIBER-ES), Madrid, Spain
- WHO Collaborating Centre for Vaccine Safety of Santiago de Compostela, Servizo Galego de Saude, Santiago de Compostela, Spain
- Translational Pediatrics and Infectious Diseases, Department of Pediatrics, Hospital Clínico Universitario de Santiago de Compostela, Santiago de Compostela, Spain
179
Mondal H, Panigrahi M, Mishra B, Behera JK, Mondal S. A pilot study on the capability of artificial intelligence in preparation of patients' educational materials for Indian public health issues. J Family Med Prim Care 2023; 12:1659-1662. [PMID: 37767452 PMCID: PMC10521817 DOI: 10.4103/jfmpc.jfmpc_262_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 06/09/2023] [Accepted: 06/10/2023] [Indexed: 09/29/2023] Open
Abstract
Background: Patient education is an essential component of improving public health, as it empowers individuals with the knowledge and skills necessary for making informed decisions about their health and well-being. Primary care physicians play a crucial role in patient education, as they are the first contact between patients and the healthcare system. However, they may not have adequate time to prepare educational material for their patients. An artificial intelligence-based writer like ChatGPT can help write the material for physicians. Aim: This study aimed to ascertain the capability of ChatGPT for generating patient educational materials for common public health issues in India. Materials and Methods: This observational study was conducted on the internet using the free research version of ChatGPT, a conversational artificial intelligence that can generate human-like text output. We conversed with the program using the question "prepare a patients' education material for X in India," where X was one of the following words or phrases: "air pollution," "malnutrition," "maternal and child health," "mental health," "noncommunicable diseases," "road traffic accidents," "tuberculosis," and "water-borne diseases." The textual responses were collected and stored for further analysis. The text was analyzed for readability, grammatical errors, and text similarity. Results: We generated a total of eight educational documents with a median of 26 (Q1-Q3: 21.5-34) sentences and a median of 349 (Q1-Q3: 329-450.5) words. The median Flesch Reading Ease score was 48.2 (Q1-Q3: 39-50.65), indicating that the text can be understood by a college student. The text was grammatically correct, with very few errors (seven in 3415 words). The text was very clear in the majority (8 out of 9) of documents, with a median score of 85 (Q1-Q3: 82.5-85) out of 100. The overall text similarity index was 18% (Q1-Q3: 7.5-26). Conclusion: The research version of ChatGPT (January 30, 2023 version) is capable of generating patient educational materials for common public health issues in India, with a difficulty level suited to college students and high grammatical accuracy. However, text similarity should be checked before use. Primary care physicians can take the help of ChatGPT for generating text for patient education materials.
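The Flesch Reading Ease score reported above is computed from average sentence length and syllables per word (206.835 − 1.015 × words/sentence − 84.6 × syllables/word). A minimal sketch in Python using a crude vowel-group syllable heuristic, so scores will only approximate those from dedicated readability tools:

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, discounting a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text; ~48 reads at college level."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * syllables / len(words))

simple = "The cat sat on the mat."
dense = "Comprehensive institutional considerations necessitate multidimensional evaluation."
print(flesch_reading_ease(simple))  # high score: easy to read
print(flesch_reading_ease(dense))   # much lower score: hard to read
```

With this heuristic, the short monosyllabic sentence scores far above the polysyllabic one, illustrating why ChatGPT's ~48 median lands in the college-level band.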
Affiliation(s)
- Himel Mondal
- Department of Physiology, All India Institute of Medical Sciences, Deoghar, Jharkhand, India
- Muralidhar Panigrahi
- Department of Pharmacology, Bhima Bhoi Medical College and Hospital, Balangir, Odisha, India
- Baidyanath Mishra
- Department of Physiology, Sri Jagannath Medical College and Hospital, Puri, Odisha, India
- Joshil K. Behera
- Department of Physiology, Nagaland Institute of Medical Science and Research, Nagaland, India
- Shaikat Mondal
- Department of Physiology, Raiganj Government Medical College and Hospital, West Bengal, India
180
Doyal AS, Sender D, Nanda M, Serrano RA. ChatGPT and Artificial Intelligence in Medical Writing: Concerns and Ethical Considerations. Cureus 2023; 15:e43292. [PMID: 37692694 PMCID: PMC10492634 DOI: 10.7759/cureus.43292] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/10/2023] [Indexed: 09/12/2023] Open
Abstract
Artificial intelligence (AI) language generation models, such as ChatGPT, have the potential to revolutionize the field of medical writing and other natural language processing (NLP) tasks. It is crucial to consider the ethical concerns that come with their use. These include bias, misinformation, privacy, lack of transparency, job displacement, stifling creativity, plagiarism, authorship, and dependence. Therefore, it is essential to develop strategies to understand and address these concerns. Important techniques include common bias and misinformation detection, ensuring privacy, providing transparency, and being mindful of the impact on employment. The AI-generated text must be critically reviewed by medical experts to validate the output generated by these models before being used in any clinical or medical context. By considering these ethical concerns and taking appropriate measures, we can ensure that the benefits of these powerful tools are maximized while minimizing any potential harm. This article focuses on the implications of AI assistants in medical writing and hopes to provide insight into the perceived rapid rate of technological progression from a historical and ethical perspective.
Affiliation(s)
- Alexander S Doyal
- Anesthesiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, USA
- David Sender
- Anesthesiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, USA
- Monika Nanda
- Anesthesiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, USA
- Ricardo A Serrano
- Anesthesiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, USA
181
Jeyaraman M, K SP, Jeyaraman N, Nallakumarasamy A, Yadav S, Bondili SK. ChatGPT in Medical Education and Research: A Boon or a Bane? Cureus 2023; 15:e44316. [PMID: 37779749 PMCID: PMC10536401 DOI: 10.7759/cureus.44316] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/29/2023] [Indexed: 10/03/2023] Open
Abstract
Artificial intelligence is no longer confined to science fiction literature and films. Yet, in contrast to many other aspects of life, medical education and clinical patient care have been slow to adopt it. Recently, large volumes of text from the internet have been used to build and train chatbots, especially ChatGPT. The language model ChatGPT, created by OpenAI, has emerged as a useful resource for medical research and education. It has proven to be a useful tool for researchers, students, and medical professionals because of its capacity to produce human-like answers to challenging medical queries. However, using ChatGPT also has significant drawbacks. One key worry is the possibility of spreading erroneous or biased information, which could have negative effects on patient care. Moreover, overreliance on technology in medical education could lead to a decline in critical thinking and clinical decision-making skills. Overall, ChatGPT has the potential to be a boon to medical education and research, but its use must be accompanied by caution and critical evaluation.
Affiliation(s)
- Madhan Jeyaraman
- Orthopaedics, South Texas Orthopaedic Research Institute (STORI), Laredo, USA
- Orthopaedics, ACS Medical College and Hospital, Dr MGR Educational and Research Institute, Chennai, IND
- Shanmuga Priya K
- Pulmonology, Faculty of Medicine, Sri Lalithambigai Medical College and Hospital, Dr MGR Educational and Research Institute, Chennai, IND
- Naveen Jeyaraman
- Orthopaedics, ACS Medical College and Hospital, Dr MGR Educational and Research Institute, Chennai, IND
- Arulkumar Nallakumarasamy
- Orthopaedics, ACS Medical College and Hospital, Dr MGR Educational and Research Institute, Chennai, IND
- Sankalp Yadav
- Medicine, Shri Madan Lal Khurana Chest Clinic, New Delhi, IND
182
Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, Baabdullah AM, Koohang A, Raghavan V, Ahuja M, Albanna H, Albashrawi MA, Al-Busaidi AS, Balakrishnan J, Barlette Y, Basu S, Bose I, Brooks L, Buhalis D, Carter L, Chowdhury S, Crick T, Cunningham SW, Davies GH, Davison RM, Dé R, Dennehy D, Duan Y, Dubey R, Dwivedi R, Edwards JS, Flavián C, Gauld R, Grover V, Hu MC, Janssen M, Jones P, Junglas I, Khorana S, Kraus S, Larsen KR, Latreille P, Laumer S, Malik FT, Mardani A, Mariani M, Mithas S, Mogaji E, Nord JH, O’Connor S, Okumus F, Pagani M, Pandey N, Papagiannidis S, Pappas IO, Pathak N, Pries-Heje J, Raman R, Rana NP, Rehm SV, Ribeiro-Navarrete S, Richter A, Rowe F, Sarker S, Stahl BC, Tiwari MK, van der Aalst W, Venkatesh V, Viglia G, Wade M, Walton P, Wirtz J, Wright R. “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. INTERNATIONAL JOURNAL OF INFORMATION MANAGEMENT 2023. [DOI: 10.1016/j.ijinfomgt.2023.102642] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/14/2023]
183
Miller R. A Surgical Perspective on Large Language Models. Ann Surg 2023; 278:e211-e213. [PMID: 37132392 DOI: 10.1097/sla.0000000000005896] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Affiliation(s)
- Robert Miller
- Plastics and Reconstructive Surgery Department, Charing Cross Hospital, Imperial College Healthcare Trust, London, UK
- Fellow in Clinical Artificial Intelligence, The London Medical Imaging & AI Centre for Value Based Healthcare, London, UK
184
Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med 2023; 29:1930-1940. [PMID: 37460753 DOI: 10.1038/s41591-023-02448-8] [Citation(s) in RCA: 737] [Impact Index Per Article: 368.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Accepted: 06/08/2023] [Indexed: 08/17/2023]
Abstract
Large language models (LLMs) can respond to free-text queries without being specifically trained in the task in question, causing excitement and concern about their use in healthcare settings. ChatGPT is a generative artificial intelligence (AI) chatbot produced through sophisticated fine-tuning of an LLM, and other tools are emerging through similar developmental processes. Here we outline how LLM applications such as ChatGPT are developed, and we discuss how they are being leveraged in clinical settings. We consider the strengths and limitations of LLMs and their potential to improve the efficiency and effectiveness of clinical, educational and research work in medicine. LLM chatbots have already been deployed in a range of biomedical contexts, with impressive but mixed results. This review acts as a primer for interested clinicians, who will determine if and how LLM technology is used in healthcare for the benefit of patients and practitioners.
Affiliation(s)
- Arun James Thirunavukarasu
- University of Cambridge School of Clinical Medicine, Cambridge, UK
- Corpus Christi College, University of Cambridge, Cambridge, UK
- Darren Shu Jeng Ting
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK
- Birmingham and Midland Eye Centre, Birmingham, UK
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, UK
- Kabilan Elangovan
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Laura Gutierrez
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ting Fang Tan
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Department of Ophthalmology and Visual Sciences, Duke-National University of Singapore Medical School, Singapore, Singapore
- Daniel Shu Wei Ting
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore.
- Department of Ophthalmology and Visual Sciences, Duke-National University of Singapore Medical School, Singapore, Singapore.
- Byers Eye Institute, Stanford University, Palo Alto, CA, USA.
185
Mukherjee S, Durkin C, PeBenito A, Ferrante N, Umana I, Kochman M. Assessing ChatGPT's Ability to Reply to Queries Regarding Colon Cancer Screening Based on Multisociety Guidelines. GASTRO HEP ADVANCES 2023; 2:1040-1043. [PMID: 37974564 PMCID: PMC10653253 DOI: 10.1016/j.gastha.2023.07.008] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Accepted: 07/17/2023] [Indexed: 11/19/2023]
Affiliation(s)
- S. Mukherjee
- Gastroenterology Division, Department of Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
| | - C. Durkin
- Gastroenterology Division, Department of Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
| | - A.M. PeBenito
- Gastroenterology Division, Department of Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
| | - N.D. Ferrante
- Gastroenterology Division, Department of Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
| | - I.C. Umana
- Gastroenterology Division, Department of Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
| | - M.L. Kochman
- Gastroenterology Division, Department of Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
- Department of Medicine, Center for Endoscopic Innovation Research and Training, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
| |
Collapse
|
186
Meyer JG, Urbanowicz RJ, Martin PCN, O'Connor K, Li R, Peng PC, Bright TJ, Tatonetti N, Won KJ, Gonzalez-Hernandez G, Moore JH. ChatGPT and large language models in academia: opportunities and challenges. BioData Min 2023; 16:20. [PMID: 37443040 PMCID: PMC10339472 DOI: 10.1186/s13040-023-00339-9]
Abstract
The introduction in late 2022 of large language models (LLMs) that allow iterative "chat" was a paradigm shift, enabling the generation of text often indistinguishable from that written by humans. LLM-based chatbots have immense potential to improve academic work efficiency, but the ethical implications of their fair use and inherent bias must be considered. In this editorial, we discuss this technology from the academic's perspective with regard to its limitations and utility for academic writing, education, and programming. We end with our stance on using LLMs and chatbots in academia, summarized as follows: (1) we must find ways to use them effectively, (2) their use does not constitute plagiarism (although they may produce plagiarized text), (3) we must quantify their bias, (4) users must be cautious of their poor accuracy, and (5) the future is bright for their application to research and as an academic tool.
Affiliation(s)
- Jesse G Meyer
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Ryan J Urbanowicz
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Patrick C N Martin
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Karen O'Connor
- Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Ruowang Li
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Pei-Chen Peng
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Tiffani J Bright
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Nicholas Tatonetti
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Kyoung Jae Won
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Jason H Moore
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, California, USA
187
Ide K, Hawke P, Nakayama T. Can ChatGPT Be Considered an Author of a Medical Article? J Epidemiol 2023; 33:381-382. [PMID: 37032109 PMCID: PMC10257993 DOI: 10.2188/jea.je20230030]
Affiliation(s)
- Kazuki Ide
- Division of Scientific Information and Public Policy, Center for Infectious Disease Education and Research (CiDER), Osaka University, Osaka, Japan
- Research Center on Ethical, Legal and Social Issues, Osaka University, Osaka, Japan
- National Institute of Science and Technology Policy (NISTEP), Tokyo, Japan
- Philip Hawke
- School of Pharmaceutical Sciences, University of Shizuoka, Shizuoka, Japan
- Takeo Nakayama
- Department of Health Informatics, Graduate School of Medicine/School of Public Health, Kyoto University, Kyoto, Japan
188
Milton CL. ChatGPT and Forms of Deception. Nurs Sci Q 2023; 36:232-233. [PMID: 37309153 DOI: 10.1177/08943184231169753]
Abstract
The artificial intelligence (AI) chatbot ChatGPT has disrupted and permeated all aspects of the healthcare arena, including the discipline of nursing. The use of ChatGPT is ethically controversial. This article opens a discussion of the impacts of ChatGPT and the possibilities of deception when it is used in scientific and disciplinary publications and academic products.
Affiliation(s)
- Constance L Milton
- Professor Emeritus, School of Nursing, Azusa Pacific University, Azusa, CA, USA
189
Altmäe S, Sola-Leyva A, Salumets A. Artificial intelligence in scientific writing: a friend or a foe? Reprod Biomed Online 2023; 47:3-9. [PMID: 37142479 DOI: 10.1016/j.rbmo.2023.04.009]
Abstract
The generative pre-trained transformer ChatGPT is a chatbot that could serve as a powerful tool in scientific writing. ChatGPT is a so-called large language model (LLM), trained to mimic the statistical patterns of language in an enormous database of human-generated text compiled from books, articles and websites across a wide range of domains. ChatGPT can assist scientists with material organization, draft creation and proofreading, making it a valuable tool in research and publishing. This paper discusses the use of this artificial intelligence (AI) chatbot in academic writing by presenting one simplified example. Specifically, it reflects our experience of using ChatGPT to draft a scientific article for Reproductive BioMedicine Online and highlights the pros, cons and concerns associated with using LLM-based AI for generating a manuscript.
Affiliation(s)
- Signe Altmäe
- Department of Biochemistry and Molecular Biology, Faculty of Sciences, University of Granada, Granada, Spain; Instituto de Investigación Biosanitaria ibs.GRANADA, Granada, Spain; Division of Obstetrics and Gynecology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet and Karolinska University Hospital, Stockholm, Sweden
- Alberto Sola-Leyva
- Department of Biochemistry and Molecular Biology, Faculty of Sciences, University of Granada, Granada, Spain; Instituto de Investigación Biosanitaria ibs.GRANADA, Granada, Spain
- Andres Salumets
- Division of Obstetrics and Gynecology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet and Karolinska University Hospital, Stockholm, Sweden; Competence Centre on Health Technologies, Tartu, Estonia; Department of Obstetrics and Gynaecology, Institute of Clinical Medicine, University of Tartu, Tartu, Estonia
190
Lee SW, Choi WJ. Utilizing ChatGPT in clinical research related to anesthesiology: a comprehensive review of opportunities and limitations. Anesth Pain Med (Seoul) 2023; 18:244-251. [PMID: 37691594 PMCID: PMC10410543 DOI: 10.17085/apm.23056]
Abstract
Chat generative pre-trained transformer (ChatGPT) is a chatbot developed by OpenAI that answers questions in a human-like manner. ChatGPT is a GPT language model that understands and responds to natural language; it is built on the transformer, an artificial neural network architecture first introduced by Google in 2017. ChatGPT can be used to identify research topics and to proofread English writing and R scripts, improving work efficiency and saving time. Attempts to actively utilize generative artificial intelligence (AI) are expected to continue in clinical settings. However, ChatGPT still has many limitations for widespread use in clinical research, owing to AI hallucinations and the constraints of its training data. Researchers are advised to avoid scientific writing with ChatGPT in many traditional journals because of the current lack of originality guidelines and the risk of plagiarism in ChatGPT-generated content. Further regulations and discussions on these topics are expected in the future.
Affiliation(s)
- Sang-Wook Lee
- Department of Anesthesiology and Pain Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Woo-Jong Choi
- Department of Anesthesiology and Pain Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
191
Rawashdeh B, Kim J, AlRyalat SA, Prasad R, Cooper M. ChatGPT and Artificial Intelligence in Transplantation Research: Is It Always Correct? Cureus 2023; 15:e42150. [PMID: 37602076 PMCID: PMC10438857 DOI: 10.7759/cureus.42150]
Abstract
INTRODUCTION ChatGPT (OpenAI, San Francisco, California, United States) is a chatbot powered by language-based artificial intelligence (AI). It generates text based on the information provided by users and is currently being evaluated in medical research, publishing, and healthcare. However, there has been no prior study evaluating its ability to help in kidney transplant research. This feasibility study aimed to evaluate the application and accuracy of ChatGPT in the field of kidney transplantation. METHODS On two separate dates, February 21 and March 2, 2023, ChatGPT 3.5 was questioned regarding the medical treatment of kidney transplants and related scientific facts. The responses provided by the chatbot were compiled, and a panel of two specialists reviewed the correctness of each answer. RESULTS ChatGPT possessed substantial general knowledge of kidney transplantation; however, its responses lacked sufficient detail and contained inaccuracies on questions that necessitate a deeper understanding of the topic. Moreover, ChatGPT failed to provide references for any of the scientific data it supplied regarding kidney transplants, and when asked for references, it provided inaccurate ones. CONCLUSION The results of this short feasibility study indicate that ChatGPT may be able to assist in data collection when a particular query is posed. However, caution should be exercised, and it should not be used in isolation to supplement research or healthcare decisions, because challenges with data accuracy and missing information remain.
Affiliation(s)
- Badi Rawashdeh
- Transplant Surgery, Medical College of Wisconsin, Milwaukee, USA
- Joohyun Kim
- Transplant Surgery, Medical College of Wisconsin, Milwaukee, USA
- Raj Prasad
- Transplant Surgery, Medical College of Wisconsin, Milwaukee, USA
- Matthew Cooper
- Transplant Surgery, Medical College of Wisconsin, Milwaukee, USA
192
Mondal H, Mondal S, Podder I. Using ChatGPT for Writing Articles for Patients' Education for Dermatological Diseases: A Pilot Study. Indian Dermatol Online J 2023; 14:482-486. [PMID: 37521213 PMCID: PMC10373821 DOI: 10.4103/idoj.idoj_72_23]
Abstract
Background Patient education is a vital strategy for helping patients understand a disease and manage the condition properly. Physicians and academicians frequently prepare customized education materials for their patients, and an artificial intelligence (AI)-based writer can help them draft such articles. Chat Generative Pre-Trained Transformer (ChatGPT) is a conversational language model developed by OpenAI (openai.com) that can generate human-like responses. Objective We aimed to evaluate text generated by ChatGPT for its suitability in patient education. Materials and Methods We asked ChatGPT to list common dermatological diseases, and it provided a list of 14. We used the disease names to converse with the application using disease-specific input (e.g., "write a patient education guide on acne"). The text was copied and checked by software for word count, readability, and text similarity, and its accuracy was assessed by a dermatologist according to the structure of observed learning outcomes (SOLO) taxonomy. The observed readability ease score was compared with a benchmark of 30, and the observed similarity index with 15%, using a one-sample t-test. Results ChatGPT generated a text of 377.43 ± 60.85 words for a patient education guide on skin diseases. The average reading ease score was 46.94 ± 8.23 (P < 0.0001), indicating that the text can easily be understood by readers ranging from high-school students to newly joined college students. The text similarity index (27.07 ± 11.46%, P = 0.002) was higher than the expected limit of 15%. The text had a "relational" level of accuracy according to the SOLO taxonomy. Conclusion In its current form, ChatGPT can generate easily understood text for patient education. However, the similarity index is high; hence, doctors should be cautious when using text generated by ChatGPT and must check for text similarity before using it.
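The statistical step this abstract describes (testing whether the observed mean readability and similarity scores differ from fixed benchmarks of 30 and 15% with a one-sample t-test) can be sketched as follows. This is a minimal illustration; the sample scores below are invented placeholders, not the study's data:

```python
import math
import statistics

def one_sample_t(sample, benchmark):
    """Return the t statistic for H0: population mean == benchmark."""
    n = len(sample)
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)              # sample SD (n - 1 denominator)
    return (mean - benchmark) / (sd / math.sqrt(n))

# Hypothetical readability ease scores for 14 generated guides
# (placeholder values, not the values reported in the study)
readability = [46, 52, 39, 48, 55, 44, 41, 50, 47, 38, 53, 49, 45, 51]

# The study compared observed readability against a benchmark of 30
t_read = one_sample_t(readability, 30)
print(f"t = {t_read:.2f}")  # a large positive t: scores sit well above 30
```

With the study's reported mean of 46.94 against the benchmark of 30, a large positive t statistic, and hence P < 0.0001, is the expected outcome; the same procedure with a benchmark of 15 applies to the similarity index.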
Affiliation(s)
- Himel Mondal
- Department of Physiology, All India Institute of Medical Sciences, Deoghar, Jharkhand, India
- Shaikat Mondal
- Department of Physiology, Raiganj Government Medical College and Hospital, West Bengal, India
- Indrashis Podder
- Department of Dermatology, College of Medicine and Sagore Dutta Hospital, Kolkata, West Bengal, India
193
Grech V, Cuschieri S, Eldawlatly AA. Artificial intelligence in medicine and research - the good, the bad, and the ugly. Saudi J Anaesth 2023; 17:401-406. [PMID: 37601525 PMCID: PMC10435812 DOI: 10.4103/sja.sja_344_23]
Abstract
Artificial intelligence (AI) broadly refers to machines that simulate intelligent human behavior, and research into this field is growing exponentially worldwide, with global players such as Microsoft battling Google for supremacy and market share. This paper reviews the "good" aspects of AI in medicine, from the 4P model of medicine (Predictive, Preventive, Personalized, and Participatory) to medical assistants in diagnostics, surgery, and research. The "bad" aspects relate to the potential for errors, culpability, ethics, data loss and data breaches, and so on. The "ugly" aspects are deliberate personal malfeasances and outright scientific misconduct, including the ease of plagiarism and fabrication, with particular reference to the novel ChatGPT as well as AI software that can also fabricate graphs and images. The issues pertaining to the potential dangers of creating rogue, super-intelligent AI systems that lead to a technological singularity, and the ensuing perceived existential threat to mankind raised by leading AI researchers, are also briefly discussed.
194
Ritz T. Intelligence or artificial intelligence? More hard problems for authors of Biological Psychology, the neurosciences, and everyone else. Biol Psychol 2023; 181:108590. [PMID: 37236498 DOI: 10.1016/j.biopsycho.2023.108590]
Affiliation(s)
- Thomas Ritz
- Department of Psychology, Southern Methodist University, Dallas, TX, USA
195
Lingard L. Writing with ChatGPT: An Illustration of its Capacity, Limitations & Implications for Academic Writers. Perspect Med Educ 2023; 12:261-270. [PMID: 37397181 PMCID: PMC10312253 DOI: 10.5334/pme.1072]
196
Cadamuro J, Cabitza F, Debeljak Z, De Bruyne S, Frans G, Perez SM, Ozdemir H, Tolios A, Carobene A, Padoan A. Potentials and pitfalls of ChatGPT and natural-language artificial intelligence models for the understanding of laboratory medicine test results. An assessment by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Working Group on Artificial Intelligence (WG-AI). Clin Chem Lab Med 2023; 61:1158-1166. [PMID: 37083166 DOI: 10.1515/cclm-2023-0355]
Abstract
OBJECTIVES ChatGPT, a tool based on natural language processing (NLP), is on everyone's mind, and several potential applications in healthcare have already been proposed. However, since the ability of this tool to interpret laboratory test results has not yet been tested, the EFLM Working Group on Artificial Intelligence (WG-AI) set itself the task of closing this gap with a systematic approach. METHODS WG-AI members generated 10 simulated laboratory reports of common parameters, which were then passed to ChatGPT for interpretation, according to reference intervals (RI) and units, using an optimized prompt. The results were subsequently evaluated independently by all WG-AI members with respect to relevance, correctness, helpfulness and safety. RESULTS ChatGPT recognized all laboratory tests, could detect whether they deviated from the RI, and gave a test-by-test as well as an overall interpretation. The interpretations were rather superficial, not always correct, and judged coherent only in some cases. The magnitude of the deviation from the RI seldom played a role in the interpretation of laboratory tests, and the artificial intelligence (AI) did not make any meaningful suggestions regarding follow-up diagnostics or further procedures in general. CONCLUSIONS ChatGPT in its current form, not being specifically trained on medical data or laboratory data in particular, may at best be considered a tool capable of interpreting a laboratory report on a test-by-test basis, but not of interpreting the overall diagnostic picture. Future generations of similar AIs trained on medical ground-truth data might well revolutionize current processes in healthcare, although such an implementation is not ready yet.
Affiliation(s)
- Janne Cadamuro
- Department of Laboratory Medicine, Paracelsus Medical University Salzburg, Salzburg, Austria
- Federico Cabitza
- DISCo, Università degli Studi di Milano-Bicocca, Milano, Italy
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Zeljko Debeljak
- Faculty of Medicine, Josip Juraj Strossmayer University of Osijek, Osijek, Croatia
- Clinical Institute of Laboratory Diagnostics, University Hospital Center Osijek, Osijek, Croatia
- Sander De Bruyne
- Department of Laboratory Medicine, Ghent University Hospital, Ghent, Belgium
- Glynis Frans
- Department of Laboratory Medicine, University Hospitals Leuven, KU Leuven, Leuven, Belgium
- Salomon Martin Perez
- Unidad de Bioquímica Clínica, Hospital Universitario Virgen Macarena, Sevilla, Spain
- Habib Ozdemir
- Department of Medical Biochemistry, Faculty of Medicine, Manisa Celal Bayar University, Manisa, Türkiye
- Alexander Tolios
- Department of Transfusion Medicine and Cell Therapy, Medical University of Vienna, Vienna, Austria
- Anna Carobene
- IRCCS San Raffaele Scientific Institute, Milan, Italy
- Andrea Padoan
- Department of Medicine (DIMED), University of Padova, Padova, Italy
197
Sorin V, Klang E. Large language models and the emergence phenomena. Eur J Radiol Open 2023; 10:100494. [PMID: 37325497 PMCID: PMC10265451 DOI: 10.1016/j.ejro.2023.100494]
Abstract
This perspective explores the potential of emergence phenomena in large language models (LLMs) to transform data management and analysis in radiology. We provide a concise explanation of LLMs, define the concept of emergence in machine learning, offer examples of potential applications within the radiology field, and discuss risks and limitations. Our goal is to encourage radiologists to recognize and prepare for the impact this technology may have on radiology and medicine in the near future.
Affiliation(s)
- Vera Sorin
- Department of Diagnostic Imaging, Chaim Sheba Medical Center, affiliated to the Sackler School of Medicine, Tel-Aviv University, Israel
- DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
- Eyal Klang
- Department of Diagnostic Imaging, Chaim Sheba Medical Center, affiliated to the Sackler School of Medicine, Tel-Aviv University, Israel
- DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
- Sami Sagol AI Hub, ARC, Sheba Medical Center, Israel
198
Alqahtani T, Badreldin HA, Alrashed M, Alshaya AI, Alghamdi SS, Bin Saleh K, Alowais SA, Alshaya OA, Rahman I, Al Yami MS, Albekairy AM. The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research. Res Social Adm Pharm 2023:S1551-7411(23)00280-2. [PMID: 37321925 DOI: 10.1016/j.sapharm.2023.05.016]
Abstract
Artificial Intelligence (AI) has revolutionized various domains, including education and research. Natural language processing (NLP) techniques and large language models (LLMs) such as GPT-4 and BARD have significantly advanced our comprehension and application of AI in these fields. This paper provides an in-depth introduction to AI, NLP, and LLMs, discussing their potential impact on education and research. By exploring the advantages, challenges, and innovative applications of these technologies, this review gives educators, researchers, students, and readers a comprehensive view of how AI could shape educational and research practices in the future, ultimately leading to improved outcomes. Key applications discussed in the field of research include text generation, data analysis and interpretation, literature review, formatting and editing, and peer review. AI applications in academics and education include educational support and constructive feedback, assessment, grading, tailored curricula, personalized career guidance, and mental health support. Addressing the challenges associated with these technologies, such as ethical concerns and algorithmic biases, is essential for maximizing their potential to improve education and research outcomes. Ultimately, the paper aims to contribute to the ongoing discussion about the role of AI in education and research and highlight its potential to lead to better outcomes for students, educators, and researchers.
Affiliation(s)
- Tariq Alqahtani
- Department of Pharmaceutical Sciences, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Hisham A Badreldin
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Mohammed Alrashed
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Abdulrahman I Alshaya
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Sahar S Alghamdi
- Department of Pharmaceutical Sciences, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Khalid Bin Saleh
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Shuroug A Alowais
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Omar A Alshaya
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Ishrat Rahman
- Department of Basic Dental Sciences, College of Dentistry, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Majed S Al Yami
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Abdulkareem M Albekairy
- King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
199
Misra DP, Chandwar K. ChatGPT, artificial intelligence and scientific writing: What authors, peer reviewers and editors should know. J R Coll Physicians Edinb 2023; 53:90-93. [PMID: 37305993 DOI: 10.1177/14782715231181023]
Affiliation(s)
- Durga Prasanna Misra
- Department of Clinical Immunology and Rheumatology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, India
- Kunal Chandwar
- Department of Clinical Immunology and Rheumatology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, India
200
|
Tiwari A, Kumar A, Jain S, Dhull KS, Sajjanar A, Puthenkandathil R, Paiwal K, Singh R. Implications of ChatGPT in Public Health Dentistry: A Systematic Review. Cureus 2023; 15:e40367. [PMID: 37456464 PMCID: PMC10340128 DOI: 10.7759/cureus.40367] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2023] [Accepted: 06/10/2023] [Indexed: 07/18/2023] Open
Abstract
An artificial intelligence (AI) program called ChatGPT that generates text in response to typed commands has proven to be highly popular, as evidenced by the fact that OpenAI makes it available online. The goal of the present investigation was to investigate ChatGPT's potential applications as an outstanding instance of large language models (LLMs) in the fields of public dental health schooling, writing for academic use, research in public dental health, and clinical practice in public dental health based on the available data. Importantly, the goals of the current review included locating any drawbacks and issues that might be connected to using ChatGPT in the previously mentioned contexts in healthcare settings. Using search phrases including chatGPT, implications, artificial intelligence (AI), public health dentistry, public health, practice in public health dentistry, education in public health dentistry, academic writing in public health dentistry, etc., a thorough search was carried out on the Pubmed database, the Embase database, the Ovid database, the Global Health database, PsycINFO, and the Web of Science. The dates of publication were not restricted. Systematic searches were carried out for all publications according to inclusion and exclusion criteria between March 31, 2018, and March 31, 2023. Eighty-four papers were obtained through a literature search using search terms. Sixteen similar and duplicate papers were excluded and 68 distinct articles were initially selected. Thirty-three articles were excluded after reviewing abstracts and titles. Thirty-five papers were selected, for which full text was managed. Four extra papers were found manually from references. Thirty-nine articles with full texts were eligible for the study. Eighteen inadequate articles are excluded from the final 21 studies that were finally selected for systemic review. 
According to previously published studies, ChatGPT has demonstrated its effectiveness in helping scholars write scientific research, including dental studies. If the right prompt structures are created, ChatGPT can offer suitable responses and give scientists more time to concentrate on the experimentation phase. Risks include bias in the training data, undervaluing of human skills, the possibility of scientific fraud, and legal and reproducibility concerns. It was concluded that, whatever ChatGPT's potential significance, the originality of the research and its premise, the activity of the human brain, remain essential. While there is no question about the advantages of incorporating ChatGPT into the practice of public health dentistry, it does not in any way take the place of a dentist, since clinical practice involves more than making diagnoses; it also involves interpreting clinical findings and providing individualized patient care. Even though AI can be useful in a number of ways, a dentist must ultimately make the decision, because dentistry is a multidisciplinary field.
Affiliation(s)
- Anushree Tiwari
- Clinical Quality and Value, American Academy of Orthopaedic Surgeons, Rosemont, USA
| | - Amit Kumar
- Department of Dentistry, All India Institute of Medical Sciences, Patna, IND
| | - Shailesh Jain
- Department of Prosthodontics and Crown and Bridge, School of Dental Sciences, Sharda University, Greater Noida, IND
| | - Kanika S Dhull
- Department of Pedodontics and Preventive Dentistry, Kalinga Institute of Dental Sciences (KIIT) Deemed to be University, Bhubaneswar, IND
| | - Arunkumar Sajjanar
- Department of Pediatrics and Preventive Dentistry, Swargiya Dadasaheb Kalmegh Smruti Dental College and Hospital, Nagpur, IND
| | - Rahul Puthenkandathil
- Department of Prosthodontics and Crown and Bridge, AB Shetty Memorial Institute of Dental Sciences (ABSMIDS) Nitte (Deemed to be University), Mangalore, IND
| | - Kapil Paiwal
- Department of Oral and Maxillofacial Pathology, Daswani Dental College and Research Center, Kota, IND
| | - Ramanpal Singh
- Oral Medicine and Radiology, New Horizon Dental College and Research Institute, Bilaspur, IND
| |