151. Verhoeven F, Wendling D, Prati C. ChatGPT: when artificial intelligence replaces the rheumatologist in medical writing. Ann Rheum Dis 2023;82:1015-1017. PMID: 37041067; PMCID: PMC10359572; DOI: 10.1136/ard-2023-223936.
Abstract
In this editorial we discuss the place of artificial intelligence (AI) in the writing of scientific articles, and especially editorials. We asked ChatGPT "to write an editorial for Annals of Rheumatic Diseases about how AI may replace the rheumatologist in editorial writing". ChatGPT's response is diplomatic, describing AI as a tool to help the rheumatologist rather than replace them. AI is already used in medicine, especially in image analysis, but its possible domains of application are vast, and AI could soon assist, or even replace, rheumatologists in the writing of scientific articles. We discuss the ethical aspects and the future role of rheumatologists.
Affiliation(s)
- Frank Verhoeven: Rheumatology, CHU Besancon, Besancon, France; EA 4267 PEPITE, Université de Franche-Comté, Besancon, France
- Daniel Wendling: Rheumatology, CHU Besancon, Besancon, France; EA4266 EPILAB, Université de Franche-Comté, Besancon, France
- Clément Prati: Rheumatology, CHU Besancon, Besancon, France; EA 4267 PEPITE, Université de Franche-Comté, Besancon, France
152. Mondal H, Panigrahi M, Mishra B, Behera JK, Mondal S. A pilot study on the capability of artificial intelligence in preparation of patients' educational materials for Indian public health issues. J Family Med Prim Care 2023;12:1659-1662. PMID: 37767452; PMCID: PMC10521817; DOI: 10.4103/jfmpc.jfmpc_262_23.
Abstract
Background: Patient education is an essential component of improving public health, as it empowers individuals with the knowledge and skills necessary to make informed decisions about their health and well-being. Primary care physicians play a crucial role in patient education because they are the first point of contact between patients and the healthcare system. However, they may not have adequate time to prepare educational materials for their patients. An artificial intelligence-based writer such as ChatGPT can help physicians draft these materials.
Aim: This study aimed to ascertain the capability of ChatGPT to generate patient educational materials for common public health issues in India.
Materials and Methods: This observational study was conducted on the internet using the free research version of ChatGPT, a conversational artificial intelligence that can generate human-like text output. We prompted the program with the question "prepare a patients' education material for X in India", replacing X with the following words or phrases: "air pollution", "malnutrition", "maternal and child health", "mental health", "noncommunicable diseases", "road traffic accidents", "tuberculosis", and "water-borne diseases". The textual responses were collected and stored, then analyzed for readability, grammatical errors, and text similarity.
Results: We generated a total of eight educational documents, with a median of 26 (Q1-Q3: 21.5-34) sentences and a median of 349 (Q1-Q3: 329-450.5) words. The median Flesch Reading Ease Score was 48.2 (Q1-Q3: 39-50.65), indicating that the text can be understood by a college student. The text was grammatically correct, with very few errors (seven in 3,415 words). The text was very clear in the majority (8 out of 9) of documents, with a median clarity score of 85 (Q1-Q3: 82.5-85) out of 100. The overall text similarity index was 18% (Q1-Q3: 7.5-26%).
Conclusion: The research version of ChatGPT (January 30, 2023 version) can generate patient educational materials for common public health issues in India at a difficulty level suited to college students and with high grammatical accuracy. However, text similarity should be checked before use. Primary care physicians can use ChatGPT to help generate text for patient education materials.
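The Flesch Reading Ease Score reported above follows a fixed formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words). A minimal sketch in Python; the vowel-group syllable counter is a rough heuristic of our own (not the software used in the study), so scores will differ slightly from dedicated readability tools:

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of consecutive vowels (minimum 1 per word)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

On the usual interpretation table, a score near 48 (as reported in the study) falls in the "difficult, college level" band.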
Affiliation(s)
- Himel Mondal: Department of Physiology, All India Institute of Medical Sciences, Deoghar, Jharkhand, India
- Muralidhar Panigrahi: Department of Pharmacology, Bhima Bhoi Medical College and Hospital, Balangir, Odisha, India
- Baidyanath Mishra: Department of Physiology, Sri Jagannath Medical College and Hospital, Puri, Odisha, India
- Joshil K. Behera: Department of Physiology, Nagaland Institute of Medical Science and Research, Nagaland, India
- Shaikat Mondal: Department of Physiology, Raiganj Government Medical College and Hospital, West Bengal, India
153. Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, Baabdullah AM, Koohang A, Raghavan V, Ahuja M, Albanna H, Albashrawi MA, Al-Busaidi AS, Balakrishnan J, Barlette Y, Basu S, Bose I, Brooks L, Buhalis D, Carter L, Chowdhury S, Crick T, Cunningham SW, Davies GH, Davison RM, Dé R, Dennehy D, Duan Y, Dubey R, Dwivedi R, Edwards JS, Flavián C, Gauld R, Grover V, Hu MC, Janssen M, Jones P, Junglas I, Khorana S, Kraus S, Larsen KR, Latreille P, Laumer S, Malik FT, Mardani A, Mariani M, Mithas S, Mogaji E, Nord JH, O'Connor S, Okumus F, Pagani M, Pandey N, Papagiannidis S, Pappas IO, Pathak N, Pries-Heje J, Raman R, Rana NP, Rehm SV, Ribeiro-Navarrete S, Richter A, Rowe F, Sarker S, Stahl BC, Tiwari MK, van der Aalst W, Venkatesh V, Viglia G, Wade M, Walton P, Wirtz J, Wright R. "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management 2023. DOI: 10.1016/j.ijinfomgt.2023.102642.
154. Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med 2023;29:1930-1940. PMID: 37460753; DOI: 10.1038/s41591-023-02448-8.
Abstract
Large language models (LLMs) can respond to free-text queries without being specifically trained in the task in question, causing excitement and concern about their use in healthcare settings. ChatGPT is a generative artificial intelligence (AI) chatbot produced through sophisticated fine-tuning of an LLM, and other tools are emerging through similar developmental processes. Here we outline how LLM applications such as ChatGPT are developed, and we discuss how they are being leveraged in clinical settings. We consider the strengths and limitations of LLMs and their potential to improve the efficiency and effectiveness of clinical, educational and research work in medicine. LLM chatbots have already been deployed in a range of biomedical contexts, with impressive but mixed results. This review acts as a primer for interested clinicians, who will determine if and how LLM technology is used in healthcare for the benefit of patients and practitioners.
Affiliation(s)
- Arun James Thirunavukarasu: University of Cambridge School of Clinical Medicine, Cambridge, UK; Corpus Christi College, University of Cambridge, Cambridge, UK
- Darren Shu Jeng Ting: Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; Birmingham and Midland Eye Centre, Birmingham, UK; Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, UK
- Kabilan Elangovan: Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Laura Gutierrez: Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ting Fang Tan: Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; Department of Ophthalmology and Visual Sciences, Duke-National University of Singapore Medical School, Singapore, Singapore
- Daniel Shu Wei Ting: Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; Department of Ophthalmology and Visual Sciences, Duke-National University of Singapore Medical School, Singapore, Singapore; Byers Eye Institute, Stanford University, Palo Alto, CA, USA
155. Dergaa I, Chamari K, Glenn JM, Ben Aissa M, Guelmami N, Ben Saad H. Towards responsible research: examining the need for preprint policy reassessment in the era of artificial intelligence. EXCLI Journal 2023;22:686-689. PMID: 37662707; PMCID: PMC10471843; DOI: 10.17179/excli2023-6324.
Affiliation(s)
- Ismail Dergaa: Primary Health Care Corporation (PHCC), Doha, Qatar; Research Unit Physical Activity, Sport, and Health, UR18JS01, National Observatory of Sport, Tunis 1003, Tunisia; High Institute of Sport and Physical Education, University of Sfax, Sfax, Tunisia
- Karim Chamari: Aspetar, Orthopedic and Sports Medicine Hospital, FIFA Medical Center of Excellence, Doha, Qatar
- Mohamed Ben Aissa: Department of Human and Social Sciences, High Institute of Sport and Physical Education of Kef, University of Jendouba, Kef, Tunisia
- Noomen Guelmami: Postgraduate School of Public Health, Department of Health Sciences (DISSAL), University of Genoa, Genoa, Italy
- Helmi Ben Saad: University of Sousse, Farhat HACHED Hospital, Service of Physiology and Functional Explorations, Sousse, Tunisia; University of Sousse, Farhat HACHED Hospital, Research Laboratory LR12SP09 "Heart Failure", Sousse, Tunisia; University of Sousse, Faculty of Medicine of Sousse, Laboratory of Physiology, Sousse, Tunisia
156. Liang W, Yuksekgonul M, Mao Y, Wu E, Zou J. GPT detectors are biased against non-native English writers. Patterns (N Y) 2023;4:100779. PMID: 37521038; PMCID: PMC10382961; DOI: 10.1016/j.patter.2023.100779.
Abstract
GPT detectors frequently misclassify non-native English writing as AI generated, raising concerns about fairness and robustness. Addressing the biases in these detectors is crucial to prevent the marginalization of non-native English speakers in evaluative and educational settings and to create a more equitable digital landscape.
Affiliation(s)
- Weixin Liang: Department of Computer Science, Stanford University, Stanford, CA, USA
- Mert Yuksekgonul: Department of Computer Science, Stanford University, Stanford, CA, USA
- Yining Mao: Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Eric Wu: Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- James Zou: Department of Computer Science, Stanford University, Stanford, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
157. Yoo JH. Let's Look on the Bright Side of ChatGPT. J Korean Med Sci 2023;38:e231. PMID: 37431546; PMCID: PMC10332949; DOI: 10.3346/jkms.2023.38.e231.
Affiliation(s)
- Jin-Hong Yoo: Division of Infectious Diseases, Department of Internal Medicine, Bucheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea; Deputy Editor, Journal of Korean Medical Science
158. Deiana G, Dettori M, Arghittu A, Azara A, Gabutti G, Castiglia P. Artificial Intelligence and Public Health: Evaluating ChatGPT Responses to Vaccination Myths and Misconceptions. Vaccines (Basel) 2023;11:1217. PMID: 37515033; PMCID: PMC10386180; DOI: 10.3390/vaccines11071217.
Abstract
Artificial intelligence (AI) tools such as ChatGPT are the subject of intense debate regarding their possible applications in contexts such as health care. This study evaluates the correctness, clarity, and exhaustiveness of the answers provided by ChatGPT on the topic of vaccination. The World Health Organization's 11 "myths and misconceptions" about vaccinations were administered to both the free (GPT-3.5) and paid (GPT-4.0) versions of ChatGPT. The AI tool's responses were evaluated qualitatively and quantitatively, with reference to the myths and misconceptions provided by the WHO, independently by two expert raters. The agreement between the raters was significant for both versions (p of K < 0.05). Overall, ChatGPT's responses were easy to understand and 85.4% accurate, although one of the questions was misinterpreted. Qualitatively, the GPT-4.0 responses were superior to the GPT-3.5 responses in correctness, clarity, and exhaustiveness (Δ = 5.6%, 17.9%, and 9.3%, respectively). The study shows that, if appropriately questioned, AI tools can be a useful aid in the health care field. However, when consulted by non-expert users without the support of expert medical advice, these tools are not free from the risk of eliciting misleading responses. Moreover, given the existing social divide in information access, the greater accuracy of the paid version's answers raises further ethical issues.
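Inter-rater agreement of the kind reported above ("p of K < 0.05") is typically quantified with Cohen's kappa, κ = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is chance agreement. A minimal sketch of that standard formula; the study's exact statistic and software are not specified here, so this is only an illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e) for two raters' category labels."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("need two equal-length, non-empty label sequences")
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a if c in freq_b) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

Perfect agreement gives κ = 1.0, while agreement no better than chance gives κ = 0.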
Affiliation(s)
- Giovanna Deiana: Department of Biomedical Sciences, University of Sassari, 07100 Sassari, Italy; Department of Medical, Surgical and Experimental Sciences, University Hospital of Sassari, 07100 Sassari, Italy
- Marco Dettori: Department of Medical, Surgical and Experimental Sciences, University Hospital of Sassari, 07100 Sassari, Italy; Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy; Department of Restorative, Pediatric and Preventive Dentistry, University of Bern, 3012 Bern, Switzerland
- Antonella Arghittu: Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy
- Antonio Azara: Department of Medical, Surgical and Experimental Sciences, University Hospital of Sassari, 07100 Sassari, Italy; Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy
- Giovanni Gabutti: Working Group "Vaccines and Immunization Policies", Italian Society of Hygiene, Preventive Medicine and Public Health, 16030 Cogorno, Italy
- Paolo Castiglia: Department of Medical, Surgical and Experimental Sciences, University Hospital of Sassari, 07100 Sassari, Italy; Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy; Working Group "Vaccines and Immunization Policies", Italian Society of Hygiene, Preventive Medicine and Public Health, 16030 Cogorno, Italy
159. Ide K, Hawke P, Nakayama T. Can ChatGPT Be Considered an Author of a Medical Article? J Epidemiol 2023;33:381-382. PMID: 37032109; PMCID: PMC10257993; DOI: 10.2188/jea.je20230030.
Affiliation(s)
- Kazuki Ide: Division of Scientific Information and Public Policy, Center for Infectious Disease Education and Research (CiDER), Osaka University, Osaka, Japan; Research Center on Ethical, Legal and Social Issues, Osaka University, Osaka, Japan; National Institute of Science and Technology Policy (NISTEP), Tokyo, Japan
- Philip Hawke: School of Pharmaceutical Sciences, University of Shizuoka, Shizuoka, Japan
- Takeo Nakayama: Department of Health Informatics, Graduate School of Medicine/School of Public Health, Kyoto University, Kyoto, Japan
160. Fan Z, Yang Q, Xia H, Zhang P, Sun K, Yang M, Yin R, Zhao D, Ma H, Shen Y, Fan J. Artificial intelligence can accurately distinguish IgA nephropathy from diabetic nephropathy under Masson staining and becomes an important assistant for renal pathologists. Front Med (Lausanne) 2023;10:1066125. PMID: 37469661; PMCID: PMC10352102; DOI: 10.3389/fmed.2023.1066125.
Abstract
Introduction: Hyperplasia of the mesangial area is common in both IgA nephropathy (IgAN) and diabetic nephropathy (DN), and the two are often difficult to distinguish by light microscopy alone, especially in the absence of clinical data. Artificial intelligence (AI) is now widely used in pathological diagnosis, but mainly in tumor pathology; its application in renal pathology is still in its infancy.
Methods: Patients diagnosed with IgAN or DN by renal biopsy at the First Affiliated Hospital of Zhejiang Chinese Medicine University from September 1, 2020 to April 30, 2022 formed the training set, and patients diagnosed from May 1, 2022 to June 30, 2022 formed the test set. We focused on the glomerulus, capturing glomerular fields from Masson-stained whole-slide images at 200x magnification, all as 1,000 × 1,000-pixel JPEG files. We augmented the training set through minor affine transformations and then randomly split it 8:2 into training and adjustment data. The training data and the YOLOv5 6.1 algorithm were used to train the AI model, with parameters tuned continually against the adjustment data. The optimal model was then evaluated on the test set and compared with renal pathologists.
Results: The AI detected glomeruli accurately: overall detection accuracy was 98.67%, the omission rate was only 1.30%, and no intact glomerulus was missed. The overall classification accuracy reached 73.24%, with 77.27% for IgAN and 69.59% for DN; the AUC was 0.733 for IgAN and 0.627 for DN. In addition, compared with renal pathologists, the AI distinguished IgAN from DN more quickly and accurately, and with higher consistency.
Discussion: We constructed an AI model based on Masson-stained images of renal tissue to distinguish IgAN from DN. The model has also been successfully deployed to assist renal pathologists in their daily diagnostic and teaching work.
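The 8:2 training/adjustment split described in the methods is a routine preprocessing step; a minimal sketch (the function name and seed are illustrative, not taken from the paper):

```python
import random

def split_train_adjust(items, train_frac=0.8, seed=42):
    """Shuffle a dataset reproducibly and split it into training and adjustment subsets."""
    rng = random.Random(seed)   # fixed seed keeps the split reproducible
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```

For 100 glomerular image paths this yields 80 training and 20 adjustment samples.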
Affiliation(s)
- Zhenliang Fan: Nephrology Department, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Traditional Chinese Medicine), Hangzhou, China; Academy of Chinese Medical Science, Zhejiang Chinese Medical University, Hangzhou, China
- Qiaorui Yang: Harbin Institute of Physical Education, Harbin, China
- Hong Xia: Nephrology Department, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Traditional Chinese Medicine), Hangzhou, China
- Peipei Zhang: Nephrology Department, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Traditional Chinese Medicine), Hangzhou, China
- Ke Sun: Graduate School, Zhejiang Chinese Medical University, Hangzhou, China
- Mengfan Yang: Chengdu University of Traditional Chinese Medicine, Chengdu, China
- Riping Yin: Nephrology and Endocrinology Department, Pinghu Hospital of Traditional Chinese Medicine, Pinghu, China
- Dongxue Zhao: Harbin Institute of Physical Education, Harbin, China
- Hongzhen Ma: Nephrology Department, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Traditional Chinese Medicine), Hangzhou, China
- Yiwei Shen: Ningbo Municipal Hospital of Traditional Chinese Medicine (Affiliated Hospital of Zhejiang Chinese Medical University), Ningbo, China
- Junfen Fan: Nephrology Department, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Traditional Chinese Medicine), Hangzhou, China
161. Altmäe S, Sola-Leyva A, Salumets A. Artificial intelligence in scientific writing: a friend or a foe? Reprod Biomed Online 2023;47:3-9. PMID: 37142479; DOI: 10.1016/j.rbmo.2023.04.009.
Abstract
The generative pre-trained transformer ChatGPT is a chatbot that could serve as a powerful tool in scientific writing. ChatGPT is a so-called large language model (LLM), trained to mimic the statistical patterns of language in an enormous database of human-generated text drawn from books, articles and websites across a wide range of domains. ChatGPT can assist scientists with material organization, draft creation and proofreading, making it a valuable tool in research and publishing. This paper discusses the use of this artificial intelligence (AI) chatbot in academic writing by presenting one simplified example. Specifically, it reflects our experience of using ChatGPT to draft a scientific article for Reproductive BioMedicine Online and highlights the pros, cons and concerns associated with using LLM-based AI to generate a manuscript.
Affiliation(s)
- Signe Altmäe: Department of Biochemistry and Molecular Biology, Faculty of Sciences, University of Granada, Granada, Spain; Instituto de Investigación Biosanitaria ibs.GRANADA, Granada, Spain; Division of Obstetrics and Gynecology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet and Karolinska University Hospital, Stockholm, Sweden
- Alberto Sola-Leyva: Department of Biochemistry and Molecular Biology, Faculty of Sciences, University of Granada, Granada, Spain; Instituto de Investigación Biosanitaria ibs.GRANADA, Granada, Spain
- Andres Salumets: Division of Obstetrics and Gynecology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet and Karolinska University Hospital, Stockholm, Sweden; Competence Centre on Health Technologies, Tartu, Estonia; Department of Obstetrics and Gynaecology, Institute of Clinical Medicine, University of Tartu, Tartu, Estonia
162. Lee SW, Choi WJ. Utilizing ChatGPT in clinical research related to anesthesiology: a comprehensive review of opportunities and limitations. Anesth Pain Med (Seoul) 2023;18:244-251. PMID: 37691594; PMCID: PMC10410543; DOI: 10.17085/apm.23056.
Abstract
Chat Generative Pre-trained Transformer (ChatGPT) is a chatbot developed by OpenAI that answers questions in a human-like manner. ChatGPT is a GPT language model that understands and responds to natural language, created using the transformer, a new artificial neural network algorithm first introduced by Google in 2017. ChatGPT can be used to identify research topics and to proofread English writing and R scripts, improving work efficiency and freeing up time. Attempts to actively utilize generative artificial intelligence (AI) are expected to continue in clinical settings. However, ChatGPT still has many limitations for widespread use in clinical research, owing to AI hallucinations and the constraints of its training data. Because of the current lack of originality guidelines and the risk of plagiarism in ChatGPT-generated content, researchers recommend avoiding scientific writing with ChatGPT in many traditional journals. Further regulations and discussions on these topics are expected in the future.
Affiliation(s)
- Sang-Wook Lee: Department of Anesthesiology and Pain Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Woo-Jong Choi: Department of Anesthesiology and Pain Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
163. Rawashdeh B, Kim J, AlRyalat SA, Prasad R, Cooper M. ChatGPT and Artificial Intelligence in Transplantation Research: Is It Always Correct? Cureus 2023;15:e42150. PMID: 37602076; PMCID: PMC10438857; DOI: 10.7759/cureus.42150.
Abstract
Introduction: ChatGPT (OpenAI, San Francisco, California, United States) is a chatbot powered by language-based artificial intelligence (AI). It generates text based on the information provided by users and is currently being evaluated in medical research, publishing, and healthcare. However, there has been no prior study evaluating its ability to support kidney transplant research. This feasibility study aimed to evaluate the application and accuracy of ChatGPT in the field of kidney transplantation.
Methods: On two separate dates, February 21 and March 2, 2023, ChatGPT 3.5 was questioned about the medical management of kidney transplants and related scientific facts. The responses provided by the chatbot were compiled, and a panel of two specialists reviewed the correctness of each answer.
Results: ChatGPT possessed substantial general knowledge of kidney transplantation, but its answers omitted relevant information and contained inaccuracies that call for a deeper understanding of the topic. Moreover, ChatGPT failed to provide references for any of the scientific data it supplied about kidney transplants, and when asked for references, it provided inaccurate ones.
Conclusion: The results of this short feasibility study indicate that ChatGPT may be able to assist with data collection when a particular query is posed. However, caution should be exercised, and it should not be used in isolation to support research or healthcare decisions, because challenges with data accuracy and missing information remain.
Affiliation(s)
- Badi Rawashdeh: Transplant Surgery, Medical College of Wisconsin, Milwaukee, USA
- Joohyun Kim: Transplant Surgery, Medical College of Wisconsin, Milwaukee, USA
- Raj Prasad: Transplant Surgery, Medical College of Wisconsin, Milwaukee, USA
- Matthew Cooper: Transplant Surgery, Medical College of Wisconsin, Milwaukee, USA
164. Mondal H, Mondal S, Podder I. Using ChatGPT for Writing Articles for Patients' Education for Dermatological Diseases: A Pilot Study. Indian Dermatol Online J 2023;14:482-486. PMID: 37521213; PMCID: PMC10373821; DOI: 10.4103/idoj.idoj_72_23.
Abstract
Background: Patient education is a vital strategy for helping patients understand a disease and manage the condition properly. Physicians and academicians frequently prepare customized education materials for their patients, and an artificial intelligence (AI)-based writer can help them write such articles. Chat Generative Pre-Trained Transformer (ChatGPT), a conversational language model developed by OpenAI (openai.com), can generate human-like responses.
Objective: We aimed to evaluate text generated by ChatGPT for its suitability in patient education.
Materials and Methods: We asked ChatGPT to list common dermatological diseases; it provided a list of 14. We then conversed with the application using disease-specific input (e.g., "write a patient education guide on acne"). The text was copied and checked by software for word count, readability, and text similarity. Its accuracy was assessed by a dermatologist using the structure of observed learning outcomes (SOLO) taxonomy. The observed readability ease score was compared with a reference score of 30, and the observed similarity index was compared with 15% using a one-sample t-test.
Results: ChatGPT generated patient education guides on skin diseases of 377.43 ± 60.85 words. The average reading ease score was 46.94 ± 8.23 (P < 0.0001), indicating text that can easily be understood by a high-school student to a newly joined college student. The text similarity index was higher (27.07 ± 11.46%, P = 0.002) than the expected limit of 15%. The text had a "relational" level of accuracy according to the SOLO taxonomy.
Conclusion: In its current form, ChatGPT can generate patient education text that is easily understood. However, the similarity index is high; hence, doctors should be cautious when using ChatGPT-generated text and must check for text similarity before use.
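Both pilot studies above flag the text similarity index as the main caveat. Commercial plagiarism checkers use far more sophisticated fingerprinting, but the underlying idea can be illustrated with a simple word-set Jaccard overlap (an illustration of the concept, not the metric those tools actually compute):

```python
import re

def similarity_index(text_a: str, text_b: str) -> float:
    """Percentage Jaccard overlap between the word sets of two texts."""
    words_a = set(re.findall(r"[a-z']+", text_a.lower()))
    words_b = set(re.findall(r"[a-z']+", text_b.lower()))
    union = words_a | words_b
    if not union:
        return 0.0
    return 100.0 * len(words_a & words_b) / len(union)
```

Identical texts score 100%, fully disjoint texts 0%; a draft would be flagged when the score against a published source exceeds the chosen threshold (e.g., 15%).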
Affiliation(s)
- Himel Mondal: Department of Physiology, All India Institute of Medical Sciences, Deoghar, Jharkhand, India
- Shaikat Mondal: Department of Physiology, Raiganj Government Medical College and Hospital, West Bengal, India
- Indrashis Podder: Department of Dermatology, College of Medicine and Sagore Dutta Hospital, Kolkata, West Bengal, India
165. Park I, Joshi AS, Javan R. Potential role of ChatGPT in clinical otolaryngology explained by ChatGPT. Am J Otolaryngol 2023;44:103873. PMID: 37004317; DOI: 10.1016/j.amjoto.2023.103873.
Affiliation(s)
- Isabel Park: Division of Otolaryngology-Head and Neck Surgery, George Washington University School of Medicine & Health Sciences, Washington, DC, USA
- Arjun S Joshi: Division of Otolaryngology-Head and Neck Surgery, George Washington University School of Medicine & Health Sciences, Washington, DC, USA
- Ramin Javan: Department of Radiology, George Washington University Hospital, Washington, DC, USA
166
|
He Y, Tang H, Wang D, Gu S, Ni G, Wu H. Will ChatGPT/GPT-4 be a Lighthouse to Guide Spinal Surgeons? Ann Biomed Eng 2023; 51:1362-1365. [PMID: 37071280 DOI: 10.1007/s10439-023-03206-0]
Abstract
The advent of artificial intelligence (AI), particularly ChatGPT/GPT-4, has led to advancements in various fields, including healthcare. This study explores the prospective role of ChatGPT/GPT-4 in various facets of spinal surgical practice, especially in supporting spinal surgeons during the perioperative management of endoscopic spinal surgery for patients with lumbar disc herniation. The AI-driven chatbot can facilitate communication between spinal surgeons, patients, and their relatives, streamline the collection and analysis of patient data, and contribute to the surgical planning process. Furthermore, ChatGPT/GPT-4 may enhance intraoperative support by providing real-time surgical navigation information and physiological parameter monitoring, as well as aiding in postoperative rehabilitation guidance. However, the appropriate and supervised use of ChatGPT/GPT-4 is essential, considering the potential risks associated with data security and privacy. The study concludes that ChatGPT/GPT-4 can serve as a valuable lighthouse for spinal surgeons if used correctly and responsibly.
Affiliation(s)
- Yongbin He: School of Sport Medicine and Rehabilitation, Beijing Sport University, Beijing, China
- Haifeng Tang: School of Sport Medicine and Rehabilitation, Beijing Sport University, Beijing, China
- Dongxue Wang: School of Sport Medicine and Rehabilitation, Beijing Sport University, Beijing, China
- Shuqin Gu: Duke Human Vaccine Institute, Duke University Medical Center, Durham, NC, USA
- Guoxin Ni: Department of Rehabilitation Medicine, The First Affiliated Hospital of Xiamen University, Xiamen, China
- Haiyang Wu: Department of Spine Surgery, Tianjin Huanhu Hospital, Graduate School of Tianjin Medical University, Tianjin, China; Duke Molecular Physiology Institute, Duke University School of Medicine, Durham, NC, USA

167
Lingard L. Writing with ChatGPT: An Illustration of its Capacity, Limitations & Implications for Academic Writers. Perspectives on Medical Education 2023; 12:261-270. [PMID: 37397181 PMCID: PMC10312253 DOI: 10.5334/pme.1072]

168
Farina M, Lavazza A. ChatGPT in society: emerging issues. Front Artif Intell 2023; 6:1130913. [PMID: 37396972 PMCID: PMC10310434 DOI: 10.3389/frai.2023.1130913]
Abstract
We review and critically assess several issues arising from the potential large-scale implementation or deployment of Large Language Models (LLMs) in society. These include security, political, economic, cultural, and educational issues, as well as issues concerning social biases, creativity, copyright, and freedom of speech. We argue, without a preconceived pessimism toward these tools, that they may bring about many benefits. However, we also call for a balanced assessment of their downsides. While our work is only preliminary and certainly partial, it nevertheless holds value as one of the first exploratory attempts in the literature.
Affiliation(s)
- Mirko Farina: Faculty of Humanities and Social Science, HMI Lab, Innopolis University, Innopolis, Russia
- Andrea Lavazza: Brain and Behavioral Sciences, University of Pavia, Pavia, Lombardy, Italy; Centro Universitario Internazionale, Arezzo, Italy

169
Affiliation(s)
- Andrew S Bi: NYU Langone Orthopedic Hospital, New York, NY

170
Ilicki J. A Framework for Critically Assessing ChatGPT and Other Large Language Artificial Intelligence Model Applications in Health Care. Mayo Clinic Proceedings: Digital Health 2023; 1:185-188. [PMID: 40206723 PMCID: PMC11975638 DOI: 10.1016/j.mcpdig.2023.03.006]

171
Misra DP, Chandwar K. ChatGPT, artificial intelligence and scientific writing: What authors, peer reviewers and editors should know. J R Coll Physicians Edinb 2023; 53:90-93. [PMID: 37305993 DOI: 10.1177/14782715231181023]
Affiliation(s)
- Durga Prasanna Misra: Department of Clinical Immunology and Rheumatology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, India
- Kunal Chandwar: Department of Clinical Immunology and Rheumatology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, India

172
Ocampo TSC, Silva TP, Alencar-Palha C, Haiter-Neto F, Oliveira ML. ChatGPT and scientific writing: A reflection on the ethical boundaries. Imaging Sci Dent 2023; 53:175-176. [PMID: 37405199 PMCID: PMC10315235 DOI: 10.5624/isd.20230085]
Affiliation(s)
- Thaís Santos Cerqueira Ocampo: Division of Oral Radiology, Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, SP, Brazil
- Thaísa Pinheiro Silva: Division of Oral Radiology, Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, SP, Brazil
- Caio Alencar-Palha: Division of Oral Radiology, Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, SP, Brazil
- Francisco Haiter-Neto: Division of Oral Radiology, Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, SP, Brazil
- Matheus L. Oliveira: Division of Oral Radiology, Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, SP, Brazil

173
Agrawal A. Laying an equitable data foundation for foundation models. The Lancet Regional Health - Southeast Asia 2023; 13:100221. [PMID: 37383558 PMCID: PMC10305916 DOI: 10.1016/j.lansea.2023.100221]

174
Yu H. Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Front Psychol 2023; 14:1181712. [PMID: 37325766 PMCID: PMC10267436 DOI: 10.3389/fpsyg.2023.1181712]
Affiliation(s)
- Hao Yu: Faculty of Education, Shaanxi Normal University, Xi'an, Shaanxi, China

175
Abstract
ChatGPT and other artificial intelligence word-prediction large language models are now readily available to the public. Program directors should be aware of the general features of this technology and consider its effect on graduate medical education, including the preparation of materials such as personal statements. The authors provide a sample ChatGPT-generated personal statement and general considerations for program directors and other graduate medical education stakeholders. The authors advocate that programs and applicants will be best served by transparent expectations about how/if programs will accept application materials created using artificial intelligence, starting with this application cycle. Graduate medical education will have many additional factors to consider for the innovative use of, and safeguards for the ethical application of, artificial intelligence in clinical care and educational processes. However, the exponential increase in the application of this technology requires an urgent review for appropriate management of program procedures, iteration of policies, and a meaningful national discussion.
Affiliation(s)
- Jennifer M Zumsteg: UW Medicine Valley Medical Center, Renton, Washington (JMZ); Division of Rehabilitation Psychology and Neuropsychology, Department of Rehabilitation Medicine, University of Washington School of Medicine, Seattle, Washington (JMZ); and Department of Rehabilitation Medicine, University of Washington School of Medicine, Seattle, Washington (CJ)

176
Hallsworth JE, Udaondo Z, Pedrós-Alió C, Höfer J, Benison KC, Lloyd KG, Cordero RJB, de Campos CBL, Yakimov MM, Amils R. Scientific novelty beyond the experiment. Microb Biotechnol 2023; 16:1131-1173. [PMID: 36786388 PMCID: PMC10221578 DOI: 10.1111/1751-7915.14222]
Abstract
Practical experiments drive important scientific discoveries in biology, but theory-based research studies also contribute novel, sometimes paradigm-changing, findings. Here, we appraise the roles of theory-based approaches focusing on the experiment-dominated wet-biology research areas of microbial growth and survival, cell physiology, host-pathogen interactions, and competitive or symbiotic interactions. Additional examples relate to analyses of genome-sequence data, climate change and planetary health, habitability, and astrobiology. We assess the importance of thought at each step of the research process; the roles of natural philosophy, and inconsistencies in logic and language, as drivers of scientific progress; the value of thought experiments; the use and limitations of artificial intelligence technologies, including their potential for interdisciplinary and transdisciplinary research; and other instances when theory is the most-direct and most-scientifically robust route to scientific novelty, including the development of techniques for practical experimentation or fieldwork. We highlight the intrinsic need for human engagement in scientific innovation, an issue pertinent to the ongoing controversy over papers authored using/authored by artificial intelligence (such as the large language model/chatbot ChatGPT). Other issues discussed are the way in which aspects of language can bias thinking towards the spatial rather than the temporal (and how this biased thinking can lead to skewed scientific terminology); receptivity to research that is non-mainstream; and the importance of theory-based science in education and epistemology. Whereas we briefly highlight classic works (those by Oakes Ames, Francis H.C. Crick and James D. Watson, Charles R. Darwin, Albert Einstein, James E. Lovelock, Lynn Margulis, Gilbert Ryle, Erwin R.J.A. Schrödinger, Alan M. Turing, and others), the focus is on microbiology studies that are more recent, discussing these in the context of the scientific process and the types of scientific novelty that they represent. These include several studies carried out during the 2020 to 2022 lockdowns of the COVID-19 pandemic, when access to research laboratories was disallowed (or limited). We interviewed the authors of some of the featured microbiology-related papers and, although we ourselves are involved in laboratory experiments and practical fieldwork, also drew from our own research experiences, showing that such studies can not only produce new scientific findings but can also transcend barriers between disciplines, act counter to scientific reductionism, integrate biological data across different timescales and levels of complexity, and circumvent constraints imposed by practical techniques. In relation to urgent research needs, we believe that climate change and other global challenges may require approaches beyond the experiment.
Affiliation(s)
- John E. Hallsworth: Institute for Global Food Security, School of Biological Sciences, Queen's University Belfast, Belfast, UK
- Zulema Udaondo: Department of Biomedical Informatics, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Carlos Pedrós-Alió: Department of Systems Biology, Centro Nacional de Biotecnología (CSIC), Madrid, Spain
- Juan Höfer: Escuela de Ciencias del Mar, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
- Kathleen C. Benison: Department of Geology and Geography, West Virginia University, Morgantown, West Virginia, USA
- Karen G. Lloyd: Microbiology Department, University of Tennessee, Knoxville, Tennessee, USA
- Radamés J. B. Cordero: Department of Molecular Microbiology and Immunology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
- Claudia B. L. de Campos: Institute of Science and Technology, Universidade Federal de Sao Paulo (UNIFESP), São José dos Campos, SP, Brazil
- Ricardo Amils: Department of Molecular Biology, Centro de Biología Molecular Severo Ochoa (CSIC-UAM), Nicolás Cabrera n° 1, Universidad Autónoma de Madrid, Madrid, Spain; Department of Planetology and Habitability, Centro de Astrobiología (INTA-CSIC), Torrejón de Ardoz, Spain

177
Levin G, Meyer R, Kadoch E, Brezinov Y. Identifying ChatGPT-written OBGYN abstracts using a simple tool. Am J Obstet Gynecol MFM 2023; 5:100936. [PMID: 36931435 DOI: 10.1016/j.ajogmf.2023.100936]
Affiliation(s)
- Gabriel Levin: Department of Obstetrics and Gynecology, Hadassah-Hebrew University Medical Center, Jerusalem, Israel; Lady Davis Institute for Cancer Research, Jewish General Hospital, McGill University, Quebec, Canada
- Raanan Meyer: Division of Minimally Invasive Gynecologic Surgery, Department of Obstetrics and Gynecology, Cedars-Sinai Medical Center, Los Angeles, CA; The Dr. Pinchas Bornstein Talpiot Medical Leadership Program, Sheba Medical Center, Tel Hashomer, Ramat-Gan, Israel
- Eva Kadoch: Lady Davis Institute for Cancer Research, Jewish General Hospital, McGill University, Quebec, Canada
- Yoav Brezinov: Department of Experimental Surgery, McGill University, Quebec, Canada

178
Shoja MM, Van de Ridder JMM, Rajput V. The Emerging Role of Generative Artificial Intelligence in Medical Education, Research, and Practice. Cureus 2023; 15:e40883. [PMID: 37492829 PMCID: PMC10363933 DOI: 10.7759/cureus.40883]
Abstract
Recent breakthroughs in generative artificial intelligence (GAI) and the emergence of transformer-based large language models such as Chat Generative Pre-trained Transformer (ChatGPT) have the potential to transform healthcare education, research, and clinical practice. This article examines the current trends in using GAI models in medicine, outlining their strengths and limitations. It is imperative to develop further consensus-based guidelines to govern the appropriate use of GAI, not only in medical education but also in research, scholarship, and clinical practice.
Affiliation(s)
- Vijay Rajput: Medical Education, Dr. Kiran C. Patel College of Allopathic Medicine, Nova Southeastern University, Fort Lauderdale, USA

179
Abstract
The OpenAI chatbot ChatGPT is an artificial intelligence (AI) application that uses state-of-the-art language processing AI. It can perform a vast number of tasks, from writing poetry and explaining complex quantum mechanics, to translating language and writing research articles with a human-like understanding and legitimacy. Since its initial release to the public in November 2022, ChatGPT has garnered considerable attention due to its ability to mimic the patterns of human language, and it has attracted billion-dollar investments from Microsoft and PricewaterhouseCoopers. The scope of ChatGPT and other large language models appears infinite, but there are several important limitations. This editorial provides an introduction to the basic functionality of ChatGPT and other large language models, their current applications and limitations, and the associated implications for clinical practice and research.
Affiliation(s)
- Kyle N Kunze: Department of Orthopaedic Surgery, Hospital for Special Surgery, New York, New York, USA
- Seong J Jang: Weill Cornell Medical College, New York, New York, USA
- Jonathan M Vigdorchik: Department of Orthopaedic Surgery, Hospital for Special Surgery, New York, New York, USA; Adult Reconstruction and Joint Replacement Service, Hospital for Special Surgery, New York, New York, USA
- Fares S Haddad: The Bone & Joint Journal, London, UK; University College London Hospitals and The NIHR Biomedical Research Centre at UCLH, London, UK; Princess Grace Hospital, London, UK

180
Májovský M, Černý M, Kasal M, Komarc M, Netuka D. Artificial Intelligence Can Generate Fraudulent but Authentic-Looking Scientific Medical Articles: Pandora's Box Has Been Opened. J Med Internet Res 2023; 25:e46924. [PMID: 37256685 PMCID: PMC10267787 DOI: 10.2196/46924]
Abstract
BACKGROUND Artificial intelligence (AI) has advanced substantially in recent years, transforming many industries and improving the way people live and work. In scientific research, AI can enhance the quality and efficiency of data analysis and publication. However, AI has also opened up the possibility of generating high-quality fraudulent papers that are difficult to detect, raising important questions about the integrity of scientific research and the trustworthiness of published papers. OBJECTIVE The aim of this study was to investigate the capabilities of current AI language models in generating high-quality fraudulent medical articles. We hypothesized that modern AI models can create highly convincing fraudulent papers that can easily deceive readers and even experienced researchers. METHODS This proof-of-concept study used ChatGPT (Chat Generative Pre-trained Transformer) powered by the GPT-3 (Generative Pre-trained Transformer 3) language model to generate a fraudulent scientific article related to neurosurgery. GPT-3 is a large language model developed by OpenAI that uses deep learning algorithms to generate human-like text in response to prompts given by users. The model was trained on a massive corpus of text from the internet and is capable of generating high-quality text in a variety of languages and on various topics. The authors posed questions and prompts to the model and refined them iteratively as the model generated the responses. The goal was to create a completely fabricated article including the abstract, introduction, material and methods, discussion, references, charts, etc. Once the article was generated, it was reviewed for accuracy and coherence by experts in the fields of neurosurgery, psychiatry, and statistics and compared to existing similar articles. 
RESULTS The study found that the AI language model can create a highly convincing fraudulent article that resembled a genuine scientific paper in terms of word usage, sentence structure, and overall composition. The AI-generated article included standard sections such as introduction, material and methods, results, and discussion, as well as a data sheet. It consisted of 1992 words and 17 citations, and the whole process of article creation took approximately 1 hour without any special training of the human user. However, there were some concerns and specific mistakes identified in the generated article, specifically in the references. CONCLUSIONS The study demonstrates the potential of current AI language models to generate completely fabricated scientific articles. Although the papers look sophisticated and seemingly flawless, expert readers may identify semantic inaccuracies and errors upon closer inspection. We highlight the need for increased vigilance and better detection methods to combat the potential misuse of AI in scientific research. At the same time, it is important to recognize the potential benefits of using AI language models in genuine scientific writing and research, such as manuscript preparation and language editing.
Affiliation(s)
- Martin Májovský: Department of Neurosurgery and Neurooncology, First Faculty of Medicine, Charles University, Prague, Czech Republic
- Martin Černý: Department of Neurosurgery and Neurooncology, First Faculty of Medicine, Charles University, Prague, Czech Republic
- Matěj Kasal: Department of Psychiatry, Faculty of Medicine in Pilsen, Charles University, Pilsen, Czech Republic
- Martin Komarc: Institute of Biophysics and Informatics, First Faculty of Medicine, Charles University, Prague, Czech Republic; Department of Methodology, Faculty of Physical Education and Sport, Charles University, Prague, Czech Republic
- David Netuka: Department of Neurosurgery and Neurooncology, First Faculty of Medicine, Charles University, Prague, Czech Republic

181
Ballester PL. Open Science and Software Assistance: Commentary on "Artificial Intelligence Can Generate Fraudulent but Authentic-Looking Scientific Medical Articles: Pandora's Box Has Been Opened". J Med Internet Res 2023; 25:e49323. [PMID: 37256656 DOI: 10.2196/49323]
Abstract
Májovský and colleagues have investigated the important issue of ChatGPT being used for the complete generation of scientific works, including fake data and tables. The issues behind why ChatGPT poses a significant concern to research reach far beyond the model itself. Once again, the lack of reproducibility and visibility of scientific works creates an environment where fraudulent or inaccurate work can thrive. What are some of the ways in which we can handle this new situation?
Affiliation(s)
- Pedro L Ballester: Neuroscience Graduate Program, McMaster University, Hamilton, ON, Canada

182
Sorin V, Klang E, Sklair-Levy M, Cohen I, Zippel DB, Balint Lahat N, Konen E, Barash Y. Large language model (ChatGPT) as a support tool for breast tumor board. NPJ Breast Cancer 2023; 9:44. [PMID: 37253791 PMCID: PMC10229606 DOI: 10.1038/s41523-023-00557-8]
Abstract
Large language models (LLM) such as ChatGPT have gained public and scientific attention. The aim of this study was to evaluate ChatGPT as a support tool for breast tumor board decision making. We inserted the clinical information of ten consecutive patients presented at a breast tumor board in our institution into ChatGPT-3.5 and asked the chatbot to recommend management. The results generated by ChatGPT were compared with the final recommendations of the tumor board and were also graded independently by two senior radiologists. Grading scores ranged from 1 (completely disagree) to 5 (completely agree) in three categories: summarization, recommendation, and explanation. The mean patient age was 49.4 years; 8/10 (80%) of patients had invasive ductal carcinoma, one patient (1/10, 10%) had ductal carcinoma in situ, and one patient (1/10, 10%) had a phyllodes tumor with atypia. In seven of ten cases (70%), ChatGPT's recommendations were similar to the tumor board's decisions. Mean scores for the chatbot's summarization, recommendation, and explanation were 3.7, 4.3, and 4.6, respectively, for the first reviewer and 4.3, 4.0, and 4.3, respectively, for the second. In this proof-of-concept study, we present initial results on the use of an LLM as a decision support tool in a breast tumor board. Given the significant advancements, it is warranted for clinicians to be familiar with the potential benefits and harms of this technology.
Affiliation(s)
- Vera Sorin: Department of Diagnostic Imaging, Chaim Sheba Medical Center, Tel Hashomer, Israel; Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel; DeepVision Lab, Chaim Sheba Medical Center, Tel Hashomer, Israel
- Eyal Klang: Department of Diagnostic Imaging, Chaim Sheba Medical Center, Tel Hashomer, Israel; Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel; DeepVision Lab, Chaim Sheba Medical Center, Tel Hashomer, Israel; Sami Sagol AI Hub, ARC, Chaim Sheba Medical Center, Tel Hashomer, Israel
- Miri Sklair-Levy: Department of Diagnostic Imaging, Chaim Sheba Medical Center, Tel Hashomer, Israel; Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel
- Israel Cohen: Department of Diagnostic Imaging, Chaim Sheba Medical Center, Tel Hashomer, Israel; Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel
- Douglas B Zippel: Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel; Department of General and Oncological Surgery - Surgery C, Chaim Sheba Medical Center, Tel Hashomer, Israel
- Nora Balint Lahat: Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel; Department of Pathology, Chaim Sheba Medical Center, Tel Hashomer, Israel
- Eli Konen: Department of Diagnostic Imaging, Chaim Sheba Medical Center, Tel Hashomer, Israel; Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel
- Yiftach Barash: Department of Diagnostic Imaging, Chaim Sheba Medical Center, Tel Hashomer, Israel; Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel; DeepVision Lab, Chaim Sheba Medical Center, Tel Hashomer, Israel

183
Sanmarchi F, Bucci A, Nuzzolese AG, Carullo G, Toscano F, Nante N, Golinelli D. A step-by-step researcher's guide to the use of an AI-based transformer in epidemiology: an exploratory analysis of ChatGPT using the STROBE checklist for observational studies. Zeitschrift fur Gesundheitswissenschaften = Journal of Public Health 2023:1-36. [PMID: 37361298 PMCID: PMC10215032 DOI: 10.1007/s10389-023-01936-y]
Abstract
Objective This study aims to investigate how AI-based transformers can support researchers in designing and conducting an epidemiological study. To accomplish this, we used ChatGPT to reformulate the STROBE recommendations into a list of questions to be answered by the transformer itself. We then qualitatively evaluated the coherence and relevance of the transformer's outputs. Study design Descriptive study. Methods We first chose a study to be used as a basis for the simulation. We then used ChatGPT to transform each STROBE checklist item into a specific prompt. Each answer to the respective prompt was evaluated by independent researchers in terms of coherence and relevance. Results The mean scores assigned to each prompt were heterogeneous. On average, the overall mean score was 3.6 out of 5.0 for coherence and 3.3 out of 5.0 for relevance. The lowest scores were assigned to items belonging to the Methods section of the checklist. Conclusions ChatGPT can be considered a valuable support for researchers conducting an epidemiological study in accordance with internationally recognized guidelines and standards. It is crucial for users to have knowledge of the subject and a critical mindset when evaluating the outputs. The potential benefits of AI in scientific research and publishing are undeniable, but it is crucial to address the risks and the ethical and legal consequences associated with its use.
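The evaluation described in this abstract (independent reviewers scoring each STROBE-derived prompt's output on coherence and relevance, 1 to 5) reduces to a simple aggregation per checklist section. A sketch with hypothetical placeholder scores, not the study's data:

```python
from statistics import mean

# Hypothetical 1-5 scores from two independent reviewers per checklist
# section (placeholder values, not the study's data).
scores = {
    "Title and abstract": {"coherence": [4, 5], "relevance": [4, 4]},
    "Introduction":       {"coherence": [4, 4], "relevance": [3, 4]},
    "Methods":            {"coherence": [3, 2], "relevance": [2, 3]},
    "Results":            {"coherence": [4, 3], "relevance": [3, 3]},
}

def section_means(scores, domain):
    """Mean reviewer score per STROBE section for one domain."""
    return {section: mean(vals[domain]) for section, vals in scores.items()}

coherence = section_means(scores, "coherence")
overall = mean(coherence.values())           # overall mean coherence
weakest = min(coherence, key=coherence.get)  # lowest-scoring section
```

Under these placeholder values the aggregation surfaces the Methods section as the weakest, mirroring the pattern the abstract reports; the same function applied with `domain="relevance"` yields the second overall mean.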
Affiliation(s)
- Francesco Sanmarchi: Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum – University of Bologna, Via San Giacomo 12, 40126 Bologna, Italy
- Andrea Bucci: Department of Economics and Law, University of Macerata, Macerata, Italy
- Gherardo Carullo: Department of Italian and Supranational Public Law, University of Milan, Milan, Italy
- Nicola Nante: Department of Molecular and Developmental Medicine, University of Siena, Siena, Italy (present address)
- Davide Golinelli: Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum – University of Bologna, Via San Giacomo 12, 40126 Bologna, Italy; Department of Molecular and Developmental Medicine, University of Siena, Siena, Italy (present address)

184
Sajjad M, Saleem R. Evolution of Healthcare with ChatGPT: A Word of Caution. Ann Biomed Eng 2023. [PMID: 37219697 DOI: 10.1007/s10439-023-03225-x]
Affiliation(s)
- Mariam Sajjad: Dow University of Health Sciences (DUHS), Karachi, Pakistan
- Rida Saleem: Dow University of Health Sciences (DUHS), Karachi, Pakistan

185
Darkhabani M, Alrifaai MA, Elsalti A, Dvir YM, Mahroum N. ChatGPT and autoimmunity - A new weapon in the battlefield of knowledge. Autoimmun Rev 2023:103360. [PMID: 37211242 DOI: 10.1016/j.autrev.2023.103360]
Abstract
The field of medical research has always been full of innovation and huge leaps that revolutionize the scientific world. In recent years, we have witnessed this firsthand through the evolution of Artificial Intelligence (AI), with ChatGPT being the most recent example. ChatGPT is a language chatbot that generates human-like text based on data from the internet. Viewed from a medical point of view, ChatGPT has shown the capability to compose medical texts similar to those written by experienced authors, to solve clinical cases, and to provide medical solutions, among other fascinating performances. Nevertheless, the value of these results, their limitations, and their clinical implications still need to be carefully evaluated. In our current paper on the role of ChatGPT in clinical medicine, particularly in the field of autoimmunity, we aimed to illustrate the implications of this technology alongside its latest uses and limitations. In addition, we included an expert opinion on the cyber-related aspects of the bot that potentially contribute to the risks attributed to its use, alongside proposed defense mechanisms. All of this while taking into consideration the rapid, continuous improvement that AI experiences on a daily basis.
Collapse
Affiliation(s)
- Mohammad Darkhabani
- International School of Medicine, Istanbul Medipol University, Istanbul, Turkey
| | | | - Abdulrahman Elsalti
- International School of Medicine, Istanbul Medipol University, Istanbul, Turkey
| | | | - Naim Mahroum
- International School of Medicine, Istanbul Medipol University, Istanbul, Turkey.
| |
Collapse
|
186
|
Piccoli GB. The future is now: artificial intelligence and beyond. J Nephrol 2023; 36:937-940. [PMID: 37178399 DOI: 10.1007/s40620-023-01671-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
|
187
|
Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell 2023; 6:1169595. [PMID: 37215063 PMCID: PMC10192861 DOI: 10.3389/frai.2023.1169595] [Citation(s) in RCA: 492] [Impact Index Per Article: 246.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2023] [Accepted: 04/14/2023] [Indexed: 05/24/2023] Open
Abstract
This paper presents an analysis of the advantages, limitations, ethical considerations, future prospects, and practical applications of ChatGPT and artificial intelligence (AI) in the healthcare and medical domains. ChatGPT is an advanced language model that uses deep learning techniques to produce human-like responses to natural language inputs. It is part of the family of generative pre-trained transformer (GPT) models developed by OpenAI and is currently one of the largest publicly available language models. ChatGPT is capable of capturing the nuances and intricacies of human language, allowing it to generate appropriate and contextually relevant responses across a broad spectrum of prompts. The potential applications of ChatGPT in the medical field range from identifying potential research topics to assisting professionals in clinical and laboratory diagnosis. Additionally, it can be used to help medical students, doctors, nurses, and all members of the healthcare fraternity stay informed about updates and new developments in their respective fields. The development of virtual assistants to aid patients in managing their health is another important application of ChatGPT in medicine. Despite its potential applications, the use of ChatGPT and other AI tools in medical writing also poses ethical and legal concerns. These include possible infringement of copyright laws, medico-legal complications, and the need for transparency in AI-generated content. In conclusion, ChatGPT has several potential applications in the medical and healthcare fields. However, these applications come with several limitations and ethical considerations, which are presented in detail along with future prospects in medicine and healthcare.
Collapse
Affiliation(s)
- Tirth Dave
- Internal Medicine, Bukovinian State Medical University, Chernivtsi, Ukraine
| | | | - Satyam Singh
- GSVM Medical College, Kanpur, Uttar Pradesh, India
| |
Collapse
|
188
|
Mondal H, Mondal S. How artificial intelligence can help researchers in the promotion of their articles? Indian J Ophthalmol 2023; 71:2293-2294. [PMID: 37202979 PMCID: PMC10391428 DOI: 10.4103/ijo.ijo_296_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/20/2023] Open
Affiliation(s)
- Himel Mondal
- Department of Physiology, All India Institute of Medical Sciences, Deoghar, Jharkhand, India
| | - Shaikat Mondal
- Department of Physiology, Raiganj Government Medical College and Hospital, West Bengal, India
| |
Collapse
|
189
|
Seth I, Sinkjær Kenney P, Bulloch G, Hunter-Smith DJ, Bo Thomsen J, Rozen WM. Artificial or Augmented Authorship? A Conversation with a Chatbot on Base of Thumb Arthritis. Plast Reconstr Surg Glob Open 2023; 11:e4999. [PMID: 37250832 PMCID: PMC10219695 DOI: 10.1097/gox.0000000000004999] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Accepted: 03/27/2023] [Indexed: 05/31/2023]
Abstract
ChatGPT is an open artificial intelligence chatbot that could revolutionize academia and augment research writing. This study held an open conversation with ChatGPT and invited the platform to evaluate this article through a series of five questions on base of thumb arthritis, to test whether its contributions merely add unusable artificial input or help augment the quality of the article. The information ChatGPT-3 provided was accurate, albeit surface-level, and lacked the analytical ability to identify important limitations concerning base of thumb arthritis, which would not be conducive to generating creative ideas and solutions in plastic surgery. ChatGPT failed to provide relevant references and even "created" references instead of indicating its inability to perform the task. This highlights that, as an AI text generator for medical publishing, ChatGPT-3 should be used cautiously.
Collapse
Affiliation(s)
- Ishith Seth
- From the Faculty of Science, Medicine, and Health, Monash University, Victoria, Australia
- Faculty of Science, Medicine, and Health, University of Melbourne, Victoria, Australia
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia
- Peninsula Clinical School, Central Clinical School at Monash University, The Alfred Centre, Melbourne, Victoria, Australia
| | - Peter Sinkjær Kenney
- Department of Plastic Surgery, Odense University Hospital, Odense, Denmark
- Department of Plastic and Breast Surgery, Aarhus University Hospital, Aarhus N, Denmark
| | - Gabriella Bulloch
- Faculty of Science, Medicine, and Health, University of Melbourne, Victoria, Australia
| | - David J. Hunter-Smith
- From the Faculty of Science, Medicine, and Health, Monash University, Victoria, Australia
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia
- Peninsula Clinical School, Central Clinical School at Monash University, The Alfred Centre, Melbourne, Victoria, Australia
| | - Jørn Bo Thomsen
- Department of Plastic Surgery, Odense University Hospital, Odense, Denmark
- Research unit for Plastic Surgery, Department of Clinical Research, University of Southern Denmark, Odense, Denmark
| | - Warren M. Rozen
- From the Faculty of Science, Medicine, and Health, Monash University, Victoria, Australia
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia
- Peninsula Clinical School, Central Clinical School at Monash University, The Alfred Centre, Melbourne, Victoria, Australia
| |
Collapse
|
190
|
Alberts IL, Mercolli L, Pyka T, Prenosil G, Shi K, Rominger A, Afshar-Oromieh A. Large language models (LLM) and ChatGPT: what will the impact on nuclear medicine be? Eur J Nucl Med Mol Imaging 2023; 50:1549-1552. [PMID: 36892666 PMCID: PMC9995718 DOI: 10.1007/s00259-023-06172-w] [Citation(s) in RCA: 41] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Accepted: 02/19/2023] [Indexed: 03/10/2023]
Affiliation(s)
- Ian L Alberts
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland.
| | - Lorenzo Mercolli
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland
| | - Thomas Pyka
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland
| | - George Prenosil
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland
| | - Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland
| | - Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland
| | - Ali Afshar-Oromieh
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland
| |
Collapse
|
191
|
Cheng K, Sun Z, He Y, Gu S, Wu H. The potential impact of ChatGPT/GPT-4 on surgery: will it topple the profession of surgeons? Int J Surg 2023; 109:1545-1547. [PMID: 37037587 PMCID: PMC10389652 DOI: 10.1097/js9.0000000000000388] [Citation(s) in RCA: 30] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Accepted: 03/27/2023] [Indexed: 04/12/2023]
Affiliation(s)
- Kunming Cheng
- Department of Intensive Care Unit, The Second Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan
| | - Zaijie Sun
- Department of Orthopaedic Surgery, Xiangyang Central Hospital, Affiliated Hospital of Hubei University of Arts and Science, Xiangyang
| | - Yongbin He
- School of Sports Medicine and Rehabilitation, Beijing Sport University, Beijing
| | - Shuqin Gu
- Duke Human Vaccine Institute, Duke University Medical Center
| | - Haiyang Wu
- Clinical College of Neurology, Neurosurgery and Neurorehabilitation, Tianjin Medical University, Tianjin, People’s Republic of China
- Duke Molecular Physiology Institute, Duke University School of Medicine, Durham, North Carolina, USA
| |
Collapse
|
192
|
De Angelis L, Baglivo F, Arzilli G, Privitera GP, Ferragina P, Tozzi AE, Rizzo C. ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Front Public Health 2023; 11:1166120. [PMID: 37181697 PMCID: PMC10166793 DOI: 10.3389/fpubh.2023.1166120] [Citation(s) in RCA: 182] [Impact Index Per Article: 91.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Accepted: 04/11/2023] [Indexed: 05/16/2023] Open
Abstract
Large Language Models (LLMs) have recently gained attention with the release of ChatGPT, a user-centered chatbot released by OpenAI. In this perspective article, we retrace the evolution of LLMs to understand the revolution brought by ChatGPT in the artificial intelligence (AI) field. The opportunities offered by LLMs in supporting scientific research are multiple, and various models have already been tested in Natural Language Processing (NLP) tasks in this domain. The impact of ChatGPT has been huge for the general public and the research community, with many authors using the chatbot to write part of their articles and some papers even listing ChatGPT as an author. Alarming ethical and practical challenges emerge from the use of LLMs, particularly in the medical field, given the potential impact on public health. Infodemics are a trending topic in public health, and the ability of LLMs to rapidly produce vast amounts of text could amplify the spread of misinformation at an unprecedented scale. This could create an "AI-driven infodemic," a novel public health threat. Policies to counter this phenomenon need to be elaborated rapidly; the inability to accurately detect artificial-intelligence-produced text remains an unresolved issue.
Collapse
Affiliation(s)
- Luigi De Angelis
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
| | - Francesco Baglivo
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
| | - Guglielmo Arzilli
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
| | - Gaetano Pierpaolo Privitera
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Training Office, National Institute of Health, Rome, Italy
| | - Paolo Ferragina
- Department of Computer Science, University of Pisa, Pisa, Italy
| | - Alberto Eugenio Tozzi
- Fetal, Neonatal and Cardiologic Science Research Area, Predictive and Preventive Medicine Research Unit, Bambino Gesù Children’s Hospital, IRCCS, Rome, Italy
| | - Caterina Rizzo
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
| |
Collapse
|
193
|
Corsello A, Santangelo A. May Artificial Intelligence Influence Future Pediatric Research?-The Case of ChatGPT. Children (Basel) 2023; 10:757. [PMID: 37190006 PMCID: PMC10136583 DOI: 10.3390/children10040757] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Revised: 04/17/2023] [Accepted: 04/19/2023] [Indexed: 05/17/2023]
Abstract
BACKGROUND In recent months, there has been growing interest in the potential of artificial intelligence (AI) to revolutionize various aspects of medicine, including research, education, and clinical practice. ChatGPT represents a leading AI language model, with possible unpredictable effects on the quality of future medical research, including clinical decision-making, medical education, drug development, and research outcomes. AIM AND METHODS In this interview with ChatGPT, we explore the potential impact of AI on future pediatric research. Our discussion covers a range of topics, including the potential positive effects of AI, such as improved clinical decision-making, enhanced medical education, faster drug development, and better research outcomes. We also examine potential negative effects, such as bias and fairness concerns, safety and security issues, overreliance on technology, and ethical considerations. CONCLUSIONS As AI continues to advance, it is crucial to remain vigilant about the possible risks and limitations of these technologies and to consider the implications of their use in the medical field. The development of AI language models represents a significant advancement in artificial intelligence and has the potential to revolutionize daily clinical practice in every branch of medicine, both surgical and clinical. Ethical and social implications must also be considered to ensure that these technologies are used in a responsible and beneficial manner.
Collapse
Affiliation(s)
- Antonio Corsello
- Department of Clinical Sciences and Community Health, University of Milan, 20122 Milan, Italy
| | - Andrea Santangelo
- Department of Pediatrics, Santa Chiara Hospital, University of Pisa, 56126 Pisa, Italy
| |
Collapse
|
194
|
Nguyen Y, Costedoat-Chalumeau N. [Artificial intelligence and internal medicine: The example of hydroxychloroquine according to ChatGPT]. Rev Med Interne 2023; 44:218-226. [PMID: 37062612 DOI: 10.1016/j.revmed.2023.03.017] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Revised: 03/22/2023] [Accepted: 03/31/2023] [Indexed: 04/18/2023]
Abstract
Artificial intelligence (AI) using deep learning is revolutionizing several fields, including medicine, with a wide range of applications. Available since the end of 2022, ChatGPT is a conversational AI, or "chatbot," that uses artificial intelligence to dialogue with its users on any topic. Through the example of hydroxychloroquine (HCQ), we discuss its use by patients, clinicians, and researchers, and examine its performance and limitations, particularly with respect to algorithmic bias. Although AI tools using deep learning do not (at least for the moment) replace the expertise and experience of a clinician, they have the potential to improve or simplify our daily practice.
Collapse
Affiliation(s)
- Y Nguyen
- Service de médecine interne, hôpital Cochin, AP-HP centre, Université Paris cité, 75014 Paris, France; Centre de recherche en épidémiologie et statistiques (CRESS), unité Inserm 1153, Université de Paris cité, Paris, France.
| | - N Costedoat-Chalumeau
- Service de médecine interne, hôpital Cochin, AP-HP centre, Université Paris cité, 75014 Paris, France; Centre de recherche en épidémiologie et statistiques (CRESS), unité Inserm 1153, Université de Paris cité, Paris, France
| |
Collapse
|
195
|
Sevgi UT, Erol G, Doğruel Y, Sönmez OF, Tubbs RS, Güngor A. The role of an open artificial intelligence platform in modern neurosurgical education: a preliminary study. Neurosurg Rev 2023; 46:86. [PMID: 37059815 DOI: 10.1007/s10143-023-01998-2] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 02/09/2023] [Accepted: 04/08/2023] [Indexed: 04/16/2023]
Abstract
The use of artificial intelligence in neurosurgical education has been growing in recent times. ChatGPT, a free and easily accessible language model, has been gaining popularity as an alternative education method. It is necessary to explore the potential of this program in neurosurgical education and to evaluate its reliability. This study aimed to assess the reliability of ChatGPT by posing various questions to the chat engine, to explore how it can contribute to neurosurgical education by preparing case reports or questions, and to evaluate its contributions to the writing of academic articles. The results of the study showed that while ChatGPT provided intriguing and interesting responses, it should not be considered a dependable source of information. The absence of citations for scientific queries raises doubts about the credibility of the answers provided. Therefore, it is not advisable to rely solely on ChatGPT as an educational resource. With further updates and more specific prompts, it may be possible to improve its accuracy. In conclusion, while ChatGPT has potential as an educational tool, its reliability needs to be further evaluated and improved before it can be widely adopted in neurosurgical education.
Collapse
Affiliation(s)
- Umut Tan Sevgi
- Department of Neurosurgery, University of Health Sciences, Tepecik Training and Research Hospital, Izmir, Turkey
- Department of Neurosurgery, Yeditepe University Microsurgical Neuroanatomy Laboratory, Istanbul, Turkey
| | - Gökberk Erol
- Department of Neurosurgery, Yeditepe University Microsurgical Neuroanatomy Laboratory, Istanbul, Turkey
- Department of Neurosurgery, Faculty of Medicine, Gazi University, Ankara, Turkey
| | - Yücel Doğruel
- Department of Neurosurgery, Yeditepe University Microsurgical Neuroanatomy Laboratory, Istanbul, Turkey
- The Neurosurgical Atlas, Carmel, IN, USA
| | - Osman Fikret Sönmez
- Department of Neurosurgery, University of Health Sciences, Tepecik Training and Research Hospital, Izmir, Turkey
| | - Richard Shane Tubbs
- Department of Neurosurgery, Tulane Center for Clinical Neurosciences, Tulane University School of Medicine, New Orleans, LA, USA
- Department of Anatomical Sciences, St. George's University, St. George's, West Indies, Grenada
- Department of Structural and Cellular Biology, Tulane University School of Medicine, New Orleans, LA, USA
- Department of Neurosurgery and Ochsner Neuroscience Institute, Ochsner Health System, New Orleans, LA, USA
- Department of Neurology, Tulane University School of Medicine, New Orleans, LA, USA
| | - Abuzer Güngor
- Department of Neurosurgery, Yeditepe University Microsurgical Neuroanatomy Laboratory, Istanbul, Turkey.
- Department of Neurosurgery, University of Health Sciences, Bakirkoy Research and Training Hospital for Neurology, Neurosurgery and Psychiatry, Istanbul, Turkey.
| |
Collapse
|
196
|
Zimmerman A. A Ghostwriter for the Masses: ChatGPT and the Future of Writing. Ann Surg Oncol 2023; 30:3170-3173. [PMID: 37029868 DOI: 10.1245/s10434-023-13436-0] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Accepted: 02/12/2023] [Indexed: 04/09/2023]
|
197
|
Wittmann J. Science Fact vs Science Fiction: A ChatGPT Immunological Review Experiment Gone Awry. Immunol Lett 2023; 256-257:42-47. [PMID: 37031907 DOI: 10.1016/j.imlet.2023.04.002] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 03/23/2023] [Accepted: 04/03/2023] [Indexed: 04/11/2023]
Abstract
Artificial intelligence (AI) has made great progress in recent years. The latest chatbot to make a splash is ChatGPT. To see if this type of AI could also be helpful in creating an immunological review article, I put a planned review on different classes of small RNAs during murine B cell development to the test. Although the general wording sounded very polished and convincing, ChatGPT encountered great difficulties when asked for details and references and made many incorrect statements, leading me to conclude that this kind of AI is not (yet?) suitable for assisting in the writing of scientific articles.
Collapse
Affiliation(s)
- Jürgen Wittmann
- Division of Molecular Immunology, Department of Internal Medicine III, Nikolaus-Fiebiger-Center of Molecular Medicine (NFZ), Friedrich-Alexander-Universität Erlangen-Nürnberg, Glückstraße 6, D-91054 Erlangen, Germany.
| |
Collapse
|
198
|
Valentín Bravo FJ, Mateos Álvarez E. Impact of artificial intelligence and language models in medicine. Arch Soc Esp Oftalmol 2023:S2173-5794(23)00046-4. [PMID: 37031735 DOI: 10.1016/j.oftale.2023.04.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/11/2023]
Affiliation(s)
| | - E Mateos Álvarez
- Hospital Clínico Universitario de Valladolid, Valladolid, Castilla y León, Spain
| |
Collapse
|
199
|
Saliba T, Boitsios G. ChatGPT, a radiologist's perspective. Pediatr Radiol 2023; 53:813-815. [PMID: 37017719 DOI: 10.1007/s00247-023-05656-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Revised: 03/13/2023] [Accepted: 03/23/2023] [Indexed: 04/06/2023]
Affiliation(s)
- Thomas Saliba
- Hopital Universitaire des Enfants Reine Fabiola (HUDERF), Av. Jean Joseph Crocq 15, 1020, Brussels, Belgium.
| | - Grammatina Boitsios
- Hopital Universitaire des Enfants Reine Fabiola (HUDERF), Av. Jean Joseph Crocq 15, 1020, Brussels, Belgium
| |
Collapse
|
200
|
Affiliation(s)
- Keri Draganic
- Keri Draganic is the Director of the Adult-Gerontology Acute Care Nurse Practitioner Program at The University of Texas at Arlington College of Nursing & Health Innovation, Arlington, Tex
| |
Collapse
|