1. Eravsar NB, Aydin M, Eryilmaz A, Turemis C, Surucu S, Jimenez AE. Is ChatGPT a more academic source than Google searches for patient questions about hip arthroscopy? An analysis of the most frequently asked questions. J ISAKOS 2025;12:100892. [PMID: 40324563] [DOI: 10.1016/j.jisako.2025.100892]
Abstract
OBJECTIVES The purpose of this study was to compare the reliability and accuracy of responses provided to patients about hip arthroscopy (HA) by Chat Generative Pre-Trained Transformer (ChatGPT), an artificial intelligence (AI) and large language model (LLM) online program, with those obtained through a contemporary Google Search for frequently asked questions (FAQs) regarding HA. METHODS "HA" was entered into Google Search and ChatGPT, and the 15 most common FAQs and their answers were determined. In Google Search, the FAQs were obtained from the "People also ask" section; ChatGPT was queried to provide the 15 most common FAQs and subsequent answers. The Rothwell system was used to group the questions under 10 subheadings, and the responses from ChatGPT and Google Search were compared. RESULTS Timeline of recovery (23.3%) and technical details (20%) were the most common question categories. ChatGPT produced significantly more content in the technical details category than Google (33.3% vs. 6.6%; p = 0.0455). Answer sources were most often academic in nature for both Google web search (46.6%) and ChatGPT (93.3%), and ChatGPT provided significantly more academic references than Google web searches (93.3% vs. 46.6%). Conversely, Google web search cited medical practice references (20% vs. 0%), single-surgeon websites (26% vs. 0%), and government websites (6% vs. 0%) more frequently than ChatGPT. CONCLUSION ChatGPT performed similarly to Google searches for information about HA. Compared to Google, ChatGPT provided significantly more academic sources for its answers to patient questions. LEVEL OF EVIDENCE Level IV.
Affiliation(s)
- Necati Bahadir Eravsar: Johns Hopkins University, Department of Orthopaedic Surgery, Baltimore, MD, USA; S.B.U. Haydarpasa Numune Training and Research Hospital, Istanbul 34668, Turkey
- Mahmud Aydin: Sisli Memorial Hospital, Istanbul 34384, Turkey
- Atahan Eryilmaz: S.B.U. Haseki Training and Research Hospital, Istanbul 34096, Turkey
- Serkan Surucu: Yale University, Department of Orthopaedics and Rehabilitation, New Haven, CT 06510, USA
- Andrew E Jimenez: Yale University, Department of Orthopaedics and Rehabilitation, New Haven, CT 06510, USA
2. Spaan J, Streepy J, Hodakowski A, Hummel A, Mowers C, Schundler S, McCormick JR, Riboh J, Piasecki D, Chahla J. Individuals Frequently Search Google With Questions About the Management of Meniscal Tears and the Indications for and Technical Details of Surgery but the Quality of the Information Is Suboptimal. Arthrosc Sports Med Rehabil 2025;7:101061. [PMID: 40297101] [PMCID: PMC12034050] [DOI: 10.1016/j.asmr.2024.101061]
Abstract
Purpose To analyze the frequently asked questions that patients search online regarding meniscal tears and meniscal surgery and evaluate the quality of websites used to answer these common queries. Methods This study used Google's People Also Ask function to extract the most common 300 questions and associated Web pages regarding meniscal tears and meniscal surgery. Questions on both meniscal tear and meniscal surgery were categorized using the Rothwell classification, and websites were evaluated with The Journal of the American Medical Association (JAMA) criteria. Results The Rothwell classification of questions on meniscal tear/surgery was 54.0%/55% fact, 37.7%/30.0% policy, and 8.3%/15.0% value. The meniscal tear cohort asked significantly more questions related to policy (P = .047), whereas the meniscal surgery cohort asked significantly more questions about value (P = .011). Academic (31.7% and 27.3%), medical practice (23.0% and 25.3%), and single-surgeon (12.3% and 13.3%) websites were the most common types of sites encountered. The mean total JAMA score was 1.3 of 4, with journals (mean, 3.4) having the highest score. Single-surgeon practice (mean, 0.5) and legal (mean, 0) sites had the lowest JAMA scores. The most frequently encountered query in the meniscal tear cohort was "What are three signs of a meniscus tear in the knee?" In contrast, in the meniscal surgery cohort, it was a tie between "What is the fastest way to recover from meniscus surgery?" and "Should meniscus surgery be done over 65?" Conclusions The quality of online information related to meniscal tears and surgery is often suboptimal based on objective measures of value. Individuals frequently search for insights into indications, management, and technical details. Clinical Relevance Understanding common themes in online searches can provide valuable insights that could improve patient education. Surgeons can use this information to anticipate potential questions, establish appropriate patient expectations, and enhance informed decisions.
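The JAMA benchmark used in this entry (and in several entries below) awards one point each for authorship, attribution, disclosure, and currency, for a total score of 0 to 4. As a minimal sketch of the tally, assuming invented example sites and flags (the study itself scored websites by hand):

```python
# Hedged sketch: tallying the four JAMA benchmark criteria for a website.
# The criteria names are standard; the example sites and flags are invented.
from dataclasses import dataclass

@dataclass
class WebsiteAudit:
    name: str
    authorship: bool   # authors/contributors and their credentials listed
    attribution: bool  # references and sources for the content cited
    disclosure: bool   # ownership, sponsorship, conflicts of interest stated
    currency: bool     # dates of posting and last update given

    def jama_score(self) -> int:
        return sum([self.authorship, self.attribution, self.disclosure, self.currency])

sites = [
    WebsiteAudit("example-journal.org", True, True, True, True),      # hypothetical
    WebsiteAudit("example-practice.com", False, False, False, True),  # hypothetical
]
for s in sites:
    print(f"{s.name}: JAMA score {s.jama_score()}/4")
mean_score = sum(s.jama_score() for s in sites) / len(sites)
print(f"Mean JAMA score: {mean_score:.1f} of 4")
```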
Affiliation(s)
- Jonathan Spaan: Department of Orthopedic Surgery, Rush University Medical Center, Chicago, Illinois, U.S.A
- John Streepy: Department of Orthopedic Surgery, Rush University Medical Center, Chicago, Illinois, U.S.A
- Alexander Hodakowski: Department of Orthopedic Surgery, Rush University Medical Center, Chicago, Illinois, U.S.A
- Amelia Hummel: Department of Orthopedic Surgery, Rush University Medical Center, Chicago, Illinois, U.S.A
- Colton Mowers: Department of Orthopedic Surgery, Rush University Medical Center, Chicago, Illinois, U.S.A
- Sabrina Schundler: Department of Orthopedic Surgery, Rush University Medical Center, Chicago, Illinois, U.S.A
- Johnathon R. McCormick: Department of Orthopedic Surgery, Rush University Medical Center, Chicago, Illinois, U.S.A
- Jonathan Riboh: OrthoCarolina Sports Medicine Center, Charlotte, North Carolina, U.S.A
- Dana Piasecki: OrthoCarolina Sports Medicine Center, Charlotte, North Carolina, U.S.A
- Jorge Chahla: Department of Orthopedic Surgery, Rush University Medical Center, Chicago, Illinois, U.S.A
3. Henry JP, Tamer P, Scuderi GR. Internet-Based Patient Portals Increase Patient Connectivity Following Total Knee Arthroplasty. J Knee Surg 2025. [PMID: 40169134] [DOI: 10.1055/a-2542-7427]
Abstract
Many healthcare-related processes have undergone substantial transformation by the internet since the turn of the century. This technological revolution has fostered a fundamental shift from medical paternalism to patient autonomy and empowerment via a "patient-centric approach." Patient portals, or internet-enabled access to an electronic medical record, permit patients to access, manage, and share their health-related information. Patient connectivity following total knee arthroplasty (TKA) has the potential to positively influence overall outcomes, patient experience, and satisfaction. This review sought to understand current trends in patient portal usage, modalities of connectivity, and their implications following TKA. A systematic literature review was performed by searching PubMed and Google Scholar. Articles specific to portal usage and connectivity after TKA or total joint arthroplasty were subsequently identified for further review. Patient portals and internet-based digital connectivity platforms enable physicians, team members, and patients to communicate in the perioperative period both directly and indirectly. Communication can be through web-based patient portals, messaging services/apps, preprogrammed alerts (e.g., mobile applications or wearable devices), audio mediums, or videoconferencing. The spectrum and utilization of available patient engagement platforms continues to expand as the importance and implications of patient engagement and connectivity continue to be elucidated. Connectivity through patient portals or other mediums will continue to have an expanding role in all aspects of orthopedic surgery, patient care, and engagement. This includes preoperative education, postoperative rehabilitation, patient care, and, perhaps most importantly, collection of outcome measures. The level of evidence is V (expert opinion).
Affiliation(s)
- James P Henry: Department of Orthopaedic Surgery, Huntington Hospital, Northwell Health, Huntington, New York
- Pierre Tamer: Department of Orthopaedic Surgery, Lenox Hill Hospital, Northwell Health, New York, New York
- Giles R Scuderi: Department of Orthopaedic Surgery, Lenox Hill Hospital, Northwell Health, New York, New York
4. Hunter N, Payne C, Vanodia R, Mundluru S. Assessing Online Material Related to Scoliosis: What Do Patients Want to Know? Musculoskeletal Care 2025;23:e70069. [PMID: 39988570] [DOI: 10.1002/msc.70069]
Abstract
INTRODUCTION No data describe what patients search for related to scoliosis. We aimed to quantify the Google search volume for scoliosis, identify the most sought-after information, and evaluate the associated online resources. METHODS Search volume and 'People Also Ask' (PAA) questions were documented for the following terms: scoliosis, idiopathic scoliosis, adolescent idiopathic scoliosis, congenital scoliosis, and neuromuscular scoliosis. PAA questions were categorised based on intent, and websites were categorised by source. Quality and readability of the sources were determined using the JAMA criteria, Flesch Reading Ease (FRE) score, and Flesch-Kincaid Grade Level (FKGL). ETHICAL APPROVAL This investigation was exempted from Institutional Review Board approval. RESULTS Search volume for 'scoliosis' has significantly increased since 2015, with an average monthly search volume of 219,055 (p < 0.0001). A total of 182 PAA questions were extracted; most were related to technical details, the evaluation of current treatments, or alternative treatments. Academic websites were the most common resource, followed by medical practices and government websites. Only 4% of websites met the criteria for universal readability. DISCUSSION AND CONCLUSION Scoliosis is a relatively common condition and a popular topic among Google users. However, only 4% of online resources provided by Google were written at an appropriate reading level. The lack of patient-friendly resources related to scoliosis is concerning, particularly given that this patient population has been shown to search for information online at twice the rate of others. These data provide a framework for healthcare professionals to begin addressing common questions related to scoliosis in a patient-centred manner.
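For reference, the two readability measures used here (and in several entries below) are computed from average sentence length and syllables per word; the standard formulas, not specific to this study, are:

```latex
% Flesch Reading Ease (FRE): higher scores indicate easier text (60-70 is roughly plain English)
\mathrm{FRE} = 206.835 - 1.015\,\frac{\text{total words}}{\text{total sentences}} - 84.6\,\frac{\text{total syllables}}{\text{total words}}

% Flesch-Kincaid Grade Level (FKGL): approximate US school grade required
\mathrm{FKGL} = 0.39\,\frac{\text{total words}}{\text{total sentences}} + 11.8\,\frac{\text{total syllables}}{\text{total words}} - 15.59
```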
Affiliation(s)
- Nathaniel Hunter: McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Cole Payne: McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Rohini Vanodia: Department of Orthopedic Surgery, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Surya Mundluru: Department of Orthopedic Surgery, The University of Texas Health Science Center at Houston, Houston, Texas, USA
5. Hunter N, Allen D, Xiao D, Cox M, Jain K. Patient education resources for oral mucositis: a Google search and ChatGPT analysis. Eur Arch Otorhinolaryngol 2025;282:1609-1618. [PMID: 39198303] [DOI: 10.1007/s00405-024-08913-5]
Abstract
PURPOSE Oral mucositis affects 90% of patients receiving chemotherapy or radiation for head and neck malignancies. Many patients use the internet to learn about their condition and treatments; however, the quality of online resources is not guaranteed. Our objective was to determine the most common Google searches related to "oral mucositis" and assess the quality and readability of available resources compared to ChatGPT-generated responses. METHODS Data related to Google searches for "oral mucositis" were analyzed. People Also Ask (PAA) questions (generated by Google) related to searches for "oral mucositis" were documented. Google resources were rated on quality, understandability, ease of reading, and reading grade level using the Journal of the American Medical Association benchmark criteria, Patient Education Materials Assessment Tool, Flesch Reading Ease Score, and Flesch-Kincaid Grade Level, respectively. ChatGPT-generated responses to the most popular PAA questions were rated using identical metrics. RESULTS Google search popularity for "oral mucositis" has significantly increased since 2004. Of the Google resources, 78% answered the associated PAA question and 6% met the criteria for universal readability. All of the ChatGPT-generated responses answered the prompt, and 20% met the criteria for universal readability when asked to write for the appropriate audience. CONCLUSION Most resources provided by Google do not meet the criteria for universal readability. When prompted specifically, ChatGPT-generated responses were consistently more readable than Google resources. After verification of accuracy by healthcare professionals, ChatGPT could be a reasonable alternative for generating universally readable patient education resources.
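Scores like these can be reproduced programmatically; a minimal sketch using the open-source textstat package (assuming `pip install textstat`; the sample passage is invented):

```python
# Hedged sketch: computing the readability metrics used in studies like this one.
# Requires: pip install textstat
import textstat

passage = (
    "Oral mucositis is soreness and ulcers in the mouth. It is a common "
    "side effect of chemotherapy and radiation for head and neck cancer."
)  # hypothetical patient-education snippet

fres = textstat.flesch_reading_ease(passage)   # higher = easier to read
fkgl = textstat.flesch_kincaid_grade(passage)  # approximate US grade level

print(f"Flesch Reading Ease: {fres:.1f}")
print(f"Flesch-Kincaid Grade Level: {fkgl:.1f}")
# "Universal readability" in these studies is usually taken as roughly
# a 6th-8th grade level (FKGL <= 8, FRES >= 60-70).
```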
Affiliation(s)
- Nathaniel Hunter: McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, USA
- David Allen: Department of Otorhinolaryngology-Head and Neck Surgery, The University of Texas Health Science Center at Houston, Houston, TX, USA
- Daniel Xiao: McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, USA
- Madisyn Cox: McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, USA
- Kunal Jain: Department of Otorhinolaryngology-Head and Neck Surgery, The University of Texas Health Science Center at Houston, Houston, TX, USA
6. von Sneidern M, Saeedi A, Varelas AN, Eytan DF. Characterizing the Online Discourse on Facial Paralysis: What Patients Are Asking and Where They Find Answers. Facial Plast Surg Aesthet Med 2025;27:129-135. [PMID: 39093987] [DOI: 10.1089/fpsam.2023.0277]
Abstract
Background: With the rising popularity of online search tools, patients seeking information on facial palsy are increasingly turning to the Internet for medical knowledge. Objective: To categorize the most common online questions about Bell's palsy or facial paralysis and the sources that provide answers to those queries. Methods: Query volumes for terms pertaining to facial palsy were obtained using Google Search trends. The top 40 keywords associated with the terms "Bell's palsy" and "facial paralysis" were extracted. People Also Ask (PAA) Questions (a Google search engine response page feature) were used to identify the top questions associated with each keyword. Results: A total of 151 PAA Questions pertaining to the top 40 keywords associated with "Bell's palsy" and "facial paralysis" were identified. Etiology questions were most frequent (n = 50, 33.1%), while those pertaining to treatment were most accessible (119.5 average search engine response pages/question, 35.5%). Most sources were academic (n = 81, 53.6%). Medical practice group sites were most accessible (211.9 average search engine response pages/website, 44.8%). Conclusion: Most PAA questions pertained to etiology and were sourced by academic sites. Questions regarding treatment and medical practice sites appeared on more search engine response pages when compared with all other categories.
Affiliation(s)
- Manuela von Sneidern: Department of Otolaryngology-Head and Neck Surgery, NYU Grossman School of Medicine, New York, New York, USA
- Arman Saeedi: Department of Otolaryngology-Head and Neck Surgery, NYU Grossman School of Medicine, New York, New York, USA
- Antonios N Varelas: Department of Otolaryngology-Head and Neck Surgery, Division of Facial Plastic and Reconstructive Surgery, New York University Grossman School of Medicine, New York, New York, USA
- Danielle F Eytan: Department of Otolaryngology-Head and Neck Surgery, Division of Facial Plastic and Reconstructive Surgery, New York University Grossman School of Medicine, New York, New York, USA
7. Kolac UC, Karademir OM, Ayik G, Kaymakoglu M, Familiari F, Huri G. Can popular AI large language models provide reliable answers to frequently asked questions about rotator cuff tears? JSES Int 2025;9:390-397. [PMID: 40182256] [PMCID: PMC11962600] [DOI: 10.1016/j.jseint.2024.11.012]
Abstract
Background Rotator cuff tears are common upper-extremity injuries that significantly impair shoulder function, leading to pain, reduced range of motion, and a decrease in quality of life. With the increasing reliance on artificial intelligence large language models (AI LLMs) for health information, it is crucial to evaluate the quality and readability of the information provided by these models. Methods A pool of 50 questions related to rotator cuff tears was generated by querying popular AI LLMs (ChatGPT 3.5, ChatGPT 4, Gemini, and Microsoft CoPilot) and using Google search. Responses from the AI LLMs were then saved and evaluated. Information quality was assessed with the DISCERN tool and a Likert scale; readability was assessed with the Patient Education Materials Assessment Tool for Printable Materials (PEMAT) Understandability Score and the Flesch Reading Ease Score. Two orthopedic surgeons assessed the responses, and discrepancies were resolved by a senior author. Results Of 198 answers, the median DISCERN score was 40, with 56.6% considered sufficient. The Likert scale showed 96% sufficiency. The median PEMAT Understandability score was 83.33, with 77.3% sufficiency, while the Flesch Reading Ease score had a median of 42.05 with 88.9% sufficiency. Overall, 39.8% of the answers were sufficient in both information quality and readability. Differences were found among AI models in DISCERN, Likert, PEMAT Understandability, and Flesch Reading Ease scores. Conclusion AI LLMs generally cannot offer sufficient information quality and readability. While they are not yet ready for use in the medical field, they show a promising future. There is a necessity for continuous re-evaluation of these models due to their rapid evolution. Developing new, comprehensive tools for evaluating medical information quality and readability is crucial for ensuring these models can effectively support patient education. Future research should focus on enhancing readability and consistent information quality to better serve patients.
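Collecting model answers for this kind of study can be scripted; a minimal sketch using OpenAI's Python client (the model name, API-key setup, and question list are assumptions; the other chatbots in the study have their own APIs or web interfaces):

```python
# Hedged sketch: batch-querying a chat model with patient FAQs.
# Requires: pip install openai, plus an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

questions = [
    "What is a rotator cuff tear?",             # hypothetical FAQ
    "Can a rotator cuff tear heal on its own?"  # hypothetical FAQ
]

answers = {}
for q in questions:
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed model name; substitute as needed
        messages=[{"role": "user", "content": q}],
    )
    answers[q] = resp.choices[0].message.content

for q, a in answers.items():
    print(q, "->", a[:80], "...")
```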
Affiliation(s)
- Ulas Can Kolac: Department of Orthopedics and Traumatology, Hacettepe University Faculty of Medicine, Ankara, Turkey
- Gokhan Ayik: Department of Orthopedics and Traumatology, Yuksek Ihtisas University Faculty of Medicine, Ankara, Turkey
- Mehmet Kaymakoglu: Department of Orthopedics and Traumatology, Faculty of Medicine, Izmir University of Economics, Izmir, Turkey
- Filippo Familiari: Department of Orthopaedics, Magna Graecia University of Catanzaro, Catanzaro, Italy; Research Center on Musculoskeletal Health, MusculoSkeletalHealth@UMG, Magna Graecia University, Catanzaro, Italy
- Gazi Huri: Department of Orthopedics and Traumatology, Hacettepe University Faculty of Medicine, Ankara, Turkey; Aspetar, Orthopedic and Sports Medicine Hospital, FIFA Medical Center of Excellence, Doha, Qatar
8. Obana KK, Lind DR, Luzzi AJ, O’Connor MJ, LeVasseur MR, Levine WN. Online patient questions regarding reverse total shoulder arthroplasty pertain to timeline of recovery, specific activities, and limitations. JSES Rev Rep Tech 2025;5:7-13. [PMID: 39872341] [PMCID: PMC11764610] [DOI: 10.1016/j.xrrt.2024.09.005]
Abstract
Background Reverse total shoulder arthroplasty (rTSA) demonstrates favorable long-term data and has outpaced anatomic total shoulder arthroplasty and hemiarthroplasty as the most-performed shoulder arthroplasty procedure. As indications and outcomes continue to favor rTSA, patients may turn to the internet as an efficient modality to answer various questions or concerns. This study investigates online patient questions pertaining to rTSA and the quality of the websites providing information. Hypotheses (1) Questions will pertain to surgical indications, timeline of recovery, and postoperative restrictions; and (2) the quality and transparency of online information are largely heterogeneous. Methods Three rTSA searches were entered into Google Web Search. Questions under the "People also ask" tab were expanded sequentially and 100 consecutive results for each query were included for analysis (300 in total). Questions were categorized based on Rothwell's Classification and subcategorized by topic. Websites were categorized by source. Website quality was evaluated by the Journal of the American Medical Association (JAMA) Benchmark Criteria. Results Most questions fell into the Rothwell Fact category (49.7%). The most common question topics were Timeline of Recovery (17.3%), Specific Activities (14.7%), and Restrictions (11.3%). The least common question topics were Anatomy/Function (0.0%), Cost (0.3%), and Diagnoses/Evaluation (0.3%). The most common websites were Medical Practice (45.0%), Academic (22.3%), and Single Surgeon (12.3%). PubMed articles made up 41.2% of Government websites. The average JAMA score for all websites was 1.48 ± 1.27. Government websites had the highest JAMA score (3.11 ± 1.01) and constituted 55.9% of all websites with a score of 4/4. Medical Practice websites had the lowest JAMA score (0.99 ± 0.91). Conclusion Patients are interested in the timeline of recovery, ability to perform specific activities after surgery, and short-term and long-term restrictions following rTSA. Although all patients will benefit from education on ways to perform activities of daily living while abiding by postoperative restrictions, physicians should set preoperative expectations regarding return-to-activity following rTSA in younger, more active patients. Finally, surgeons should provide patients with physical booklets and online information available on their websites to avoid reliance on low-quality online sources.
Affiliation(s)
- Kyle K. Obana: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Medical Center, New York, NY, USA
- Dane R.G. Lind: Center for Regenerative and Personalized Medicine, Steadman Philippon Research Institute, Vail, CO, USA
- Andrew J. Luzzi: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Medical Center, New York, NY, USA
- Michaela J. O’Connor: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Medical Center, New York, NY, USA
- Matthew R. LeVasseur: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Medical Center, New York, NY, USA
- William N. Levine: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Medical Center, New York, NY, USA
9. Benaim EH, O’Rourke SP, Dillon MT. What Do People Want to Know About Cochlear Implants: A Google Analytic Study. Laryngoscope 2025;135:840-847. [PMID: 39192469] [PMCID: PMC11729566] [DOI: 10.1002/lary.31741]
Abstract
OBJECTIVE Identify the questions most frequently asked online about cochlear implants (CI) and assess the readability and quality of the content. METHODS An observational study of the Google search engine was conducted via a search engine optimization (SEO) tool. The SEO tool listed the questions generated by Google's "People Also Ask" (PAA) feature for the search queries "cochlear implant" and "cochlear implant surgery." The top 50 PAA questions for each query were conceptually classified. Sourced websites were evaluated for readability, transparency and information quality, and ability to answer the question. Readability and accuracy in answering questions were also compared to the responses from ChatGPT 3.5. RESULTS The PAA questions were commonly related to technical details (21%), surgical factors (18%), and postoperative experiences (12%). Sourced websites were mainly from academic institutions, followed by commercial companies. Among all types of websites, readability, on average, did not meet the recommended standard for health-related patient education materials. Only two websites were at or below the 8th-grade level. Responses by ChatGPT had significantly poorer readability compared to the websites (p < 0.001). These online resources were not significantly different in the percentage of questions answered accurately (websites: 78%, ChatGPT: 85%, p = 0.136). CONCLUSIONS The most searched topics were technical details about devices, surgical factors, and the postoperative experience. Unfortunately, most websites did not meet the ideal criteria of readability, quality, and credibility for patient education. These results highlight potential knowledge gaps for patients, deficits in current online education materials, and possible tools to better support CI candidate decision-making. LEVEL OF EVIDENCE NA Laryngoscope, 135:840-847, 2025.
Affiliation(s)
- Ezer H. Benaim: Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A
- Samuel P. O’Rourke: Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A
- Margaret T. Dillon: Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A
10. Balachandran U, Ren R, Vicioso C, Park J, Nietsch KS, Sacks B, Busigo Torres R, Ranade SC. What are patients asking and reading online? An analysis of online patient searches about treatments for developmental dysplasia of the hip. J Child Orthop 2025;19:92-98. [PMID: 39802482] [PMCID: PMC11724399] [DOI: 10.1177/18632521241310318]
Abstract
Purpose We aimed to analyze frequently searched questions through Google's "People Also Ask" feature related to four common treatments for developmental dysplasia of the hip (DDH): the Pavlik harness, rhino brace, closed reduction surgery and open reduction surgery. Methods Search terms for each treatment were entered into Google Web Search using a clean-install Google Chrome browser. The top frequently asked questions and associated websites were extracted. Questions were categorized using the Rothwell classification model. Websites were evaluated using the JAMA Benchmark Criteria. Chi-square tests were performed. Results The initial search yielded 828 questions. Of 479 included questions, the most popular topics were specific activities that patients with DDH can/cannot do (32.8%), technical details about treatments (30.9%) and indications for treatments (18.2%). Websites were commonly academic (59.3%), commercial (40.5%) and governmental (12.3%). There were significantly more specific-activity questions about Pavlik harnesses than about rhino braces (χ2 = 7.1, p = 0.008), closed reduction (χ2 = 56.5, p < 0.001) and open reduction (χ2 = 14.7, p < 0.001). There were also significantly more technical-details questions about Pavlik harnesses than about closed reduction (χ2 = 4.1, p = 0.04). Conclusions This study provides insights into common concerns that parents have about their children's DDH treatment, enabling orthopaedic surgeons to provide more effective and targeted consultations. This is particularly important for DDH because affected patients are often diagnosed within the first few months of life, leaving parents overwhelmed by caring for a newborn child while simultaneously coping with this diagnosis.
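Chi-square comparisons like these are straightforward to reproduce; a minimal sketch with scipy, mirroring the shape of the Pavlik-harness-vs-rhino-brace comparison but with invented counts:

```python
# Hedged sketch: chi-square test comparing question-category counts
# between two treatments. The counts below are invented for illustration.
from scipy.stats import chi2_contingency

#        specific-activity Qs, other Qs
table = [
    [52, 100],  # hypothetical Pavlik harness counts
    [25, 120],  # hypothetical rhino brace counts
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3f}, dof = {dof}")
```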
Affiliation(s)
- Uma Balachandran: Leni and Peter W. May Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Renee Ren: Leni and Peter W. May Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Camila Vicioso: Leni and Peter W. May Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Jiwoo Park: Leni and Peter W. May Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Katrina S Nietsch: Leni and Peter W. May Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Brittany Sacks: Leni and Peter W. May Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Rodnell Busigo Torres: Leni and Peter W. May Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Sheena C Ranade: Leni and Peter W. May Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
11. Fahy S, Oehme S, Milinkovic DD, Bartek B. Enhancing patient education on the role of tibial osteotomy in the management of knee osteoarthritis using a customized ChatGPT: a readability and quality assessment. Front Digit Health 2025;6:1480381. [PMID: 39830641] [PMCID: PMC11738919] [DOI: 10.3389/fdgth.2024.1480381]
Abstract
Introduction Knee osteoarthritis (OA) significantly impacts the quality of life of those afflicted, with many patients eventually requiring surgical intervention. While Total Knee Arthroplasty (TKA) is common, it may not be suitable for younger patients with unicompartmental OA, who might benefit more from High Tibial Osteotomy (HTO). Effective patient education is crucial for informed decision-making, yet most online health information has been found to be too complex for the average patient to understand. AI tools like ChatGPT may offer a solution, but their outputs often exceed the public's literacy level. This study assessed whether a customised ChatGPT could be utilized to improve readability and source accuracy in patient education on Knee OA and tibial osteotomy. Methods Commonly asked questions about HTO were gathered using Google's "People Also Asked" feature and formatted to an 8th-grade reading level. Two ChatGPT-4 models were compared: a native version and a fine-tuned model ("The Knee Guide") optimized for readability and source citation through Instruction-Based Fine-Tuning (IBFT) and Reinforcement Learning from Human Feedback (RLHF). The responses were evaluated for quality using the DISCERN criteria and readability using the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL). Results The native ChatGPT-4 model scored a mean DISCERN score of 38.41 (range 25-46), indicating poor quality, while "The Knee Guide" scored 45.9 (range 33-66), indicating moderate quality. Cronbach's Alpha was 0.86, indicating good interrater reliability. "The Knee Guide" achieved better readability with a mean FKGL of 8.2 (range 5-10.7, ±1.42) and a mean FRES of 60 (range 47-76, ±7.83), compared to the native model's FKGL of 13.9 (range 11-16, ±1.39) and FRES of 32 (range 14-47, ±8.3). These differences were statistically significant (p < 0.001). Conclusions Fine-tuning ChatGPT significantly improved the readability and quality of HTO-related information. "The Knee Guide" demonstrated the potential of customized AI tools in enhancing patient education by making complex medical information more accessible and understandable.
Affiliation(s)
- Stephen Fahy: Centrum für Muskuloskeletale Chirurgie, Charité Universitätsmedizin Berlin, Berlin, Germany
12. Mastrokostas PG, Mastrokostas LE, Emara AK, Wellington IJ, Ginalis E, Houten JK, Khalsa AS, Saleh A, Razi AE, Ng MK. Modern Internet Search Analytics: Is There a Difference in What Patients are Searching Regarding the Operative and Nonoperative Management of Scoliosis? Global Spine J 2025;15:103-111. [PMID: 38613478] [PMCID: PMC11571444] [DOI: 10.1177/21925682241248110]
Abstract
STUDY DESIGN Observational Study. OBJECTIVES This study aimed to investigate the most searched types of questions and online resources implicated in the operative and nonoperative management of scoliosis. METHODS Six terms related to operative and nonoperative scoliosis treatment were searched on Google's People Also Ask section on October 12, 2023. The Rothwell classification was used to sort questions into fact, policy, or value categories, and associated websites were classified by type. Fisher's exact tests compared question types and websites encountered between operative and nonoperative questions. Statistical significance was set at the .05 level. RESULTS The most common questions concerning operative and nonoperative management were fact (53.4%) and value (35.5%) questions, respectively. The most common subcategory pertaining to operative and nonoperative questions were specific activities/restrictions (21.7%) and evaluation of treatment (33.3%), respectively. Questions on indications/management (13.2% vs 31.2%, P < .001) and evaluation of treatment (10.1% vs 33.3%, P < .001) were associated with nonoperative scoliosis management. Medical practice websites were the most common website to which questions concerning operative (31.9%) and nonoperative (51.4%) management were directed. Operative questions were more likely to be directed to academic websites (21.7% vs 10.0%, P = .037) and less likely to be directed to medical practice websites (31.9% vs 51.4%, P = .007) than nonoperative questions. CONCLUSIONS During scoliosis consultations, spine surgeons should emphasize the postoperative recovery process and the efficacy of conservative treatment modalities for the operative and nonoperative management of scoliosis, respectively. Future research should assess the impact of website encounters on patients' decision-making.
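For the 2x2 comparisons, Fisher's exact test can be run directly in scipy; a minimal sketch echoing the academic-website comparison, with invented counts:

```python
# Hedged sketch: Fisher's exact test on a 2x2 table of website types.
# The counts below are invented for illustration.
from scipy.stats import fisher_exact

#        academic site, other site
table = [
    [30, 108],  # hypothetical operative-question results
    [14, 126],  # hypothetical nonoperative-question results
]

oddsratio, p = fisher_exact(table)
print(f"odds ratio = {oddsratio:.2f}, p = {p:.3f}")
```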
Affiliation(s)
- Paul G. Mastrokostas: College of Medicine, State University of New York (SUNY) Downstate, Brooklyn, NY, USA
- Ahmed K. Emara: Department of Orthopaedic Surgery, Cleveland Clinic, Cleveland, OH, USA
- Ian J. Wellington: Department of Orthopaedic Surgery, University of Connecticut, Hartford, CT, USA
- John K. Houten: Department of Neurosurgery, Mount Sinai School of Medicine, New York, NY, USA
- Amrit S. Khalsa: Department of Orthopaedic Surgery, University of Pennsylvania, Philadelphia, PA, USA
- Ahmed Saleh: Department of Orthopaedic Surgery, Maimonides Medical Center, Brooklyn, NY, USA
- Afshin E. Razi: Department of Orthopaedic Surgery, Maimonides Medical Center, Brooklyn, NY, USA
- Mitchell K. Ng: Department of Orthopaedic Surgery, Maimonides Medical Center, Brooklyn, NY, USA
13. Singh S, Errampalli E, Errampalli N, Miran MS. Enhancing Patient Education on Cardiovascular Rehabilitation with Large Language Models. Mo Med 2025;122:67-71. [PMID: 39958590] [PMCID: PMC11827661]
Abstract
Introduction Barriers exist that prevent individuals from adhering to cardiovascular rehabilitation programs. A key driver of patient adherence is appropriate patient education, and a growing education tool is the use of large language models to answer patient questions. Methods The primary objective of this study was to evaluate the readability quality of educational responses provided by large language models for questions regarding cardiac rehabilitation, using Gunning Fog, Flesch-Kincaid, and Flesch Reading Ease scores. Results The findings of this study demonstrate that the mean Gunning Fog, Flesch-Kincaid, and Flesch Reading Ease scores do not meet US grade reading level recommendations across three models: ChatGPT 3.5, Copilot, and Gemini. The Gemini and Copilot models demonstrated greater ease of readability compared to ChatGPT 3.5. Conclusions Large language models could serve as educational tools on cardiovascular rehabilitation, but there remains a need to improve text readability for these models to effectively educate patients.
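The Gunning Fog index used here complements the Flesch measures given earlier; it estimates the years of formal schooling needed to understand a passage on first reading. The standard formula, where "complex words" are those with three or more syllables, is:

```latex
\mathrm{Gunning\ Fog} = 0.4\left(\frac{\text{total words}}{\text{total sentences}} + 100\,\frac{\text{complex words}}{\text{total words}}\right)
```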
Affiliation(s)
- Som Singh: University of Missouri-Kansas City, Kansas City, Missouri, and the University of Texas Health Science Center, Houston, Texas
- Nathan Errampalli: George Washington University School of Medicine and Health Sciences, Washington, DC
14. Dubin J, Sudah SY, Moverman MA, Pagani NR, Puzzitiello RN, Menendez ME, Guss MS. Google Search Analytics for Lateral Epicondylitis. Hand (N Y) 2025;20:32-36. [PMID: 37746689] [PMCID: PMC11653332] [DOI: 10.1177/15589447231199799]
Abstract
BACKGROUND The use of online search engines for health information is becoming common practice. We analyzed Google search queries to identify the most frequently asked topics and questions related to lateral epicondylitis ("tennis elbow") and the Web sites provided to address these questions. METHODS Four search terms for lateral epicondylitis were entered into Google Web Search. A list of the most frequently asked questions along with their associated Web sites was extracted and categorized by 2 independent reviewers. RESULTS A total of 400 questions were extracted with 168 associated Web sites. The most popular question topics were related to indications/management (39.0%), risks/complications (19.5%), and the ability to perform specific activities (18.8%). Frequently asked questions had to do with the duration of symptoms, self-management strategies (eg, brace use and self-massage), and the indications for surgery. The most common Web sites provided to address these questions were social media (27.5%), commercial (24.5%), academic (16.5%), and medical practice (16.3%). CONCLUSION The most frequently asked questions about lateral epicondylitis on Google centered around symptom duration and management, with most information originating from social media and commercial Web sites. Our data can be used to anticipate patient concerns and set expectations regarding the prognosis and management of lateral epicondylitis.
Affiliation(s)
- Suleiman Y. Sudah: Department of Orthopedic Surgery, Monmouth Medical Center, Long Branch, NJ, USA
- Michael A. Moverman: Department of Orthopedic Surgery, Tufts Medical Center, Tufts University School of Medicine, Boston, MA, USA
- Nicholas R. Pagani: Department of Orthopedic Surgery, Tufts Medical Center, Tufts University School of Medicine, Boston, MA, USA
- Richard N. Puzzitiello: Department of Orthopedic Surgery, Tufts Medical Center, Tufts University School of Medicine, Boston, MA, USA
- Michael S. Guss: Department of Orthopaedic Surgery, Hand Surgery, Newton-Wellesley Hospital, Tufts University School of Medicine, Newton, MA, USA
15. Hunter N, Wright A, Jin V, Tritter A. Retrograde Cricopharyngeal Dysfunction: A Google Search Analysis. Otolaryngol Head Neck Surg 2024;171:1808-1815. [PMID: 39413313] [DOI: 10.1002/ohn.1022]
Abstract
OBJECTIVE No studies describe what patients search for online in relation to retrograde cricopharyngeal dysfunction (RCPD). Our objectives were to describe the Google search volume for RCPD, identify the most common queries related to RCPD, and evaluate the available online resources. STUDY DESIGN Observational. SETTING Google Database. METHODS Using Ahrefs and Search Response, Google search volume for RCPD and "People Also Ask" (PAA) questions were documented. PAA questions were categorized based on intent, and the websites were categorized by source. The quality and readability of the sources were determined using the Journal of the American Medical Association (JAMA) benchmark criteria, Flesch Reading Ease score, and Flesch-Kincaid Grade Level. RESULTS Search volume for RCPD-related content has continually increased since 2021, with a combined average volume of 6287 searches per month. Most PAA questions were related to technical details (61.07%) and treatments (32.06%) for RCPD. Websites provided to answer these questions were most often from academic (25.95%) and commercial (22.14%) sources. None of the sources met the criteria for universal readability, and only 15% met all quality metrics set forth by JAMA. CONCLUSION Interest in RCPD is at an all-time high, with information related to its diagnosis and treatment most popular among Google users. Notably, none of the resources provided by Google met the criteria for universal readability, preventing many patients from fully comprehending the information presented. Future work should aim to address questions related to RCPD in a way suitable for all patient demographics.
Affiliation(s)
- Aidan Wright: UTHealth Houston-McGovern Medical School, Houston, Texas, USA
- Vivian Jin: UTHealth Houston-McGovern Medical School, Houston, Texas, USA; Department of Otorhinolaryngology-Head and Neck Surgery, UTHealth Houston, Houston, Texas, USA
- Andrew Tritter: UTHealth Houston-McGovern Medical School, Houston, Texas, USA; Department of Otorhinolaryngology-Head and Neck Surgery, UTHealth Houston, Houston, Texas, USA
16. Eryilmaz A, Aydin M, Turemis C, Surucu S. ChatGPT-4.0 vs. Google: Which Provides More Academic Answers to Patients' Questions on Arthroscopic Meniscus Repair? Cureus 2024;16:e76380. [PMID: 39867098] [PMCID: PMC11760333] [DOI: 10.7759/cureus.76380]
Abstract
Purpose The purpose of this study was to evaluate the ability of Chat Generative Pre-trained Transformer (ChatGPT) to provide academic answers to frequently asked questions (FAQs), compared with Google web search FAQs and answers. This study attempted to determine what patients ask on Google and ChatGPT and whether ChatGPT and Google provide factual information for patients about arthroscopic meniscus repair. Methods A cleanly installed Google Chrome browser and ChatGPT were used to ensure that no individual cookies, browsing history, other site data, or sponsored sites influenced the results. The term "arthroscopic meniscus repair" was entered into the Google Chrome browser and ChatGPT. The first 15 frequently asked questions (FAQs), their answers, and the sources of those answers were identified from both ChatGPT and Google. Results Timeline of recovery (20%) and technical details (20%) were the most commonly asked question categories of a total of 30 questions. Technical details and timeline of recovery questions were more commonly asked on ChatGPT compared to Google (technical details: 33.3% vs. 6.6%, p=0.168; timeline of recovery: 26.6% vs. 13.3%, p=0.651). Answers were more commonly drawn from academic websites on ChatGPT compared to Google (93.3% vs. 20%, p=0.0001). The most common answer sources on Google were academic (20%) and commercial (20%) websites. Conclusion Compared to Google, ChatGPT provided significantly fewer references to commercial content and offered responses that were more aligned with academic sources. ChatGPT may be a valuable adjunct in patient education when used under physician supervision, ensuring information aligns with evidence-based practices.
Affiliation(s)
- Atahan Eryilmaz: Orthopedic Surgery, Haseki Training and Research Hospital, Istanbul, TUR
- Mahmud Aydin: Orthopedic Surgery, Sisli Memorial Hospital, Istanbul, TUR
- Cihangir Turemis: Orthopedic Surgery, Cesme Alper Cizgenakat State Hospital, Izmir, TUR
- Serkan Surucu: Orthopedics and Rehabilitation, Yale University, New Haven, USA
17. Obana KK, Law C, Mastroianni MA, Abdelaziz A, Alexander FJ, Ahmad CS, Trofa DP. Patients With Posterior Cruciate Ligament Injuries Obtain Information Regarding Diagnosis, Management, and Recovery from Low-Quality Online Resources. Phys Sportsmed 2024;52:601-607. [PMID: 38651524] [DOI: 10.1080/00913847.2024.2346462]
Abstract
OBJECTIVES This study investigates the most common online patient questions pertaining to posterior cruciate ligament (PCL) injuries and the quality of the websites providing information. METHODS Four PCL search queries were entered into the Google Web Search. Questions under the 'People also ask' tab were expanded in order and 100 results for each query were included (400 total). Questions were categorized based on Rothwell's Classification of Questions (Fact, Policy, Value). Websites were categorized by source (Academic, Commercial, Government, Medical Practice, Single Surgeon Personal, Social Media). Website quality was evaluated based on the Journal of the American Medical Association (JAMA) Benchmark Criteria. Pearson's chi-squared was used to assess categorical data. Cohen's kappa was used to assess inter-rater reliability. RESULTS Most questions fell into the Rothwell Fact category (54.3%). The most common question topics were Diagnosis/Evaluation (18.0%), Indications/Management (15.5%), and Timeline of Recovery (15.3%). The least common question topics were Technical Details of Procedure (1.5%), Cost (0.5%), and Longevity (0.5%). The most common websites were Medical Practice (31.8%) and Commercial (24.3%), while the least common were Government (8.5%) and Social Media (1.5%). The average JAMA score for websites was 1.49 ± 1.36. Government websites had the highest JAMA score (3.00 ± 1.26) and constituted 42.5% of all websites with a score of 4/4. Comparatively, Single Surgeon Personal websites had the lowest JAMA score (0.76 ± 0.87, range [0-2]). PubMed articles constituted 70.6% (24/34) of Government websites; 70.8% (17/24) had a JAMA score of 4 and 20.8% (5/24) had a score of 3. CONCLUSION Patients search the internet for information regarding diagnosis, treatment, and recovery of PCL injuries and are less interested in the details of the procedure, cost, and longevity of treatment. The low JAMA score reflects the heterogeneous quality and transparency of online information. Physicians can use this information to help guide patient expectations pre- and post-operatively.
Affiliation(s)
- Kyle K Obana: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Christian Law: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Michael A Mastroianni: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Abed Abdelaziz: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Frank J Alexander: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Christopher S Ahmad: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- David P Trofa: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
18. George G, Abbas MJ, Castle JP, Gaudiani MA, Gasparro M, Akioyamen NO, Corsi M, Pratt B, Muh SJ, Lynch TS. Patients With Shoulder Labral Tears Search the Internet to Understand Their Diagnoses and Treatment Options. Arthrosc Sports Med Rehabil 2024;6:100983. [PMID: 39776505] [PMCID: PMC11701988] [DOI: 10.1016/j.asmr.2024.100983]
Abstract
Purpose To analyze the most frequently searched questions associated with shoulder labral pathology and to evaluate the source-type availability and quality. Methods Common shoulder labral pathology-related search terms were entered into Google, and the suggested frequently asked questions were compiled and categorized. In addition, suggested sources were recorded, categorized, and scored for quality of information using JAMA (The Journal of the American Medical Association) benchmark criteria. Statistical analysis was performed to compare the types of questions and their associated sources, as well as the quality of sources. Results In this study, 513 questions and 170 sources were identified and categorized. The most popular topics were diagnosis/evaluation (21.5%) and indications/management (21.1%). The most common website types were academic (27.9%), commercial (25.2%), and medical practice (22.5%). Multiple statistically significant associations were found between specific question categories and their associated source types. The average JAMA quality score for all sources was 1.56, and medical websites had significantly lower quality scores than nonmedical sites (1.05 vs 2.12, P < .001). Conclusions Patients searching the internet for information regarding shoulder labral pathology often look for facts regarding the diagnosis and management of their conditions. They use various source types to better understand their conditions, with government sources being of the highest quality, whereas medical sites showed statistically lower quality. Across the spectrum of questions, the quality of readily available resources varies substantially. Clinical Relevance The use of online resources in health care is expanding. It is important to understand the most commonly asked questions and the quality of information available to patients.
Affiliation(s)
- Gary George: Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, U.S.A
- Muhammad J. Abbas: Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, U.S.A
- Joshua P. Castle: Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, U.S.A
- Michael A. Gaudiani: Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, U.S.A
- Matthew Gasparro: Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, U.S.A
- Noel O. Akioyamen: Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, U.S.A
- Matthew Corsi: Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, U.S.A
- Brittaney Pratt: Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, U.S.A
- Stephanie J. Muh: Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, U.S.A
- T. Sean Lynch: Department of Orthopedic Surgery, Henry Ford Health System, Detroit, Michigan, U.S.A
19. Monos S, Fritz C, Go B, Rajasekaran K. Facial Filler Injections: Questions Patients Ask and Where They Find Answers. ORL J Otorhinolaryngol Relat Spec 2024;86:164-173. [PMID: 39557039] [DOI: 10.1159/000541497]
Abstract
INTRODUCTION The most pressing questions patients ask about facial fillers and the sources to which patients are directed remain incompletely understood. METHODS The search engine optimization tool Ahrefs was utilized to extract Google metadata on searches performed in the USA. The most frequently asked questions were categorized by topic, while websites were categorized by authoring organization. JAMA benchmark criteria were used for website information quality assessment. RESULTS A total of 300 questions for the term "fillers" were extracted. The majority of search queries (24.0%) and monthly search volume (39.3%) pertained to procedural costs. The mean JAMA score for private practice sources (1.1 ± 0.57) was significantly lower than that of corporate sources (2.6 ± 0.55, p = 0.0003) but not significantly lower than academic pages (1.6 ± 1.34, p = 0.483). With respect to monthly search volume, queries concerning lip fillers have been increasingly asked at a rate that exceeds other injection sites. CONCLUSION Online searches for facial fillers often involve the topic of cost and frequently direct patients to websites that contain inadequate information on authorship, attribution, disclosure, and currency. When compared to other anatomic sites, search queries involving lip fillers have increased over the last 3 years.
Affiliation(s)
- Stylianos Monos, Lewis Katz School of Medicine at Temple University, Philadelphia, Pennsylvania, USA
- Christian Fritz, Department of Otorhinolaryngology - Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Beatrice Go, Department of Otorhinolaryngology - Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Karthik Rajasekaran, Department of Otorhinolaryngology - Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA; Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
20
Mastrokostas PG, Mastrokostas LE, Emara AK, Wellington IJ, Ginalis E, Houten JK, Khalsa AS, Saleh A, Razi AE, Ng MK. GPT-4 as a Source of Patient Information for Anterior Cervical Discectomy and Fusion: A Comparative Analysis Against Google Web Search. Global Spine J 2024; 14:2389-2398. [PMID: 38513636 PMCID: PMC11529100 DOI: 10.1177/21925682241241241] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 03/23/2024] Open
Abstract
STUDY DESIGN Comparative study. OBJECTIVES This study aims to compare Google and GPT-4 in terms of (1) question types, (2) response readability, (3) source quality, and (4) numerical response accuracy for the top 10 most frequently asked questions (FAQs) about anterior cervical discectomy and fusion (ACDF). METHODS "Anterior cervical discectomy and fusion" was searched on Google and GPT-4 on December 18, 2023. Top 10 FAQs were classified according to the Rothwell system. Source quality was evaluated using JAMA benchmark criteria and readability was assessed using Flesch Reading Ease and Flesch-Kincaid grade level. Differences in JAMA scores, Flesch-Kincaid grade level, Flesch Reading Ease, and word count between platforms were analyzed using Student's t-tests. Statistical significance was set at the .05 level. RESULTS Frequently asked questions from Google were varied, while GPT-4 focused on technical details and indications/management. GPT-4 showed a higher Flesch-Kincaid grade level (12.96 vs 9.28, P = .003), lower Flesch Reading Ease score (37.07 vs 54.85, P = .005), and higher JAMA scores for source quality (3.333 vs 1.800, P = .016). Numerically, 6 out of 10 responses varied between platforms, with GPT-4 providing broader recovery timelines for ACDF. CONCLUSIONS This study demonstrates GPT-4's ability to elevate patient education by providing high-quality, diverse information tailored to those with advanced literacy levels. As AI technology evolves, refining these tools for accuracy and user-friendliness remains crucial, catering to patients' varying literacy levels and information needs in spine surgery.
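Both platforms in this study were scored with the Flesch Reading Ease and Flesch-Kincaid Grade Level formulas, which are published and stable. A minimal sketch follows; only the syllable counter is a rough approximation, so outputs will drift slightly from the dedicated tools such studies use.

```python
import re

def count_syllables(word: str) -> int:
    # crude vowel-group count; a rough stand-in for a dictionary-based counter
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level).

    Uses the standard published formulas; assumes non-empty English text.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # words per sentence
    spw = syllables / len(words)   # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl

fre, fkgl = readability("The surgeon removes the damaged disc. A graft then fuses the vertebrae.")
print(f"FRE = {fre:.1f}, FKGL = {fkgl:.1f}")
```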
Affiliation(s)
- Paul G. Mastrokostas, College of Medicine, State University of New York (SUNY) Downstate, Brooklyn, NY, USA
- Ahmed K. Emara, Department of Orthopaedic Surgery, Cleveland Clinic, Cleveland, OH, USA
- Ian J. Wellington, Department of Orthopaedic Surgery, University of Connecticut, Hartford, CT, USA
- John K. Houten, Department of Neurosurgery, Mount Sinai School of Medicine, New York, NY, USA
- Amrit S. Khalsa, Department of Orthopaedic Surgery, University of Pennsylvania, Philadelphia, PA, USA
- Ahmed Saleh, Department of Orthopaedic Surgery, Maimonides Medical Center, Brooklyn, NY, USA
- Afshin E. Razi, Department of Orthopaedic Surgery, Maimonides Medical Center, Brooklyn, NY, USA
- Mitchell K. Ng, Department of Orthopaedic Surgery, Maimonides Medical Center, Brooklyn, NY, USA
21
Lim B, Sen S. A cross-sectional quantitative analysis of the readability and quality of online resources regarding thumb carpometacarpal joint replacement surgery. J Hand Microsurg 2024; 16:100119. [PMID: 39234384 PMCID: PMC11369735 DOI: 10.1016/j.jham.2024.100119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2024] [Revised: 06/18/2024] [Accepted: 06/20/2024] [Indexed: 09/06/2024] Open
Abstract
Background Thumb carpometacarpal (CMC) joint osteoarthritis is a common degenerative condition that affects up to 15% of the population older than 30 years. Poor readability of online health resources has been associated with misinformation, inappropriate care, incorrect self-treatment, worse health outcomes, and increased healthcare resource waste. This study aims to assess the readability and quality of online information regarding thumb CMC joint replacement surgery. Methods The terms "thumb joint replacement surgery", "thumb carpometacarpal joint replacement surgery", "thumb cmc joint replacement surgery", "thumb arthroplasty", "thumb carpometacarpal arthroplasty", and "thumb cmc arthroplasty" were searched in Google and Bing. Readability was determined using the Flesch Reading Ease Score (FRES) and the Flesch-Kincaid Reading Grade Level (FKGL). A FRES >65 or a grade level score of sixth grade and under was considered acceptable. Quality was assessed using the Patient Education Materials Assessment Tool (PEMAT) and a modified DISCERN tool. PEMAT scores below 70 were considered poorly understandable and poorly actionable. Results A total of 34 websites underwent quantitative analysis. The average FRES was 54.60 ± 7.91 (range 30.30-67.80). Only 3 (8.82%) websites had a FRES score >65. The average FKGL score was 8.19 ± 1.80 (range 5.60-12.90). Only 3 (8.82%) websites were written at or below a sixth-grade level. The average PEMAT percentage scores for understandability and actionability were 76.82 ± 9.43 (range 61.54-93.75) and 36.18 ± 24.12 (range 0.00-60.00), respectively. Although 22 (64.71%) of the websites met the acceptable standard of 70% for understandability, none met the acceptable standard of 70% for actionability. The average total DISCERN score was 32.00 ± 4.29 (range 24.00-42.00). Conclusions Most websites reviewed were written above recommended reading levels. Most showed acceptable understandability, but none showed acceptable actionability. To avoid the negative outcomes of poor patient understanding of online resources, providers of these resources should optimise accessibility to the average reader by using simple words, avoiding jargon, and analysing texts with readability software before publishing the materials online. Websites should also utilise visual aids and provide clearer pre-operative and post-operative instructions.
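The acceptability cutoffs above (FRES > 65 or grade level of six and under; PEMAT of 70 or more) reduce to simple threshold checks. A compact sketch with invented scores, not the study's data:

```python
from statistics import mean

# invented example scores for a handful of websites
sites = [
    {"fres": 67.8, "fkgl": 5.6, "pemat_u": 93.8, "pemat_a": 60.0},
    {"fres": 54.1, "fkgl": 8.3, "pemat_u": 76.9, "pemat_a": 40.0},
    {"fres": 30.3, "fkgl": 12.9, "pemat_u": 61.5, "pemat_a": 0.0},
]

readable = [s["fres"] > 65 or s["fkgl"] <= 6 for s in sites]
understandable = [s["pemat_u"] >= 70 for s in sites]
actionable = [s["pemat_a"] >= 70 for s in sites]

print(f"mean FRES = {mean(s['fres'] for s in sites):.2f}")
print(f"acceptable readability: {sum(readable)}/{len(sites)}")
print(f"acceptable understandability: {sum(understandable)}/{len(sites)}")
print(f"acceptable actionability: {sum(actionable)}/{len(sites)}")
```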
Affiliation(s)
- Brandon Lim, Trinity College Dublin, School of Medicine, Dublin, Ireland
- Suddhajit Sen, Department of Trauma and Orthopaedic Surgery, Raigmore Hospital, Inverness, UK
22
Gutta N, Singh S, Patel D, Jamal A, Qureshi F. Digital Education on Hospital Nutrition Diets: What Do Patients Want to Know? Nutrients 2024; 16:3314. [PMID: 39408281 PMCID: PMC11478968 DOI: 10.3390/nu16193314] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2024] [Revised: 09/14/2024] [Accepted: 09/18/2024] [Indexed: 10/20/2024] Open
Abstract
INTRODUCTION Therapeutic nutrition plays an important role during a patient's hospital course, and a substantial body of literature emphasizes the systematic delivery of information regarding hospital nutrition diets. A major component of delivering healthcare information is ensuring its quality, but this has not yet been investigated for hospital nutrition diets. This study aimed to evaluate the comprehension and readability of patient education materials regarding therapeutic hospital diets. METHODOLOGY Publicly available questions regarding hospital nutrition diets were categorized per Rothwell's Classification of Questions. The questions were extracted online, each with an associated linked digital article; these articles underwent analysis for readability scores. RESULTS This study's findings reveal that most educational materials on hospital diets do not meet the recommended grade-reading levels. CONCLUSIONS This underscores the need for healthcare providers to enhance patient education regarding hospital diets. The prevalence of "Fact" questions showcases the importance of clearly explaining diets and dietary restrictions to patients.
Affiliation(s)
- Neha Gutta, Department of Internal Medicine, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Som Singh, Department of Internal Medicine, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Dharti Patel, Department of Internal Medicine, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Aleena Jamal, Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Fawad Qureshi, Department of Nephrology and Hypertension, Mayo Clinic Alix School of Medicine, Rochester, MN 55905, USA
23
Ren R, Busigó Torres R, Sabo GC, Arroyave JS, Stern BZ, Chen DD, Hayden BL, Poeran J, Moucha CS. Characteristics and Quality of Online Searches for Direct Anterior Versus Posterior Approach for Total Hip Arthroplasty. J Arthroplasty 2024; 39:2329-2335.e1. [PMID: 38582372 DOI: 10.1016/j.arth.2024.03.048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/12/2023] [Revised: 03/16/2024] [Accepted: 03/19/2024] [Indexed: 04/08/2024] Open
Abstract
BACKGROUND Online resources are important for patient self-education and reflect public interest. We described commonly asked questions regarding the direct anterior versus posterior approach (DAA, PA) to total hip arthroplasty (THA) and the quality of associated websites. METHODS We extracted the top 200 questions and websites in Google's "People Also Ask" section for 8 queries on January 8, 2023, and grouped websites and questions into DAA, PA, or comparison. Questions were categorized using Rothwell's classification (fact, policy, value) and THA-relevant subtopics. Websites were evaluated by information source, Journal of the American Medical Association Benchmark Criteria (credibility), DISCERN survey (information quality), and readability. RESULTS We included 429 question/website combinations (questions: 52.2% DAA, 21.2% PA, 26.6% comparison; websites: 39.0% DAA, 11.0% PA, 9.6% comparison). Per Rothwell's classification, 56.2% of questions were fact, 31.7% value, 10.0% policy, and 2.1% unrelated. The THA-specific question subtopics differed between DAA and PA (P < .001), specifically for recovery timeline (DAA 20.5%, PA 37.4%), indications/management (DAA 13.4%, PA 1.1%), and technical details (DAA 13.8%, PA 5.5%). Information sources differed between DAA (61.7% medical practice/surgeon) and PA websites (44.7% government; P < .001). The median Journal of the American Medical Association Benchmark score was 1 (limited credibility, interquartile range 1 to 2), with the lowest scores for DAA websites (P < .001). The median DISCERN score was 55 ("good" quality, interquartile range 43 to 65), with the highest scores for comparison websites (P < .001). Median Flesch-Kincaid Grade Level scores were 12th grade level for both DAA and PA (P = .94). CONCLUSIONS Patients' informational interests can guide counseling. Internet searches that explicitly compare THA approaches yielded websites that provide higher-quality information. Providers may also advise patients that physician websites and websites only describing the DAA may have less balanced perspectives, and limited information regarding surgical approaches is available from social media resources.
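The subtopic comparisons above (e.g., recovery timeline 20.5% vs 37.4%, P < .001) are chi-squared tests over contingency tables. A minimal sketch with SciPy; the counts are invented for illustration:

```python
# Chi-squared test on a question-subtopic contingency table.
# Rows: DAA, PA; columns: recovery timeline, indications/management,
# technical details. Counts are invented for illustration.
from scipy.stats import chi2_contingency

table = [[46, 30, 31],
         [34,  1,  5]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```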
Affiliation(s)
- Renee Ren, Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, New York
- Rodnell Busigó Torres, Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, New York
- Graham C Sabo, Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, New York
- Juan Sebastian Arroyave, Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, New York
- Brocha Z Stern, Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, New York; Department of Population Health Science and Policy, Institute for Health Care Delivery Science, Icahn School of Medicine at Mount Sinai, New York, New York
- Darwin D Chen, Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, New York
- Brett L Hayden, Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, New York
- Jashvant Poeran, Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, New York; Department of Population Health Science and Policy, Institute for Health Care Delivery Science, Icahn School of Medicine at Mount Sinai, New York, New York
- Calin S Moucha, Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, New York
24
Yamamura A, Watanabe S, Yamaguchi S, Iwata K, Kimura S, Mikami Y, Toguchi K, Sakamoto T, Ito R, Nakajima H, Sasho T, Ohtori S. Readability and quality of online patient resources regarding knee osteoarthritis and lumbar spinal stenosis in Japan. J Orthop Sci 2024; 29:1313-1318. [PMID: 37599135 DOI: 10.1016/j.jos.2023.08.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/22/2023] [Revised: 06/10/2023] [Accepted: 08/02/2023] [Indexed: 08/22/2023]
Abstract
BACKGROUND This study aimed to quantify the readability and quality of online patient resources on knee osteoarthritis and lumbar spinal stenosis in Japan. METHODS Three search engines (Google, Yahoo, and Bing) were searched for the terms knee osteoarthritis and lumbar spinal stenosis. The first 30 websites of each search were screened. Duplicate websites and those unrelated to the searched diseases were excluded. The remaining 125 websites (62 on knee osteoarthritis, 63 on lumbar spinal stenosis) were analyzed. Text readability was assessed using two web-based programs (Obi-3 and Readability Research Lab) and lexical density. Website quality was evaluated using the DISCERN score, Clear Communication Index, and Journal of the American Medical Association benchmark criteria. RESULTS Readability scores were high, indicating that the texts were difficult to understand. Only 24 (19%) and six (5%) websites were classified as having average-difficulty readability according to Obi-3 and Readability Research Lab, respectively. The overall quality of information was low: only four (3%) websites were rated as having sufficient quality based on the Clear Communication Index and Journal of the American Medical Association benchmark criteria, and none satisfied the DISCERN quality criteria. CONCLUSIONS Patient information on Japanese websites regarding knee osteoarthritis and lumbar spinal stenosis was difficult to understand. Moreover, the quality of the websites was insufficient. Orthopaedic surgeons should contribute to the creation of high-quality, easy-to-read websites to facilitate patient-physician communication.
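Lexical density, one of the readability measures used here, is the share of content words among all words. The study computed it for Japanese text with dedicated tools; the sketch below is an English-only proxy in which a small stopword list stands in for the set of function words, which is an illustrative assumption.

```python
# Lexical density = content words / total words.
# The stopword list below is a stand-in for a proper function-word
# inventory and is an assumption for illustration only.
FUNCTION_WORDS = {
    "the", "a", "an", "is", "are", "was", "were", "of", "to", "in",
    "and", "or", "for", "with", "on", "at", "by", "it", "this", "that",
}

def lexical_density(text: str) -> float:
    tokens = [t.strip(".,!?;:").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return len(content) / len(tokens)

print(f"{lexical_density('The knee joint is worn and the cartilage thins.'):.2f}")
```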
Affiliation(s)
- Atsushi Yamamura, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba, Japan
- Shotaro Watanabe, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba, Japan
- Satoshi Yamaguchi, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba, Japan; Graduate School of Global and Transdisciplinary Studies, Chiba University, Chiba, Japan
- Kazunari Iwata, Department of Japanese Language and Literature, University of the Sacred Heart, Tokyo, Japan
- Seji Kimura, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba, Japan
- Yukio Mikami, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba, Japan
- Kaoru Toguchi, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba, Japan
- Takuya Sakamoto, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba, Japan
- Ryu Ito, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba, Japan
- Hirofumi Nakajima, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba, Japan
- Takahisa Sasho, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba, Japan; Center for Preventive Medical Sciences, Chiba University, Chiba, Japan
- Seiji Ohtori, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba, Japan
25
Singh SP, Jamal A, Qureshi F, Zaidi R, Qureshi F. Leveraging Generative Artificial Intelligence Models in Patient Education on Inferior Vena Cava Filters. Clin Pract 2024; 14:1507-1514. [PMID: 39194925 DOI: 10.3390/clinpract14040121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2024] [Revised: 06/13/2024] [Accepted: 07/23/2024] [Indexed: 08/29/2024] Open
Abstract
Background: Inferior vena cava (IVC) filters have become an advantageous treatment modality for patients with venous thromboembolism. As the use of these filters continues to grow, it is imperative for providers to educate patients in a comprehensive yet understandable manner. Generative artificial intelligence models are a growing tool in patient education, but little is known about the readability of the IVC filter information they produce. Methods: This study determined the Flesch Reading Ease (FRE), Flesch-Kincaid, and Gunning Fog readability of IVC filter patient educational materials generated by these artificial intelligence models. Results: The ChatGPT cohort had the highest mean Gunning Fog score at 17.76 ± 1.62, and the Copilot cohort the lowest at 11.58 ± 1.55. The difference between groups in Flesch Reading Ease scores (p = 8.70408 × 10⁻⁸) was statistically significant, albeit with a priori power found to be low at 0.392. Conclusions: The results of this study indicate that the answers generated by the Microsoft Copilot cohort offer a greater degree of readability than those of the ChatGPT cohort regarding IVC filters. Nevertheless, the mean Flesch-Kincaid readability for both cohorts does not meet the recommended U.S. grade reading levels.
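The Gunning Fog index reported here follows the published formula 0.4 × (average sentence length + percentage of complex words), where complex words have three or more syllables. A minimal sketch; the syllable counter is a rough approximation, so scores will differ slightly from dedicated tools.

```python
import re

def syllables(word: str) -> int:
    # crude vowel-group count; adequate for demonstration only
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text: str) -> float:
    """Gunning Fog index: 0.4 * (words/sentence + 100 * complex/words)."""
    sents = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / sents + 100 * len(complex_words) / max(1, len(words)))

print(f"{gunning_fog('The filter intercepts emboli. Retrieval is recommended when feasible.'):.1f}")
```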
Affiliation(s)
- Som P Singh, Department of Internal Medicine, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Aleena Jamal, Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Farah Qureshi, Lake Erie College of Osteopathic Medicine, Erie, PA 16509, USA
- Rohma Zaidi, Department of Internal Medicine, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Fawad Qureshi, Department of Nephrology and Hypertension, Mayo Clinic Alix School of Medicine, Rochester, MN 55905, USA
26
Shepard S, Sajjadi NB, Checketts JX, Hughes G, Ottwell R, Chalkin B, Hartwell M, Vassar M. Examining the Public's Most Frequently Asked Questions About Carpal Tunnel Syndrome and Appraising Online Information About Treatment. Hand (N Y) 2024; 19:768-775. [PMID: 36564990 PMCID: PMC11284989 DOI: 10.1177/15589447221142895] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
BACKGROUND Carpal tunnel syndrome (CTS) is the most common entrapment neuropathy. Patients often search online for health information regarding common musculoskeletal complaints. Thus, the purpose of this study was to use language processing information from Google to assess the content of CTS frequently asked questions (FAQs) searched online and the transparency and quality of online CTS information. METHODS On March 11, 2021, we searched Google for 3 terms "carpal tunnel syndrome treatment," "carpal tunnel syndrome surgical treatment," and "carpal tunnel syndrome non-surgical treatment" until a minimum of 100 FAQs and their answer links were extracted from each search. We used Rothwell classification to categorize the FAQs. The Journal of the American Medical Association's benchmark criteria were used to assess information transparency. Information quality was assessed using the Brief DISCERN tool. RESULTS Our Google search returned 124 unique FAQs. Fifty-six (45.2%) were value based and most were related to the evaluation of treatment options (45/56, 80.4%). The most common source type was medical practices (26.6%). Nearly half of the answer sources (45.9%) were found to be lacking in transparency. One-way analysis of variance revealed a significant difference in mean Brief DISCERN scores among the 5 source types, F(4, 119) = 5.93, P = .0002, with medical practices averaging the worst score (13.73/30). CONCLUSIONS Patients are most commonly searching Google to gain information regarding CTS treatment options. Online sources such as medical practices should use widely accepted rubrics for ensuring transparency and quality prior to publishing CTS information.
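The Brief DISCERN comparison above, F(4, 119) = 5.93, is a one-way ANOVA across five source types. A minimal sketch with SciPy; the scores below are invented for illustration, not the study's data.

```python
# One-way ANOVA of quality scores across source types.
from scipy.stats import f_oneway

medical_practice = [12, 14, 13, 15, 14]
academic         = [20, 22, 19, 24, 21]
government       = [23, 25, 24, 22, 26]
commercial       = [16, 18, 17, 15, 19]
news             = [14, 17, 16, 18, 15]

f_stat, p = f_oneway(medical_practice, academic, government, commercial, news)
print(f"F = {f_stat:.2f}, p = {p:.4f}")
```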
Affiliation(s)
- Samuel Shepard, Oklahoma State University Center for Health Sciences, Tulsa, USA
- Griffin Hughes, Oklahoma State University Center for Health Sciences, Tulsa, USA
- Brian Chalkin, Oklahoma State University Medical Center, Tulsa, USA
- Micah Hartwell, Oklahoma State University Center for Health Sciences, Tulsa, USA
- Matt Vassar, Oklahoma State University Center for Health Sciences, Tulsa, USA
27
Kasthuri VS, Glueck J, Pham H, Daher M, Balmaceno-Criss M, McDonald CL, Diebo BG, Daniels AH. Assessing the Accuracy and Reliability of AI-Generated Responses to Patient Questions Regarding Spine Surgery. J Bone Joint Surg Am 2024; 106:1136-1142. [PMID: 38335266 DOI: 10.2106/jbjs.23.00914] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/12/2024]
Abstract
BACKGROUND In today's digital age, patients increasingly rely on online search engines for medical information. The integration of large language models such as GPT-4 into search engines such as Bing raises concerns over the potential transmission of misinformation when patients search for information online regarding spine surgery. METHODS SearchResponse.io, a database that archives People Also Ask (PAA) data from Google, was utilized to determine the most popular patient questions regarding 4 specific spine surgery topics: anterior cervical discectomy and fusion, lumbar fusion, laminectomy, and spinal deformity. Bing's responses to these questions, along with the cited sources, were recorded for analysis. Two fellowship-trained spine surgeons assessed the accuracy of the answers on a 6-point scale and the completeness of the answers on a 3-point scale. Inaccurate answers were re-queried 2 weeks later. Cited sources were categorized and evaluated against Journal of the American Medical Association (JAMA) benchmark criteria. Interrater reliability was measured with use of the kappa statistic. A linear regression analysis was utilized to explore the relationship between answer accuracy and the type of source, number of sources, and mean JAMA benchmark score. RESULTS Bing's responses to 71 PAA questions were analyzed. The average completeness score was 2.03 (standard deviation [SD], 0.36), and the average accuracy score was 4.49 (SD, 1.10). Among the question topics, spinal deformity had the lowest mean completeness score. Re-querying the questions that initially had answers with low accuracy scores resulted in responses with improved accuracy. Among the cited sources, commercial sources were the most prevalent. The JAMA benchmark score across all sources averaged 2.63. Government sources had the highest mean benchmark score (3.30), whereas social media had the lowest (1.75). CONCLUSIONS Bing's answers were generally accurate and adequately complete, with incorrect responses rectified upon re-querying. The plurality of information was sourced from commercial websites. The type of source, number of sources, and mean JAMA benchmark score were not significantly correlated with answer accuracy. These findings underscore the importance of ongoing evaluation and improvement of large language models to ensure reliable and informative results for patients seeking information regarding spine surgery online amid the integration of these models in the search experience.
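Interrater reliability here is measured with Cohen's kappa, κ = (p_o − p_e)/(1 − p_e), where p_o is observed agreement and p_e is chance agreement from the raters' marginals. A self-contained sketch with invented grades (unweighted kappa; weighted variants are often preferred for ordinal scales):

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Unweighted Cohen's kappa for two raters over the same items."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# invented accuracy grades (6-point scale) from two hypothetical reviewers
a = [5, 4, 6, 5, 3, 5, 4, 6]
b = [5, 4, 5, 5, 3, 5, 4, 6]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```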
Affiliation(s)
- Viknesh S Kasthuri, Warren Alpert Medical School of Brown University, Providence, Rhode Island
- Jacob Glueck, Warren Alpert Medical School of Brown University, Providence, Rhode Island
- Han Pham, Warren Alpert Medical School of Brown University, Providence, Rhode Island
- Mohammad Daher, Department of Orthopedic Surgery, Warren Alpert Medical School of Brown University, Providence, Rhode Island
- Mariah Balmaceno-Criss, Warren Alpert Medical School of Brown University, Providence, Rhode Island; Department of Orthopedic Surgery, Warren Alpert Medical School of Brown University, Providence, Rhode Island
- Christopher L McDonald, Department of Orthopedic Surgery, Warren Alpert Medical School of Brown University, Providence, Rhode Island
- Bassel G Diebo, Warren Alpert Medical School of Brown University, Providence, Rhode Island; Department of Orthopedic Surgery, Warren Alpert Medical School of Brown University, Providence, Rhode Island
- Alan H Daniels, Warren Alpert Medical School of Brown University, Providence, Rhode Island; Department of Orthopedic Surgery, Warren Alpert Medical School of Brown University, Providence, Rhode Island
28
Lim B, Chai A, Shaalan M. A Cross-Sectional Analysis of the Readability of Online Information Regarding Hip Osteoarthritis. Cureus 2024; 16:e60536. [PMID: 38887325 PMCID: PMC11181007 DOI: 10.7759/cureus.60536] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/17/2024] [Indexed: 06/20/2024] Open
Abstract
Introduction Osteoarthritis (OA) is an age-related degenerative joint disease. There is a 25% risk of symptomatic hip OA in patients who live up to 85 years of age. It can impair a person's daily activities and increase their reliance on healthcare services. It is primarily managed with education, weight loss and exercise, supplemented with pharmacological interventions. Poor health literacy is associated with negative treatment outcomes and patient dissatisfaction. A literature search found no previously published studies examining the readability of online information about hip OA. Objectives To assess the readability of healthcare websites regarding hip OA. Methods The terms "hip pain", "hip osteoarthritis", "hip arthritis", and "hip OA" were searched on Google and Bing. Of 240 websites initially considered, 74 unique websites underwent evaluation using the WebFX online readability software (WebFX®, Harrisburg, USA). Readability was determined using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Reading Grade Level (FKGL), Gunning Fog Index (GFI), Simple Measure of Gobbledygook (SMOG), Coleman-Liau Index (CLI), and Automated Readability Index (ARI). In line with recommended guidelines and previous studies, a FRES >65 or a grade level score of sixth grade and under was considered acceptable. Results The average FRES was 56.74±8.18 (range 29.5-79.4). Only nine (12.16%) websites had a FRES score >65. The average FKGL score was 7.62±1.69 (range 4.2-12.9). Only seven (9.46%) websites were written at or below a sixth-grade level according to the FKGL score. The average GFI score was 9.20±2.09 (range 5.6-16.5). Only one (1.35%) website was written at or below a sixth-grade level according to the GFI score. The average SMOG score was 7.29±1.41 (range 5.4-12.0). Only eight (10.81%) websites were written at or below a sixth-grade level according to the SMOG score. The average CLI score was 13.86±1.75 (range 9.6-19.7). All websites were written above a sixth-grade level according to the CLI score. The average ARI score was 6.91±2.06 (range 3.1-14.0). Twenty-eight (37.84%) websites were written at or below a sixth-grade level according to the ARI score. One-sample t-tests showed that FRES (p<0.001, CI -10.2 to -6.37), FKGL (p<0.001, CI 1.23 to 2.01), GFI (p<0.001, CI 2.72 to 3.69), SMOG (p<0.001, CI 0.97 to 1.62), CLI (p<0.001, CI 7.46 to 8.27), and ARI (p<0.001, CI 0.43 to 1.39) scores differed significantly from the accepted standard. One-way analysis of variance (ANOVA) testing of FRES scores (p=0.009) and CLI scores (p=0.009) showed a significant difference between categories. Post hoc testing showed a significant difference between academic and non-profit categories for FRES scores (p=0.010, CI -15.17 to -1.47) and CLI scores (p=0.008, CI 0.35 to 3.29). Conclusions Most websites regarding hip OA are written above recommended reading levels, hence exceeding the comprehension levels of the average patient. The readability of these resources must be improved to give patients better access to online healthcare information, which can lead to improved understanding of their own condition and better treatment outcomes.
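The one-sample t-tests above compare observed readability scores with an accepted standard (here, a FRES of 65). A minimal sketch with SciPy; the scores are invented for illustration:

```python
# One-sample t-test of FRES scores against the accepted standard of 65.
from scipy.stats import ttest_1samp

fres_scores = [56.1, 60.2, 49.8, 58.4, 61.3, 52.7, 55.0]
t_stat, p = ttest_1samp(fres_scores, popmean=65)
print(f"t = {t_stat:.2f}, p = {p:.4f}")
```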
Affiliation(s)
- Brandon Lim, Department of Medicine, School of Medicine, Trinity College Dublin, Dublin, IRL
- Ariel Chai, Department of Medicine, School of Medicine, Trinity College Dublin, Dublin, IRL
- Mohamed Shaalan, Department of Orthopaedics and Traumatology, The Mater Misericordiae University Hospital, Dublin, IRL; Department of Trauma and Orthopaedics, St James's Hospital, Dublin, IRL
29
Obana KK, Lind DR, Mastroianni MA, Rondon AJ, Alexander FJ, Levine WN, Ahmad CS. What are our patients asking Google about acromioclavicular joint injuries?-frequently asked online questions and the quality of online resources. JSES REVIEWS, REPORTS, AND TECHNIQUES 2024; 4:175-181. [PMID: 38706686 PMCID: PMC11065754 DOI: 10.1016/j.xrrt.2024.02.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 05/07/2024]
Abstract
Background Management of acromioclavicular (AC) joint injuries has been an ongoing source of debate, with over 150 variations of surgery described in the literature. Without a consensus on surgical technique, patients are seeking answers to common questions through internet resources. This study investigates the most common online patient questions pertaining to AC joint injuries and the quality of the websites providing information. Hypotheses 1) Question topics will pertain to surgical indications, pain management, and success of surgery, and 2) the quality and transparency of online information will be largely heterogeneous. Methods Three AC joint search queries were entered into the Google Web Search. Questions under the "People also ask" tab were expanded in order, and 100 results for each query were included (300 total). Questions were categorized based on Rothwell's classification. Websites were categorized by source. Website quality was evaluated by the Journal of the American Medical Association (JAMA) Benchmark Criteria. Results Most questions fell into the Rothwell Fact category (48.0%). The most common question topics were surgical indications (28.0%), timeline of recovery (13.0%), and diagnosis/evaluation (12.0%). The least common question topics were anatomy/function (3.3%), evaluation of surgery (3.3%), injury comparison (1.0%), and cost (1.0%). The most common websites were medical practice (44.0%), academic (22.3%), and single surgeon personal (12.3%). The average JAMA score for all websites was 1.0 ± 1.3. Government websites had the highest JAMA score (4.0 ± 0.0) and constituted 45.8% of all websites with a score of 4/4; PubMed articles constituted 63.6% (7/11) of government websites. Comparatively, medical practice websites had the lowest JAMA score (0.3 ± 0.7, range [0-3]). Conclusion Online patient AC joint injury questions pertain to surgical indications, timeline of recovery, and diagnosis/evaluation. Government websites and PubMed articles provide the highest-quality sources of reliable, up-to-date information but constitute the smallest proportion of resources. In contrast, medical practice websites are the most frequently visited yet recorded the lowest quality scores. Physicians should utilize this information to answer frequently asked questions, guide patient expectations, and help provide and identify reliable online resources.
Affiliation(s)
- Kyle K. Obana, Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Dane R.G. Lind, Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Michael A. Mastroianni, Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Alexander J. Rondon, Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Frank J. Alexander, Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- William N. Levine, Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Christopher S. Ahmad, Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
30
Yang J, Ardavanis KS, Slack KE, Fernando ND, Della Valle CJ, Hernandez NM. Chat Generative Pretrained Transformer (ChatGPT) and Bard: Artificial Intelligence Does not yet Provide Clinically Supported Answers for Hip and Knee Osteoarthritis. J Arthroplasty 2024; 39:1184-1190. [PMID: 38237878 DOI: 10.1016/j.arth.2024.01.029] [Citation(s) in RCA: 20] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/20/2023] [Revised: 01/08/2024] [Accepted: 01/11/2024] [Indexed: 02/22/2024] Open
Abstract
BACKGROUND Advancements in artificial intelligence (AI) have led to the creation of large language models (LLMs), such as Chat Generative Pretrained Transformer (ChatGPT) and Bard, that analyze online resources to synthesize responses to user queries. Despite their popularity, the accuracy of LLM responses to medical questions remains unknown. This study aimed to compare the responses of ChatGPT and Bard regarding treatments for hip and knee osteoarthritis with the American Academy of Orthopaedic Surgeons (AAOS) Evidence-Based Clinical Practice Guidelines (CPGs) recommendations. METHODS Both ChatGPT (Open AI) and Bard (Google) were queried regarding 20 treatments (10 for hip and 10 for knee osteoarthritis) from the AAOS CPGs. Responses were classified by 2 reviewers as being in "Concordance," "Discordance," or "No Concordance" with AAOS CPGs. A Cohen's Kappa coefficient was used to assess inter-rater reliability, and Chi-squared analyses were used to compare responses between LLMs. RESULTS Overall, ChatGPT and Bard provided responses that were concordant with the AAOS CPGs for 16 (80%) and 12 (60%) treatments, respectively. Notably, ChatGPT and Bard encouraged the use of non-recommended treatments in 30% and 60% of queries, respectively. There were no differences in performance when evaluating by joint or by recommended versus non-recommended treatments. Studies were referenced in 6 (30%) of the Bard responses and none (0%) of the ChatGPT responses. Of the 6 Bard responses, studies could only be identified for 1 (16.7%). Of the remaining, 2 (33.3%) responses cited studies in journals that did not exist, 2 (33.3%) cited studies that could not be found with the information given, and 1 (16.7%) provided links to unrelated studies. CONCLUSIONS Both ChatGPT and Bard do not consistently provide responses that align with the AAOS CPGs. Consequently, physicians and patients should temper expectations on the guidance AI platforms can currently provide.
Affiliation(s)
- JaeWon Yang, Department of Orthopaedic Surgery, University of Washington, Seattle, Washington
- Kyle S Ardavanis, Department of Orthopaedic Surgery, Madigan Medical Center, Tacoma, Washington
- Katherine E Slack, Elson S. Floyd College of Medicine, Washington State University, Spokane, Washington
- Navin D Fernando, Department of Orthopaedic Surgery, University of Washington, Seattle, Washington
- Craig J Della Valle, Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, Illinois
- Nicholas M Hernandez, Department of Orthopaedic Surgery, University of Washington, Seattle, Washington
31
Ferrell SC, Ferrell MC, Nelson CM, Lippard JS, Beaman J, Vassar M. Understanding Public Perception of Naloxone: A Study of FAQs and Answer Source Credibility. Subst Use Misuse 2024; 59:1352-1356. [PMID: 38688898 DOI: 10.1080/10826084.2024.2341319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 05/02/2024]
Abstract
PURPOSE The most commonly used intervention for opioid overdoses is naloxone. With naloxone soon to be sold over-the-counter in the United States, the goal of this paper is to categorize frequently asked questions (FAQs) and answers about naloxone using internet sources in a cross-sectional fashion. METHODS The terms "narcan" and "naloxone" were searched on a clean Google Chrome browser using the "People also ask" tab to find FAQs and their answer sources. We classified questions and sources and assessed each website's quality and credibility with JAMA benchmark criteria. The Kruskal-Wallis H test was used to assess differences in JAMA score by source type, and a post hoc Dunn's test with a Bonferroni-corrected alpha of 0.005 was used to compare source types. RESULTS Of the 305 unique questions, 202 (66.2%) were classified as fact, 78 (25.6%) were policy, and 25 (8.2%) were value. Of the 144 unique answer sources, the two most common were government entities (55; 38.2%) and commercial entities (47; 32.6%). Ninety-two (of 144, 63.9%) sources met three or more JAMA benchmark criteria. Statistical analysis showed a significant difference in JAMA benchmark scores by source type, H(4) = 12.75, p = 0.0126, and between the mean ranks of academic and government sources (p = 0.0036). CONCLUSION We identified FAQs and their citations about naloxone, highlighting a potential lack of understanding and knowledge of this important intervention. We recommend updating websites to accurately reflect current and useful information for those that may require naloxone.
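The analysis above is a Kruskal-Wallis H test followed by Dunn's post hoc comparisons. A minimal sketch with invented JAMA scores; SciPy covers the H test, while Dunn's test is available in the third-party scikit-posthocs package (an assumption to verify before relying on it):

```python
# Kruskal-Wallis H test of JAMA scores across source types.
from scipy.stats import kruskal

government = [4, 3, 4, 4, 3]
commercial = [2, 3, 2, 1, 2]
academic   = [4, 4, 3, 4, 4]
news       = [3, 2, 3, 3, 2]
nonprofit  = [3, 3, 2, 3, 3]

h, p = kruskal(government, commercial, academic, news, nonprofit)
print(f"H = {h:.2f}, p = {p:.4f}")

# Post hoc Dunn's test with Bonferroni correction (requires the
# third-party scikit-posthocs package):
# import scikit_posthocs as sp
# sp.posthoc_dunn([government, commercial, academic, news, nonprofit],
#                 p_adjust="bonferroni")
```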
Affiliation(s)
- Sydney C Ferrell, Department of Internal Medicine, The University of Vermont Medical Center, Burlington, Vermont, USA
- Matthew C Ferrell, Department of Internal Medicine, The University of Vermont Medical Center, Burlington, Vermont, USA
- Cole M Nelson, Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
- Justin S Lippard, Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
- Jason Beaman, Department of Psychiatry and Behavioral Sciences, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
- Matt Vassar, Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA; Department of Psychiatry and Behavioral Sciences, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma, USA
32
Zhang S, Liau ZQG, Tan KLM, Chua WL. Evaluating the accuracy and relevance of ChatGPT responses to frequently asked questions regarding total knee replacement. Knee Surg Relat Res 2024; 36:15. [PMID: 38566254 PMCID: PMC10986046 DOI: 10.1186/s43019-024-00218-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/02/2024] [Accepted: 03/12/2024] [Indexed: 04/04/2024] Open
Abstract
BACKGROUND Chat Generative Pretrained Transformer (ChatGPT), a generative artificial intelligence chatbot, may have broad applications in healthcare delivery and patient education due to its ability to provide human-like responses to a wide range of patient queries. However, there is limited evidence regarding its ability to provide reliable and useful information on orthopaedic procedures. This study seeks to evaluate the accuracy and relevance of responses provided by ChatGPT to frequently asked questions (FAQs) regarding total knee replacement (TKR). METHODS A list of 50 clinically-relevant FAQs regarding TKR was collated. Each question was individually entered as a prompt to ChatGPT (version 3.5), and the first response generated was recorded. Responses were then reviewed by two independent orthopaedic surgeons and graded on a Likert scale for their factual accuracy and relevance. These responses were then classified into accurate versus inaccurate and relevant versus irrelevant responses using preset thresholds on the Likert scale. RESULTS Most responses were accurate, while all responses were relevant. Of the 50 FAQs, 44/50 (88%) of ChatGPT responses were classified as accurate, achieving a mean Likert grade of 4.6/5 for factual accuracy. On the other hand, 50/50 (100%) of responses were classified as relevant, achieving a mean Likert grade of 4.9/5 for relevance. CONCLUSION ChatGPT performed well in providing accurate and relevant responses to FAQs regarding TKR, demonstrating great potential as a tool for patient education. However, it is not infallible and can occasionally provide inaccurate medical information. Patients and clinicians intending to utilize this technology should be mindful of its limitations and ensure adequate supervision and verification of information provided.
Affiliation(s)
- Siyuan Zhang, Department of Orthopaedic Surgery, National University Health System, Level 11, NUHS Tower Block, 1E Kent Ridge Road, Singapore, 119228, Singapore
- Zi Qiang Glen Liau, Department of Orthopaedic Surgery, National University Health System, Level 11, NUHS Tower Block, 1E Kent Ridge Road, Singapore, 119228, Singapore
- Kian Loong Melvin Tan, Department of Orthopaedic Surgery, National University Health System, Level 11, NUHS Tower Block, 1E Kent Ridge Road, Singapore, 119228, Singapore
- Wei Liang Chua, Department of Orthopaedic Surgery, National University Health System, Level 11, NUHS Tower Block, 1E Kent Ridge Road, Singapore, 119228, Singapore
33
Suresh N, Fritz C, De Ravin E, Rajasekaran K. Modern internet search analytics and thyroidectomy: What are patients asking? World J Otorhinolaryngol Head Neck Surg 2024; 10:49-58. [PMID: 38560040 PMCID: PMC10979046 DOI: 10.1002/wjo2.117] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2022] [Accepted: 05/01/2023] [Indexed: 04/04/2024] Open
Abstract
Objectives Thyroidectomy is among the most commonly performed head and neck surgeries, however, limited existing information is available on topics of interest and concern to patients. Study Design Observational. Setting Online. Methods A search engine optimization tool was utilized to extract metadata on Google-suggested questions that "People Also Ask" (PAA) pertaining to "thyroidectomy" and "thyroid surgery." These questions were categorized by Rothwell criteria and topics of interest. The Journal of the American Medical Association (JAMA) benchmark criteria enabled quality assessment. Results A total of 250 PAA questions were analyzed. Future-oriented PAA questions describing what to expect during and after the surgery on topics such as postoperative management, risks or complications of surgery, and technical details were significantly less popular among the "thyroid surgery" group (P < 0.001, P = 0.005, and P < 0.001, respectively). PAA questions about scarring and hypocalcemia were nearly threefold more popular than those related to pain (335 and 319 vs. 113 combined search engine response page count, respectively). The overall JAMA quality score remained low (2.50 ± 1.07), despite an increasing number of patients searching for "thyroidectomy" (r(77) = 0.30, P = 0.007). Conclusions Patients searching for the nonspecific term "thyroid surgery" received a curated collection of PAA questions that were significantly less likely to educate them on what to expect during and after surgery, as compared to patients with higher health literacy who search with the term "thyroidectomy." This suggests that the content of PAA questions differs based on the presumed health literacy of the internet user.
Affiliation(s)
- Neeraj Suresh, Department of Otorhinolaryngology–Head & Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Christian Fritz, Department of Otorhinolaryngology–Head & Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Emma De Ravin, Department of Otorhinolaryngology–Head & Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA; Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Karthik Rajasekaran, Department of Otorhinolaryngology–Head & Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA; Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
34
McCormick JR, Harkin WE, Hodakowski AJ, Streepy JT, Khan ZA, Mowers CC, Urie BR, Jawanda HS, Jackson GR, Chahla J, Garrigues GE, Verma NN. Analysis of patient-directed search content and online resource quality for ulnar collateral ligament injury and surgery. JSES Int 2024; 8:384-388. [PMID: 38464434 PMCID: PMC10920115 DOI: 10.1016/j.jseint.2023.11.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/12/2024] Open
Abstract
Background Patients use the Internet to learn information about injuries, yet online content remains largely unstudied. This study analyzed patient questions posed online regarding ulnar collateral ligament (UCL) tears or UCL surgical management. Methods Three separate search strings about UCL tear and UCL surgery were queried on the Google search engine. The 300 most commonly asked questions were compiled for each topic, and associated webpage information was collected from the "People also ask" section. Questions were categorized using the Rothwell classification and webpages by Journal of the American Medical Association (JAMA) benchmark criteria. Results The most frequent UCL tear questions were "how long does it take to heal a torn UCL?" and "what is nonsurgical treatment for the UCL?" The most frequent UCL surgery question was "can you retear your UCL after surgery?" The Rothwell classification of questions for UCL tear/UCL surgery was 55%/32% policy, 38%/57% fact, and 7%/11% value, with the highest subcategories being indications/management (46%/25%) and technical details (24%/25%). The most common webpages were academic (39%/29%) and medical practice (24%/26%). The mean JAMA score for all 600 webpages was low (1.2), with journals (mean = 3.4) having the highest score. Medical practice (mean = 0.5) and legal websites (mean = 0.0) had the lowest JAMA scores. Only 30% of webpages provided UCL-specific information. Conclusion Online UCL patient questions commonly pertain to technical details and injury management. Webpages suggested by search engines contain information specific to UCL tears and surgery only one-third of the time. The quality of most webpages provided to patients is poor, with minimal source transparency.
Affiliation(s)
- William E. Harkin, Rush University Medical Center, Department of Orthopedic Surgery, Chicago, IL, USA
- John T. Streepy, Rush University Medical Center, Department of Orthopedic Surgery, Chicago, IL, USA
- Zeeshan A. Khan, Rush University Medical Center, Department of Orthopedic Surgery, Chicago, IL, USA
- Colton C. Mowers, Rush University Medical Center, Department of Orthopedic Surgery, Chicago, IL, USA
- Braedon R. Urie, Rush University Medical Center, Department of Orthopedic Surgery, Chicago, IL, USA
- Harkirat S. Jawanda, Rush University Medical Center, Department of Orthopedic Surgery, Chicago, IL, USA
- Garrett R. Jackson, Rush University Medical Center, Department of Orthopedic Surgery, Chicago, IL, USA
- Jorge Chahla, Rush University Medical Center, Department of Orthopedic Surgery, Chicago, IL, USA
- Grant E. Garrigues, Rush University Medical Center, Department of Orthopedic Surgery, Chicago, IL, USA
- Nikhil N. Verma, Rush University Medical Center, Department of Orthopedic Surgery, Chicago, IL, USA
35
Wright BM, Bodnar MS, Moore AD, Maseda MC, Kucharik MP, Diaz CC, Schmidt CM, Mir HR. Is ChatGPT a trusted source of information for total hip and knee arthroplasty patients? Bone Jt Open 2024; 5:139-146. [PMID: 38354748 PMCID: PMC10867788 DOI: 10.1302/2633-1462.52.bjo-2023-0113.r1] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/16/2024] Open
Abstract
Aims While internet search engines have been the primary information source for patients' questions, artificial intelligence large language models like ChatGPT are trending towards becoming the new primary source. The purpose of this study was to determine if ChatGPT can answer patient questions about total hip (THA) and knee arthroplasty (TKA) with consistent accuracy, comprehensiveness, and easy readability. Methods We posed the 20 most Google-searched questions about THA and TKA, plus ten additional postoperative questions, to ChatGPT. Each question was asked twice to evaluate for consistency in quality. Following each response, we responded with, "Please explain so it is easier to understand," to evaluate ChatGPT's ability to reduce response reading grade level, measured as Flesch-Kincaid Grade Level (FKGL). Five resident physicians rated the 120 responses on 1 to 5 accuracy and comprehensiveness scales. Additionally, they answered a "yes" or "no" question regarding acceptability. Mean scores were calculated for each question, and responses were deemed acceptable if ≥ four raters answered "yes." Results The mean accuracy and comprehensiveness scores were 4.26 (95% confidence interval (CI) 4.19 to 4.33) and 3.79 (95% CI 3.69 to 3.89), respectively. Out of all the responses, 59.2% (71/120; 95% CI 50.0% to 67.7%) were acceptable. ChatGPT was consistent when asked the same question twice, giving no significant difference in accuracy (t = 0.821; p = 0.415), comprehensiveness (t = 1.387; p = 0.171), acceptability (χ2 = 1.832; p = 0.176), and FKGL (t = 0.264; p = 0.793). There was a significantly lower FKGL (t = 2.204; p = 0.029) for easier responses (11.14; 95% CI 10.57 to 11.71) than original responses (12.15; 95% CI 11.45 to 12.85). Conclusion ChatGPT answered THA and TKA patient questions with accuracy comparable to previous reports of websites, with adequate comprehensiveness, but with limited acceptability as the sole information source. ChatGPT has potential for answering patient questions about THA and TKA, but needs improvement.
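The grade-level drop after the simplification prompt (12.15 to 11.14, t = 2.204) can be checked with a t-test. Pairing original and simplified responses by question, as sketched below with invented grade levels, is one reasonable design; the exact pairing used by the study is an assumption here.

```python
# Paired t-test on FKGL before and after a "please explain" follow-up.
from scipy.stats import ttest_rel

original   = [12.9, 11.8, 12.4, 13.1, 10.9, 12.2]
simplified = [11.5, 10.9, 11.4, 12.0, 10.2, 11.1]
t_stat, p = ttest_rel(original, simplified)
print(f"t = {t_stat:.2f}, p = {p:.4f}")
```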
Affiliation(s)
- Benjamin M. Wright, Morsani College of Medicine, University of South Florida, Tampa, Florida, USA
- Michael S. Bodnar, Morsani College of Medicine, University of South Florida, Tampa, Florida, USA
- Andrew D. Moore, Department of Orthopaedic Surgery, University of South Florida, Tampa, Florida, USA
- Meghan C. Maseda, Department of Orthopaedic Surgery, University of South Florida, Tampa, Florida, USA
- Michael P. Kucharik, Department of Orthopaedic Surgery, University of South Florida, Tampa, Florida, USA
- Connor C. Diaz, Department of Orthopaedic Surgery, University of South Florida, Tampa, Florida, USA
- Christian M. Schmidt, Department of Orthopaedic Surgery, University of South Florida, Tampa, Florida, USA
- Hassan R. Mir, Orthopaedic Trauma Service, Florida Orthopedic Institute, Tampa, Florida, USA
36
Scuderi GR, Layson JT, Mont MA. The Rise of Social Media in Total Joint Arthroplasty: An Editorial Viewpoint. J Arthroplasty 2024; 39:283-284. [PMID: 37995982 DOI: 10.1016/j.arth.2023.11.024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/25/2023] Open
37
Gaudiani MA, Castle JP, Gasparro MA, Halkias EL, Adjemian A, McGee A, Fife J, Moutzouros V, Lynch TS. What Do Patients Encounter When Searching Online About Meniscal Surgery? An Analysis of Internet Trends. Orthop J Sports Med 2024; 12:23259671231219014. [PMID: 38274014 PMCID: PMC10809868 DOI: 10.1177/23259671231219014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/20/2023] [Accepted: 06/29/2023] [Indexed: 01/27/2024] Open
Abstract
Background Many patients use the internet to learn about their orthopaedic conditions and find answers to their common questions. However, the sources and quality of information available to patients regarding meniscal surgery have not been fully evaluated. Purpose To determine the most frequently searched questions associated with meniscal surgery based on question type and topic, as well as to assess the website source type and quality. Study Design Cross-sectional study. Methods The following search terms were entered into a web search (www.google.com) using a clean-install browser: "meniscal tear," "meniscus repair," "meniscectomy," "knee scope," "meniscus surgery," and "knee arthroscopy." The Rothwell classification system was used to categorize questions and sort them into 1 of 13 topics relevant to meniscal surgery. Websites were also categorized by source. Website quality was measured with The Journal of the American Medical Association (JAMA) benchmark criteria, reported as medians and interquartile ranges (IQRs). Results A total of 337 unique questions associated with 234 websites were extracted and categorized. The most popular questions were "What is the fastest way to recover from meniscus surgery?" and "What happens if a meniscus tear is left untreated?" Academic websites were associated more commonly with diagnosis questions (41.9%, P < .01). Commercial websites were associated more commonly with cost (71.4%, P = .03) and management (47.6%, P = .02). Government websites addressed a higher proportion of questions regarding timeline of recovery (22.2%, P < .01). Websites associated with medical practices were associated more commonly with risks/complications (43.8%, P = .01), while websites associated with single surgeons were associated more commonly with pain (19.4%, P = .03). Commercial and academic websites had the highest median JAMA benchmark scores (4 [IQR, 3-4] and 3 [IQR, 2-4], respectively), while websites associated with a single surgeon or categorized as "other" had the lowest scores (1 [IQR 1-2] and 1 [IQR 1-1.5], respectively). Conclusion Our study found that the most common questions regarding meniscal surgery were associated with diagnosis of meniscal injury, followed by activities and restrictions after meniscal surgery. Academic websites were significantly associated with diagnosis questions. The highest quality websites were commercial and academic websites.
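Reporting JAMA scores as medians with IQRs by source category, as above, is a simple grouped-quantile computation. A minimal sketch with pandas; the scores are invented for illustration:

```python
# Median and interquartile range of JAMA scores by website category.
import pandas as pd

df = pd.DataFrame({
    "source": ["academic", "academic", "commercial", "commercial",
               "single_surgeon", "single_surgeon"],
    "jama":   [3, 4, 4, 3, 1, 2],
})
summary = df.groupby("source")["jama"].quantile([0.25, 0.5, 0.75]).unstack()
print(summary)  # columns 0.25/0.75 bound the IQR; 0.5 is the median
```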
Affiliation(s)
- Joshua P. Castle: Department of Orthopedic Surgery, Henry Ford Health, Detroit, Michigan, USA
- Anna McGee: Department of Orthopedic Surgery, Henry Ford Health, Detroit, Michigan, USA
- Jonathan Fife: Department of Orthopedic Surgery, Henry Ford Health, Detroit, Michigan, USA
- T. Sean Lynch: Department of Orthopedic Surgery, Henry Ford Health, Detroit, Michigan, USA
38
Singh SP, Ramprasad A, Luu A, Zaidi R, Siddiqui Z, Pham T. Health Literacy Analytics of Accessible Patient Resources in Cardiovascular Medicine: What are Patients Wanting to Know? Kans J Med 2023; 16:309-315. PMID: 38298385. PMCID: PMC10829858. DOI: 10.17161/kjm.vol16.20554.
Abstract
Introduction Internet-based resources are increasingly used as a first-line source of medical knowledge. Patients with cardiovascular disease often rely on these resources for information about numerous diagnostic and therapeutic modalities. However, the reliability of this information is not fully understood. The aim of this study was to provide a descriptive profile of the literacy quality, readability, and transparency of publicly available educational resources in cardiology. Methods The frequently asked questions and associated online educational articles on common cardiovascular diagnostic and therapeutic interventions were investigated using publicly available data from the Google RankBrain machine learning algorithm after applying inclusion and exclusion criteria. Independent raters classified each question using Rothwell's classification and performed readability calculations. Results Collectively, 520 questions and articles were evaluated across 13 cardiac interventions, resulting in 3,120 readability scores. The articles came most frequently from academic institutions, followed by commercial sources. Most questions were classified as "Fact" (76.0%, n = 395), and questions regarding the "Technical Details" of each intervention were the most common subclassification (56.3%, n = 293). Conclusions Our data show that patients most often use online search query programs to seek specific knowledge about each cardiovascular intervention rather than to form an evaluation of the intervention. Additionally, these online patient educational resources continue to fall short of grade-level reading recommendations.
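The readability calculations these analyses rest on are simple functions of word, sentence, and syllable counts. The sketch below implements two standard formulas, Flesch Reading Ease and Flesch-Kincaid Grade Level; the syllable counter is a crude vowel-run heuristic, so a validated readability tool should be preferred in practice.

```python
# Flesch Reading Ease and Flesch-Kincaid Grade Level from raw counts.
# The syllable counter is a rough vowel-group heuristic (an assumption),
# so this is a sketch, not a validated readability instrument.
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    w, s = len(words), sentences
    fre = 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w)   # Flesch Reading Ease
    fkgl = 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59     # Flesch-Kincaid Grade Level
    return fre, fkgl

fre, fkgl = readability("The stent restores blood flow. Recovery often takes one week.")
print(f"FRE {fre:.1f}, FKGL {fkgl:.1f}")
```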
Affiliation(s)
- Som P Singh: University of Missouri-Kansas City School of Medicine, Kansas City, MO; University of Texas Health Sciences Center at Houston, Houston, TX
- Aarya Ramprasad: University of Missouri-Kansas City School of Medicine, Kansas City, MO
- Anh Luu: University of Missouri-Kansas City School of Medicine, Kansas City, MO
- Rohma Zaidi: University of Missouri-Kansas City School of Medicine, Kansas City, MO
- Zoya Siddiqui: University of Missouri-Kansas City School of Medicine, Kansas City, MO
- Trung Pham: University of Missouri-Kansas City School of Medicine, Kansas City, MO
39
Singh SP, Ramprasad A, Qureshi FM, Baig FA, Qureshi F. A Cross-Sectional Study of Graduate Medical Education in Radiological Fellowships using Accessible Content. Curr Probl Diagn Radiol 2023; 52:528-533. PMID: 37246039. DOI: 10.1067/j.cpradiol.2023.05.001.
Abstract
Graduate medical education in radiology serves an imperative role in training the next generation of specialists. Given the regularity of virtual interviews, a fellowship program's website remains a critical first-line source of information for applicants. The aim of this study was to systematically evaluate the websites of radiology fellowship programs across 7 fellowship types. In this cross-sectional descriptive study, 286 graduate medical education fellowship programs in radiology were screened from the Fellowship and Residency Electronic Interactive Database (FREIDA). Extracted data were evaluated for comprehensiveness using 20 content criteria, and a readability score was calculated. The mean comprehensiveness among all fellowship program websites was 55.8% (n = 286), and the average Flesch Reading Ease (FRE) score among the program overview sections was 11.9 (n = 214). ANOVA revealed no statistically significant difference in website comprehensiveness between radiology fellowship types (P = 0.33). The quality of a program's website data continues to play an important role in an applicant's decision-making. Fellowship programs have improved their content availability over time, but continued content reevaluation is needed for tangible improvement.
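The between-group comparison reported here (P = 0.33) is a one-way ANOVA across fellowship types. A minimal sketch with SciPy, using made-up comprehensiveness scores rather than the study's data:

```python
# One-way ANOVA across fellowship types; the scores below are invented
# placeholders to illustrate the test, not the study's measurements.
from scipy import stats

comprehensiveness = {  # hypothetical % comprehensiveness per program website
    "neuroradiology":            [55, 60, 48, 62, 51],
    "interventional radiology":  [58, 49, 66, 53, 57],
    "musculoskeletal radiology": [50, 61, 54, 59, 47],
}

f_stat, p_value = stats.f_oneway(*comprehensiveness.values())
print(f"F = {f_stat:.2f}, P = {p_value:.2f}")
```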
Affiliation(s)
- Som P Singh: University of Missouri Kansas City School of Medicine, Kansas City, MO
- Aarya Ramprasad: University of Missouri Kansas City School of Medicine, Kansas City, MO
- Fahad M Qureshi: University of Missouri Kansas City School of Medicine, Kansas City, MO
- Farhan A Baig: University of Missouri Kansas City School of Medicine, Kansas City, MO
40
Wu MP, Miller LE, Meyer CD, Feng AL, Richmon JD. Online Searches Related to Total Laryngectomy. Laryngoscope 2023; 133:2971-2976. PMID: 36883665. DOI: 10.1002/lary.30643.
Abstract
OBJECTIVE To identify the most frequently asked questions regarding "laryngectomy" through an assessment of online search data. METHODS Google Search data based on the search term "laryngectomy" were analyzed using Google Trends and Search Response. The most common People Also Ask (PAA) questions were identified and classified by concept. Each website linked to its respective PAA question was rated for understandability, ease of reading, and reading grade level. RESULTS Search popularity for the term "laryngectomy" remained stable between 2017 and 2022. The most popular PAA themes were post-laryngectomy speech, laryngectomy comparison to tracheostomy, stoma and stoma care, survival/recurrence, and post-laryngectomy eating. Of the 32 websites linked to the top 50 PAAs, 11 (34%) were at or below an 8th-grade reading level. CONCLUSION Post-laryngectomy speech, eating, survival, the stoma, and the difference between laryngectomy and tracheostomy are the most common topics searched online in relation to "laryngectomy." These are important areas for both patient and healthcare provider education. LEVEL OF EVIDENCE N/A.
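The Google Trends portion of such an analysis can be reproduced with the unofficial pytrends client. This is a sketch under the assumption that pytrends' current API surface (TrendReq, build_payload, interest_over_time) is available; the library scrapes Google Trends and can break when Google changes its endpoints.

```python
# Hedged sketch: relative search interest for "laryngectomy" over the study
# window via the unofficial pytrends client (API surface may change).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["laryngectomy"], timeframe="2017-01-01 2022-12-31")
interest = pytrends.interest_over_time()  # DataFrame indexed by week

# A roughly stable series shows low dispersion relative to its mean.
print(interest["laryngectomy"].describe())
```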
Affiliation(s)
- Michael P Wu: Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, USA
- Lauren E Miller: Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, USA
- Charles D Meyer: Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, USA
- Allen L Feng: Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, USA
- Jeremy D Richmon: Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, USA
41
Khalil LS, Castle JP, Akioyamen NO, Corsi MP, Cominos ND, Dubé M, Lynch TS. What are patients asking and reading online? An analysis of online patient searches for rotator cuff repair. J Shoulder Elbow Surg 2023; 32:2245-2255. PMID: 37263485. DOI: 10.1016/j.jse.2023.04.021.
Abstract
BACKGROUND Patients undergoing rotator cuff surgery often search the internet for information regarding the procedure. One popular source, Google, compiles frequently asked questions and links to websites that may provide answers. This study provides an analysis of the most frequently searched questions associated with rotator cuff surgery. We hypothesized that there would be distinct search patterns associated with online queries about rotator cuff surgery that could provide unique insights into patient concerns. METHODS A set of search terms was entered into Google Web Search using a clean-install Google Chrome browser. Frequently associated questions and their webpages were extracted to a database via a data-mining extension. Questions were categorized by topics relevant to rotator cuff arthroscopy. Websites were categorized by source and scored for quality using the JAMA Benchmark criteria. Pearson χ2 tests were used to analyze nominal data, and Student t tests were performed to compare JAMA Benchmark scores. RESULTS Of the 595 questions generated from the initial search, 372 unique questions associated with 293 websites were extracted and categorized. The most popular question topics were activities/restrictions (20.7%), pain (18.8%), and indications/management (13.2%). The 2 most common website types were academic (35.2%) and medical practice (27.4%). Commercial websites were significantly more likely to be associated with questions about cost (57.1% of all cost questions, P = .01), anatomy/function (62.5%, P = .001), and evaluation of surgery (47.6%, P < .001). Academic websites were more likely to be associated with questions about technical details of surgery (58.1%, P < .001). Medical practice and social media websites were more likely to be associated with activities/restrictions (48.1%, P < .001, and 15.6%, P < .001, respectively). Government websites were more likely to be associated with timeline of recovery (12.8%, P = .01). On a scale of 0-4, commercial and academic websites had the highest JAMA scores (3.06 and 2.39, respectively). CONCLUSION Patients seeking information regarding rotator cuff repair primarily use the Google search engine to ask questions regarding postoperative activity and restrictions, followed by pain and indications/management. Academic websites, which were associated with technical details of surgery, and medical practice websites, which were associated with activities/restrictions, were the 2 most commonly encountered resources. These results emphasize the need for orthopedic surgeons to provide detailed and informative instructions to patients undergoing rotator cuff repair, especially in the postoperative setting.
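The two tests named in the methods map directly onto SciPy calls: a chi-squared test of independence for the nominal question-category-by-website-type data, and an independent-samples t test for JAMA scores. The counts and scores below are invented placeholders, not the study's data.

```python
# Chi-squared test on a 2x2 contingency table and an independent-samples
# t test on JAMA scores; all numbers here are illustrative placeholders.
from scipy import stats

# Rows: website type (commercial, academic); columns: cost questions vs. others.
table = [[12, 30],
         [ 9, 95]]
chi2, p, dof, _expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3f}")

jama_commercial = [3, 4, 3, 2, 3, 4]
jama_academic   = [2, 3, 2, 3, 2, 2]
t, p = stats.ttest_ind(jama_commercial, jama_academic)
print(f"t = {t:.2f}, P = {p:.3f}")
```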
Affiliation(s)
- Lafi S Khalil: McLaren Flint Hospital, Department of Orthopaedic Surgery, Flint, MI, USA
- Joshua P Castle: Department of Orthopaedic Surgery, Henry Ford Hospital, Detroit, MI, USA
- Noel O Akioyamen: Department of Orthopaedic Surgery, Montefiore Medical Center, The Bronx, NY, USA
- Michael Dubé: Northeast Ohio Medical University, Rootstown, OH, USA
- T Sean Lynch: Department of Orthopaedic Surgery, Henry Ford Hospital, Detroit, MI, USA
42
Smith SR, Hodakowski A, McCormick JR, Spaan J, Streepy J, Mowers C, Simcock X. Patient-Directed Online Education for Carpal Tunnel Syndrome and Release: Analysis of What Patients Ask and Quality of Resources. J Hand Surg Glob Online 2023; 5:818-822. PMID: 38106941. PMCID: PMC10721500. DOI: 10.1016/j.jhsg.2023.07.015.
Abstract
Purpose This study classifies common questions searched by patients on the Google search engine and categorizes the types and quality of online education resources used by patients regarding carpal tunnel syndrome (CTS) and carpal tunnel release (CTR). Methods Google results were extracted and compiled using the "People also ask" function for frequent questions and associated web pages for CTS and CTR. Questions were categorized using Rothwell's classification with further topic subcategorization. Web pages were evaluated using the Journal of the American Medical Association (JAMA) Benchmark criteria for source quality. Results Of the 600 questions evaluated, "How do I know if I have carpal tunnel or tendonitis?" and "What causes carpal tunnel to flare up?" were the most commonly investigated questions for CTS. For CTR, frequently investigated questions included "How long after hand surgery can I drive?" and "How do you wipe after carpal tunnel surgery?" By Rothwell classification, the most common question types for CTS were policy (51%), fact (41%), and value (8%), with indications/management (46%) and technical details (27%) the largest subcategories. For CTR, the most common question types were fact (54%), policy (34%), and value (11%), with technical details (31%) and indications/management (26%) the largest subcategories. The most common web page types were academic and medical practice. The mean JAMA score for all 600 web pages was 1.43, with journals (mean = 3.91) having the highest scores and legal (mean = 0.52) and single-surgeon practice websites (mean = 0.28) having the lowest. Conclusions Patients frequently inquire online about etiology, precipitating factors, diagnostic criteria, and activity restrictions regarding CTS/CTR. Overall, the quality of online resources for this topic was poor, especially from single-surgeon practices and legal websites. Clinical relevance Understanding the type and quality of information patients are accessing assists physicians in tailoring counseling to patient concerns, facilitates informed decision-making regarding CTS/CTR, and helps guide patients to high-quality online resources.
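Rothwell's scheme sorts questions into fact (objective information), policy (what course of action to take), and value (evaluation or judgment). Published studies code questions manually; the keyword heuristic below is only an illustration of the three-way split, not the authors' coding instrument.

```python
# Toy Rothwell-style classifier; the keyword lists are hedged assumptions
# meant to illustrate the fact/policy/value split, not a validated coder.
def rothwell_class(question: str) -> str:
    q = question.lower()
    if any(k in q for k in ("should", "how do", "how long", "can i", "when can")):
        return "policy"  # asks what to do or how to act
    if any(k in q for k in ("worth", "better", "best", "how bad", "safe")):
        return "value"   # asks for a judgment or evaluation
    return "fact"        # asks for objective information

for q in ("How do you wipe after carpal tunnel surgery?",        # policy
          "What causes carpal tunnel to flare up?",               # fact
          "Is surgery better than splinting for carpal tunnel?"): # value
    print(rothwell_class(q), "-", q)
```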
Affiliation(s)
- Shelby R Smith: Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL
- Jonathan Spaan: Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL
- John Streepy: Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL
- Colton Mowers: Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL
- Xavier Simcock: Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL
43
Castle JP, Khalil LS, Tramer JS, Huyke-Hernández FA, Haddad J, Fife J, Esho Y, Gasparro MA, Moutzouros V, Lynch TS. Indications for Surgery, Activities After Surgery, and Pain Are the Most Commonly Asked Questions in Anterior Cruciate Ligament Injury and Reconstruction. Arthrosc Sports Med Rehabil 2023; 5:100805. PMID: 37753188. PMCID: PMC10518323. DOI: 10.1016/j.asmr.2023.100805.
Abstract
Purpose To leverage Google's search algorithms to summarize the most commonly asked questions regarding anterior cruciate ligament (ACL) injuries and surgery. Methods Six terms related to ACL tear and/or surgery were searched on a clean-installed Google Chrome browser. The list of questions and their associated websites on the Google search page were extracted after multiple search iterations performed in January 2022. Questions and websites were categorized according to Rothwell's criteria. The Journal of the American Medical Association (JAMA) Benchmark criteria were used to grade website quality and transparency. Descriptive statistics were provided; χ2 tests identified categorical differences, and Student t tests identified differences in JAMA score (significance set at P < .05). Results A total of 273 unique questions associated with 204 websites were identified. The most frequently asked questions involved Indications/Management (20.2%), Specific Activities (15.8%), and Pain (10.3%). The most common website types were Medical Practice (27.9%), Academic (23.5%), and Commercial (19.5%). Academic websites seldom addressed questions regarding Specific Activities (4.7%) but frequently addressed questions regarding Pain (39.3%, P = .027). Although the average JAMA score was relatively high for Academic websites, the average combined score for medical and governmental websites was lower (P < .001) than that for nonmedical websites. Conclusions The most searched questions on Google regarding ACL tears or surgery related to indications for surgery, pain, and postoperative activities. Health information resources stemmed most often from Medical Practice (27.9%), followed by Academic (23.5%) and Commercial (19.5%) websites. Medical websites had lower JAMA quality scores than nonmedical websites. Clinical Relevance These findings may assist physicians in addressing the most frequently searched questions while also guiding their patients to higher-quality resources regarding ACL injuries and surgery.
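The JAMA Benchmark criteria grade a website from 0 to 4, one point each for authorship, attribution, disclosure, and currency. A minimal sketch that totals manually coded criteria follows; the URL and ratings are hypothetical.

```python
# Totals the four JAMA Benchmark criteria for one manually coded website;
# the example record is hypothetical.
from dataclasses import dataclass

@dataclass
class WebsiteRecord:
    url: str
    authorship: bool   # authors and their credentials identified
    attribution: bool  # sources and references listed
    disclosure: bool   # ownership, sponsorship, conflicts disclosed
    currency: bool     # dates of posting and updating given

    def jama_score(self) -> int:
        return sum((self.authorship, self.attribution,
                    self.disclosure, self.currency))

site = WebsiteRecord("https://example.org/acl-tear", True, True, False, True)
print(site.jama_score())  # 3 of 4
```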
Affiliation(s)
- Joshua P. Castle: Department of Orthopaedic Surgery, Henry Ford Hospital, Detroit, Michigan, USA
- Lafi S. Khalil: Department of Orthopaedic Surgery, McLaren Hospital, Flint, Michigan, USA
- Joseph S. Tramer: Department of Orthopaedic Surgery, Division of Sports Medicine, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Jamil Haddad: Michigan State University College of Osteopathic Medicine, East Lansing, Michigan, USA
- Johnathan Fife: Department of Orthopaedic Surgery, Henry Ford Hospital, Detroit, Michigan, USA
- Yousif Esho: Oakland University William Beaumont School of Medicine, Rochester, Michigan, USA
- Matthew A. Gasparro: Department of Orthopaedic Surgery, Henry Ford Hospital, Detroit, Michigan, USA
- Vasilios Moutzouros: Department of Orthopaedic Surgery, Henry Ford Hospital, Detroit, Michigan, USA
- T. Sean Lynch: Department of Orthopaedic Surgery, Henry Ford Hospital, Detroit, Michigan, USA
44
Fotouhi AR, Chiang SN, Said AM, Skolnick GB, Snyder-Warwick AK. What do patients want to know about gender-affirming surgery? Analysis of common patient concerns and online health materials. J Plast Reconstr Aesthet Surg 2023; 85:55-58. PMID: 37473642. DOI: 10.1016/j.bjps.2023.06.060.
Abstract
PURPOSE Patients considering gender-affirming surgery often utilize online health materials to obtain information about procedures. However, the distribution of patient concerns and the content of online resources for gender-affirming surgery have not been examined. We aimed to quantify and comprehensively analyze the most searched questions of patients seeking gender-affirming surgery and to examine the quality and readability of the associated websites providing the answers. METHODS Questions were extracted from Google using the search phrases "gender-affirming surgery," "transgender surgery," "top surgery," and "bottom surgery." Questions were categorized by topic, and the average search volume per month was determined. Websites linked to questions were categorized by type, and the quality of the health information was evaluated using the DISCERN instrument (score range, 16-80). Readability was assessed with the Flesch Reading Ease Score and Flesch-Kincaid Grade Level. RESULTS Ninety questions and associated websites were analyzed. Common questions were most frequently answered by academic websites (30%). Topics included cost (27%), technical details of surgery (23%), and preoperative considerations (11%). The median (interquartile range) DISCERN score across all website categories was 42 (18). The mean readability was at a 12th-grade level, well above the grade 6 reading level recommended by the American Medical Association. CONCLUSIONS Online gender-affirming surgery materials are difficult to comprehend and of poor quality. To enhance patient knowledge, informed consent, and shared decision-making, there is a substantial need to create understandable and high-quality online health information for those seeking gender-affirming surgery.
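The DISCERN instrument cited above rates 16 items from 1 (criterion not met) to 5 (fully met), so totals run from 16 to 80. A small validation-and-sum sketch, with a hypothetical set of item ratings:

```python
# Validates and sums 16 manually assigned DISCERN item ratings (1-5 each);
# the example ratings are hypothetical.
def discern_total(item_scores: list[int]) -> int:
    if len(item_scores) != 16 or not all(1 <= s <= 5 for s in item_scores):
        raise ValueError("DISCERN needs 16 item scores, each from 1 to 5")
    return sum(item_scores)

scores = [3, 4, 2, 3, 3, 2, 4, 3, 2, 3, 3, 2, 3, 2, 2, 1]  # hypothetical rating
print(discern_total(scores))  # 42, coincidentally the median reported above
```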
Affiliation(s)
- Annahita R Fotouhi: Division of Plastic and Reconstructive Surgery, Department of Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Sarah N Chiang: Division of Plastic and Reconstructive Surgery, Department of Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Abdullah M Said: Division of Plastic and Reconstructive Surgery, Department of Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Gary B Skolnick: Division of Plastic and Reconstructive Surgery, Department of Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Alison K Snyder-Warwick: Division of Plastic and Reconstructive Surgery, Department of Surgery, Washington University School of Medicine, St. Louis, MO, USA
45
Ferrell SC, Ferrell MC, Claassen A, Balogun SA, Vassar M. Frequently asked questions about mobility devices among older adults. Eur Geriatr Med 2023; 14:1075-1081. PMID: 37505403. DOI: 10.1007/s41999-023-00815-9.
Abstract
PURPOSE To assess frequently asked questions (FAQs) about mobility devices among older adults. MATERIALS AND METHODS We searched multiple terms on Google to find FAQs. Rothwell's classification, the JAMA benchmark criteria, and Brief DISCERN were used to categorize and assess each entry. RESULTS Our search yielded 224 unique combinations of questions and linked answer sources. Viewing questions alone resulted in 214 unique FAQs, the majority seeking factual information (130/214, 60.7%). Viewing website sources alone resulted in 175 unique answer sources, most of which were retail commercial sites (68/175, 38.9%), followed by non-retail commercial sites (65/175, 37.1%). Statistical analysis showed a significant difference in JAMA benchmark scores by source type (p < 0.0001) and in Brief DISCERN scores by source type (p = 0.0001). DISCUSSION Our findings suggest government, academic, and possibly non-retail commercial sources may provide better-quality information about the use of mobility devices. We recommend that medical providers be prepared to promote and provide quality resources on the risks, benefits, and proper techniques for using mobility devices.
Affiliation(s)
- Sydney C Ferrell: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, 1111 W 17th St., Tulsa, OK, 74107, USA
- Matthew C Ferrell: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, 1111 W 17th St., Tulsa, OK, 74107, USA
- Analise Claassen: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, 1111 W 17th St., Tulsa, OK, 74107, USA
- Seki A Balogun: Department of Geriatric Medicine, The University of Oklahoma College of Medicine, Oklahoma City, OK, USA
- Matt Vassar: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, 1111 W 17th St., Tulsa, OK, 74107, USA; Department of Psychiatry and Behavioral Sciences, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA
46
Fritz C, Barrette LX, Prasad A, Triantafillou V, Suresh N, De Ravin E, Rajasekaran K. Human papillomavirus related oropharyngeal cancer: identifying and quantifying topics of patient interest. J Laryngol Otol 2023; 137:1141-1148. PMID: 36794539. DOI: 10.1017/s0022215123000270.
Abstract
OBJECTIVE As the incidence of human papillomavirus related oropharyngeal cancer continues to rise, it is increasingly important for public understanding to keep pace. This study aimed to identify areas of patient interest and concern regarding human papillomavirus and oropharyngeal cancer. METHOD This study was a retrospective survey of search queries containing the keywords 'HPV cancer' between September 2015 and March 2021. RESULTS There was 3.5-fold more interest in human papillomavirus related oropharyngeal cancer (15,800 searches per month) compared with human papillomavirus related cervical cancer (4,500 searches per month). Among searches referencing cancer appearance, 96.8 per cent pertained to the head and neck region (3,050 searches per month). Among vaccination searches, 16 of 47 (34.0 per cent; 600 searches per month) referenced human papillomavirus vaccines as being a cause of cancer rather than preventing cancer. CONCLUSION The vast majority of online searches into human papillomavirus cancer pertain to the oropharynx. There are relatively few search queries on the topic of vaccination preventing human papillomavirus associated oropharyngeal cancer, which highlights the continued importance of patient education and awareness campaigns.
Affiliation(s)
- C Fritz: Department of Otorhinolaryngology - Head & Neck Surgery, University of Pennsylvania, Philadelphia, PA, USA
- L-X Barrette: Department of Otorhinolaryngology - Head & Neck Surgery, University of Pennsylvania, Philadelphia, PA, USA; Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
- A Prasad: Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- V Triantafillou: Department of Otorhinolaryngology - Head & Neck Surgery, University of Pennsylvania, Philadelphia, PA, USA
- N Suresh: Department of Otorhinolaryngology - Head & Neck Surgery, University of Pennsylvania, Philadelphia, PA, USA
- E De Ravin: Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- K Rajasekaran: Department of Otorhinolaryngology - Head & Neck Surgery, University of Pennsylvania, Philadelphia, PA, USA; Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
47
Varghese KJ, Singh SP, Qureshi FM, Shreekumar S, Ramprasad A, Qureshi F. Digital Patient Education on Xanthelasma Palpebrarum: A Content Analysis. Clin Pract 2023; 13:1207-1214. PMID: 37887084. PMCID: PMC10605081. DOI: 10.3390/clinpract13050108.
Abstract
Patient education has been transformed by digital media and online repositories, which disseminate information with greater efficiency. In dermatology, this transformation has allowed patients to gain education on common cutaneous conditions and improve health literacy. Xanthelasma palpebrarum is one of the most common cutaneous conditions, yet there is a poor understanding of how digital materials affect health literacy on this condition. Our study aimed to address this paucity of literature using Brief DISCERN, Rothwell's Classification of Questions, and six readability calculations. The findings indicate a poor-quality profile (Brief DISCERN < 16) for these digital materials, with readability scores that do not meet grade-level recommendations in the United States. This indicates a need to improve the current body of educational materials on the diagnosis and management of xanthelasma palpebrarum.
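Brief DISCERN, as commonly operationalized, retains 6 of the original DISCERN items, each scored 1-5 for a total of 6-30, with totals below 16 conventionally read as poor quality, which is the cutoff applied above. A sketch with placeholder item scores:

```python
# Brief DISCERN sketch: 6 items scored 1-5 (an assumption about the common
# operationalization), totals 6-30, with >= 16 read as good quality.
GOOD_QUALITY_CUTOFF = 16

def brief_discern(item_scores: list[int]) -> tuple[int, bool]:
    if len(item_scores) != 6 or not all(1 <= s <= 5 for s in item_scores):
        raise ValueError("Brief DISCERN needs 6 item scores, each from 1 to 5")
    total = sum(item_scores)
    return total, total >= GOOD_QUALITY_CUTOFF

total, good = brief_discern([2, 3, 2, 2, 3, 2])  # placeholder ratings
print(total, "good quality" if good else "poor quality")  # 14 poor quality
```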
Affiliation(s)
- Kevin J. Varghese: Department of Biomedical Sciences, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Som P. Singh: Department of Biomedical Sciences, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Fahad M. Qureshi: Department of Biomedical Sciences, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Shreevarsha Shreekumar: Department of Biomedical Sciences, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Aarya Ramprasad: Department of Biomedical Sciences, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Fawad Qureshi: Department of Nephrology, Mayo Clinic Alix School of Medicine, Rochester, MN 55905, USA
48
Yamaguchi S, Kimura S, Watanabe S, Mikami Y, Nakajima H, Yamaguchi Y, Sasho T, Ohtori S. Internet search analysis on the treatment of rheumatoid arthritis: What do people ask and read online? PLoS One 2023; 18:e0285869. PMID: 37738275. PMCID: PMC10516429. DOI: 10.1371/journal.pone.0285869.
Abstract
OBJECTIVES This study aimed to characterize the content of frequently asked questions about the treatment of rheumatoid arthritis (RA) on the internet in Japan and to evaluate the quality of the websites related to those questions. METHODS We searched terms on the treatment of RA on Google and extracted frequently asked questions generated by the Google "people also ask" function. The website that answered each question was also obtained. We categorized the questions based on content. The quality of the websites was evaluated using the Brief DISCERN, the Journal of the American Medical Association (JAMA) benchmark criteria, and the Clear Communication Index. RESULTS Our search yielded 83 questions and their corresponding websites. The most frequent questions concerned the timeline of treatment (n = 17, 23%) and the timeline of the clinical course (n = 13, 16%). The median Brief DISCERN score was 11 points, with only 7 (8%) websites having sufficient quality. No website had sufficient quality based on the JAMA benchmark criteria or the Clear Communication Index. CONCLUSIONS Questions most frequently related to the timeline of treatment and the clinical course. Physicians should provide such information to patients with RA in counseling and education materials.
Affiliation(s)
- Satoshi Yamaguchi: Graduate School of Global and Transdisciplinary Studies, Chiba University, Chiba-shi, Chiba, Japan; Department of Orthopaedic Surgery, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba-shi, Chiba, Japan
- Seiji Kimura: Department of Orthopaedic Surgery, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba-shi, Chiba, Japan
- Shotaro Watanabe: Department of Orthopaedic Surgery, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba-shi, Chiba, Japan
- Yukio Mikami: Department of Orthopaedic Surgery, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba-shi, Chiba, Japan
- Hirofumi Nakajima: Department of Orthopaedic Surgery, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba-shi, Chiba, Japan
- Yukiko Yamaguchi: Department of Orthopaedic Surgery, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba-shi, Chiba, Japan
- Takahisa Sasho: Department of Orthopaedic Surgery, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba-shi, Chiba, Japan; Center for Preventive Medical Sciences, Chiba University, Chiba-shi, Chiba, Japan
- Seiji Ohtori: Department of Orthopaedic Surgery, Graduate School of Medical and Pharmaceutical Sciences, Chiba University, Chiba-shi, Chiba, Japan
49
Dubin JA, Bains SS, Chen Z, Hameed D, Nace J, Mont MA, Delanois RE. Using a Google Web Search Analysis to Assess the Utility of ChatGPT in Total Joint Arthroplasty. J Arthroplasty 2023; 38:1195-1202. PMID: 37040823. DOI: 10.1016/j.arth.2023.04.007.
Abstract
BACKGROUND Rapid technological advancements have laid the foundations for the use of artificial intelligence in medicine. The promise of machine learning (ML) lies in its potential to improve treatment decision-making, predict adverse outcomes, and streamline the management of perioperative healthcare. In an increasingly consumer-focused healthcare model, unprecedented access to information may extend to patients using ChatGPT to gain insight into medical questions. The main objective of our study was to replicate a patient's internet search in order to compare ChatGPT, a machine learning tool released in 2022 that provides dialogue responses to queries, with Google Web Search, the most widely used search engine in the United States, as a resource for online health information. For the 2 search engines, we compared: (i) the most frequently asked questions (FAQs) associated with total knee arthroplasty (TKA) and total hip arthroplasty (THA), by question type and topic; (ii) the answers to those FAQs; and (iii) the FAQs yielding a numerical response. METHODS A Google web search was performed with the search terms "total knee replacement" and "total hip replacement." These terms were entered individually, and the first 10 FAQs were extracted along with the source website for each question. The following statements were entered into ChatGPT: (1) "Perform a google search with the search term 'total knee replacement' and record the 10 most FAQs related to the search term" and (2) "Perform a google search with the search term 'total hip replacement' and record the 10 most FAQs related to the search term." The Google web search was then repeated with the same search terms to identify the first 10 FAQs that included a numerical response for both terms; these questions were entered into ChatGPT, and the questions and answers were recorded. RESULTS Five of 20 (25%) questions were similar between the Google web search and the ChatGPT search across all search terms. Of the 20 questions from the Google web search, 13 were answered by commercial websites. For ChatGPT, 15 of 20 (75%) questions were answered from government websites, most frequently PubMed. For the numerical questions, 11 of 20 (55%) FAQs yielded different responses between the Google web search and ChatGPT. CONCLUSION Comparing the FAQs from a Google web search with an attempted replication by ChatGPT revealed heterogeneous questions and responses for both open-ended and discrete questions. ChatGPT should be regarded as a potential patient resource that needs further corroboration: its use cannot be fully endorsed until its ability to provide credible information is verified and shown to be concordant with the goals of the physician and the patient alike.
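The headline overlap figure (5 of 20 questions, 25%) comes from matching the two FAQ lists against each other. A sketch of that comparison using exact matching after light normalization follows; the authors judged similarity manually, and the questions below are placeholders.

```python
# Counts shared questions between two FAQ lists after light normalization;
# exact matching stands in for the authors' manual similarity judgments,
# and the FAQ lists are invented placeholders.
import re

def normalize(q: str) -> str:
    return re.sub(r"[^a-z0-9 ]", "", q.lower()).strip()

google_faqs  = ["How long does a total knee replacement last?",
                "How painful is a total knee replacement?"]
chatgpt_faqs = ["How long does a total knee replacement last?",
                "What is the recovery time for a total knee replacement?"]

shared = {normalize(q) for q in google_faqs} & {normalize(q) for q in chatgpt_faqs}
overlap = len(shared) / (len(google_faqs) + len(chatgpt_faqs))
print(f"{len(shared)} shared question(s); overlap = {overlap:.0%} of all questions")
```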
Affiliation(s)
- Jeremy A Dubin: LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland
- Sandeep S Bains: LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland
- Zhongming Chen: LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland
- Daniel Hameed: LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland
- James Nace: LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland
- Michael A Mont: LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland
- Ronald E Delanois: LifeBridge Health, Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland
50
Phelps CR, Shepard S, Hughes G, Gurule J, Scott J, Raszewski J, Hatic S, Hawkins B, Vassar M. Insights Into Patients Questions Over Bunion Treatments: A Google Study. Foot Ankle Orthop 2023; 8:24730114231198837. PMID: 37767008. PMCID: PMC10521286. DOI: 10.1177/24730114231198837.
Abstract
Background Approximately 1 in 4 adults will develop hallux valgus (HV), and up to 80% of adult Internet users reference online sources for health-related information. Given the high prevalence of HV and the numerous treatment options, patients are likely turning to Internet search engines with questions about HV. Using Google's People Also Ask (PAA) feature, which surfaces frequently asked questions (FAQs), we sought to classify these questions, categorize their sources, and assess the sources' quality and transparency. Methods On October 9, 2022, we searched Google using 4 phrases: "hallux valgus treatment," "hallux valgus surgery," "bunion treatment," and "bunion surgery." The FAQs were classified in accordance with the Rothwell classification schema, and each source was categorized by type. Transparency and quality of the sources' information were evaluated with the Journal of the American Medical Association (JAMA) Benchmark tool and Brief DISCERN, respectively. Results Once duplicates and FAQs unrelated to HV were removed, our search returned 299 unique FAQs. The most common question type related to the evaluation of treatment options (79/299, 26.4%), and the most common source type was medical practices (158/299, 52.8%). Nearly two-thirds of the answer sources (184/299, 61.5%) were lacking in transparency. One-way analysis of variance revealed a significant difference in mean Brief DISCERN scores among the 5 source types, F(4) = 54.49 (P < .001), with medical practices averaging the worst score (12.1/30). Conclusion Patients seeking online information about HV treatment most often search questions about the evaluation of treatment options. The source type patients encounter most is medical practices, which showed both poor transparency and poor quality. Publishing basic information such as publication date, authors or reviewers, and references would greatly improve the transparency and quality of online information regarding HV treatment. Level of Evidence Level V, mechanism-based reasoning.
Affiliation(s)
- Cole R. Phelps: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA
- Samuel Shepard: Department of Orthopaedic Surgery, Kettering Health Network, Dayton, OH, USA
- Griffin Hughes: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA
- Jon Gurule: Department of Orthopaedic Surgery, Oklahoma State University Medical Center, Tulsa, OK, USA
- Jared Scott: Department of Orthopaedic Surgery, Oklahoma State University Medical Center, Tulsa, OK, USA
- Jesse Raszewski: Department of Orthopaedic Surgery, Kettering Health Network, Dayton, OH, USA
- Safet Hatic: Department of Orthopaedic Surgery, Kettering Health Network, Dayton, OH, USA
- Bryan Hawkins: Department of Orthopaedic Surgery, Oklahoma State University Medical Center, Tulsa, OK, USA
- Matt Vassar: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA; Department of Psychiatry and Behavioral Sciences, College of Osteopathic Medicine, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA