1. Migliorini F, Pilone M, Lucenti L, Bardazzi T, Pipino G, Vaishya R, Maffulli N. Arthroscopic Management of Femoroacetabular Impingement: Current Concepts. J Clin Med 2025;14:1455. PMID: 40094916; PMCID: PMC11900325; DOI: 10.3390/jcm14051455.
Abstract
Background: Femoroacetabular impingement (FAI) is a common cause of hip pain and dysfunction, especially in young and active individuals, and it may require surgical management for associated labral tears and cartilage damage. The management of FAI has advanced radically over the last few years, and hip arthroscopy has gained a leading role. However, despite the growing volume of published research and continuing technological advances, a comprehensive systematic review summarising current evidence is still missing. Methods: All clinical studies investigating the arthroscopic management of FAI were accessed. Only studies with a minimum of six months of follow-up were considered. The 2020 PRISMA guidelines were followed. In December 2024, PubMed, Web of Science, and Embase were accessed without time constraints. Results: The present systematic review included 258 clinical investigations (57,803 patients). The mean length of follow-up was 34.2 ± 22.7 months, the mean age 34.7 ± 5.3 years, and the mean BMI 25.1 ± 2.0 kg/m². Conclusions: The present systematic review updates the evidence on arthroscopic surgery for FAI, discussing progress in the management of labral injuries and in patient selection and emphasising outcomes and pitfalls. Advances in surgical technique, refinements in eligibility criteria, and current controversies and prospects are also discussed.
Affiliation(s)
- Filippo Migliorini: Department of Life Sciences, Health, and Health Professions, Link Campus University, Via del Casale di San Pio V, 00165 Rome, Italy; Department of Orthopaedic and Trauma Surgery, Academic Hospital of Bolzano (SABES-ASDAA), 39100 Bolzano, Italy
- Marco Pilone: Residency Program in Orthopedics and Traumatology, University of Milan, 20133 Milan, Italy
- Ludovico Lucenti: Department of Precision Medicine in Medical, Surgical and Critical Care (Me.Pre.C.C.), University of Palermo, 90133 Palermo, Italy
- Tommaso Bardazzi: Department of Orthopaedic and Trauma Surgery, Academic Hospital of Bolzano (SABES-ASDAA), 39100 Bolzano, Italy
- Gennaro Pipino: Department of Orthopaedic Surgery, Villa Erbosa Hospital, San Raffaele University, 20132 Milan, Italy
- Raju Vaishya: Department of Orthopaedic and Trauma Surgery, Indraprastha Apollo Hospitals, New Delhi 110076, India
- Nicola Maffulli: Department of Trauma and Orthopaedic Surgery, Faculty of Medicine and Psychology, University La Sapienza, 00185 Rome, Italy; School of Pharmacy and Bioengineering, Faculty of Medicine, Keele University, Stoke on Trent ST4 7QB, UK; Centre for Sports and Exercise Medicine, Barts and the London School of Medicine and Dentistry, Mile End Hospital, Queen Mary University of London, London E1 4DG, UK

2. Slawaska-Eng D, Bourgeault-Gagnon Y, Cohen D, Pauyo T, Belzile EL, Ayeni OR. ChatGPT-3.5 and -4 provide mostly accurate information when answering patients' questions relating to femoroacetabular impingement syndrome and arthroscopic hip surgery. J ISAKOS 2025;10:100376. PMID: 39674512; DOI: 10.1016/j.jisako.2024.100376.
Abstract
OBJECTIVES This study aimed to evaluate the accuracy of ChatGPT in answering patient questions about femoroacetabular impingement (FAI) and arthroscopic hip surgery, comparing the performance of ChatGPT-3.5 (free) and ChatGPT-4 (paid). METHODS Twelve frequently asked questions (FAQs) relating to FAI were selected and posed to ChatGPT-3.5 and ChatGPT-4. The responses were assessed for accuracy by three hip arthroscopy surgeons using a four-tier grading system. Statistical analyses included Wilcoxon signed-rank tests and Gwet's AC2 coefficient for interrater agreement, corrected for chance and employing quadratic weights. RESULTS The median ratings for responses ranged from "excellent, not requiring clarification" to "satisfactory, requiring moderate clarification." No responses were rated as "unsatisfactory, requiring substantial clarification." The median accuracy scores were 2 (range 1-3) for ChatGPT-3.5 and 1.5 (range 1-3) for ChatGPT-4, with 25% of ChatGPT-3.5's responses and 50% of ChatGPT-4's responses rated as "excellent." There was no statistical difference in performance between the two versions (p = 0.279), although ChatGPT-4 showed a tendency towards higher accuracy in some areas. Interrater agreement was substantial for ChatGPT-3.5 (Gwet's AC2 = 0.79 [95% confidence interval (CI) = 0.6-0.94]) and moderate to substantial for ChatGPT-4 (Gwet's AC2 = 0.65 [95% CI = 0.43-0.87]). CONCLUSION Both versions of ChatGPT provided mostly accurate responses to FAQs on FAI and arthroscopic surgery, with no significant difference between the versions. The findings suggest potential utility of ChatGPT in patient education, though cautious implementation and further evaluation are recommended given the variability in response accuracy and the low power of the study. LEVEL OF EVIDENCE IV.
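
For readers unfamiliar with the paired test reported above, the sketch below shows how such a comparison can be run with SciPy. The rating vectors are hypothetical stand-ins for the graders' data, which are not reproduced here; Gwet's AC2 is omitted because SciPy has no standard implementation of it.

```python
# A minimal sketch of the paired, non-parametric comparison described above.
# Ratings are hypothetical (1 = excellent ... 4 = unsatisfactory), one grade
# per FAQ, for the same 12 questions under both model versions.
from scipy.stats import wilcoxon

gpt35 = [2, 1, 2, 3, 2, 1, 2, 2, 3, 1, 2, 2]
gpt4  = [1, 1, 2, 2, 1, 1, 2, 1, 3, 1, 2, 2]

# Tied pairs (zero differences) are discarded by the default zero_method.
stat, p = wilcoxon(gpt35, gpt4)
print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")
```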
Affiliation(s)
- David Slawaska-Eng: Division of Orthopaedic Surgery, Department of Surgery, McMaster University, 1200 Main St West, Hamilton, Ontario, L8N 3Z5, Canada
- Yoan Bourgeault-Gagnon: Division of Orthopaedic Surgery, Department of Surgery, McMaster University, 1200 Main St West, Hamilton, Ontario, L8N 3Z5, Canada
- Dan Cohen: Division of Orthopaedic Surgery, Department of Surgery, McMaster University, 1200 Main St West, Hamilton, Ontario, L8N 3Z5, Canada
- Thierry Pauyo: Division of Orthopaedic Surgery, McGill University, 845 Rue Sherbrooke O, Montréal, QC H3A 0G4, Canada
- Etienne L Belzile: Division of Orthopaedic Surgery, CHU de Québec-Université Laval, 2705 Bd Laurier, Québec, QC G1V 4G2, Canada
- Olufemi R Ayeni: Division of Orthopaedic Surgery, Department of Surgery, McMaster University, 1200 Main St West, Hamilton, Ontario, L8N 3Z5, Canada

3. Obana KK, Lind DR, Luzzi AJ, O'Connor MJ, LeVasseur MR, Levine WN. Online patients questions regarding reverse total shoulder arthroplasty pertain to timeline of recovery, specific activities, and limitations. JSES Rev Rep Tech 2025;5:7-13. PMID: 39872341; PMCID: PMC11764610; DOI: 10.1016/j.xrrt.2024.09.005.
Abstract
Background Reverse total shoulder arthroplasty (rTSA) demonstrates favorable long-term data and has outpaced anatomic total shoulder arthroplasty and hemiarthroplasty as the most commonly performed shoulder arthroplasty procedure. As indications and outcomes continue to favor rTSA, patients may turn to the internet as an efficient way to answer various questions or concerns. This study investigates online patient questions pertaining to rTSA and the quality of the websites providing the information. Hypotheses (1) Questions will pertain to surgical indications, timeline of recovery, and postoperative restrictions; (2) the quality and transparency of online information are largely heterogeneous. Methods Three rTSA searches were entered into Google Web Search. Questions under the "People also ask" tab were expanded sequentially, and 100 consecutive results for each query were included for analysis (300 in total). Questions were categorized based on Rothwell's Classification and subcategorized by topic. Websites were categorized by source. Website quality was evaluated by the Journal of the American Medical Association (JAMA) Benchmark Criteria. Results Most questions fell into the Rothwell Fact category (49.7%). The most common question topics were Timeline of Recovery (17.3%), Specific Activities (14.7%), and Restrictions (11.3%); the least common were Anatomy/Function (0.0%), Cost (0.3%), and Diagnosis/Evaluation (0.3%). The most common website sources were Medical Practice (45.0%), Academic (22.3%), and Single Surgeon (12.3%). PubMed articles constituted 41.2% of Government websites. The average JAMA score across all websites was 1.48 ± 1.27. Government websites had the highest JAMA score (3.11 ± 1.01) and constituted 55.9% of all websites with a score of 4/4; Medical Practice websites had the lowest JAMA score (0.99 ± 0.91). Conclusion Patients are interested in the timeline of recovery, the ability to perform specific activities after surgery, and short- and long-term restrictions following rTSA. Although all patients will benefit from education on performing activities of daily living while abiding by postoperative restrictions, physicians should set preoperative expectations regarding return to activity after rTSA in younger, more active patients. Finally, surgeons should provide patients with physical booklets and online information on their own websites to reduce reliance on low-quality online sources.
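
The JAMA Benchmark score used here and in several of the studies below awards one point for each of four transparency criteria: authorship, attribution, disclosure, and currency. A minimal sketch of that scoring, over hypothetical website records:

```python
# A minimal sketch of JAMA Benchmark scoring: one point each for
# authorship, attribution, disclosure, and currency (0-4 total).
# The website records below are hypothetical.
def jama_score(site: dict) -> int:
    criteria = ("authorship", "attribution", "disclosure", "currency")
    return sum(1 for c in criteria if site.get(c, False))

sites = [
    {"name": "government-page", "authorship": True, "attribution": True,
     "disclosure": True, "currency": True},
    {"name": "medical-practice-page", "authorship": False,
     "attribution": False, "disclosure": True, "currency": False},
]
for site in sites:
    print(site["name"], jama_score(site))  # scores 4 and 1, respectively
```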
Affiliation(s)
- Kyle K. Obana: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Medical Center, New York, NY, USA
- Dane R.G. Lind: Center for Regenerative and Personalized Medicine, Steadman Philippon Research Institute, Vail, CO, USA
- Andrew J. Luzzi: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Medical Center, New York, NY, USA
- Michaela J. O'Connor: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Medical Center, New York, NY, USA
- Matthew R. LeVasseur: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Medical Center, New York, NY, USA
- William N. Levine: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Medical Center, New York, NY, USA

4. Uzun MF, Özer A, Askin A, Atahan MO, Yurdakul G, Gölgelioğlu F. Evaluating the Quality and Readability of Online Health Information on Snapping Hip Syndrome: A Cross-Sectional Analysis. Cureus 2025;17:e79531. PMID: 40144449; PMCID: PMC11937722; DOI: 10.7759/cureus.79531.
Abstract
AIM This study aims to evaluate the quality and readability of online health information related to snapping hip syndrome (SHS). METHODS A cross-sectional analysis was conducted by searching the term "Snapping Hip Syndrome" on Google, Bing, and Yahoo. The first 30 results from each search engine were assessed, and duplicate or irrelevant websites were excluded. The remaining 90 unique web pages were categorized into academic, physician, commercial, medical professional, and non-identified groups. Quality was assessed using the DISCERN instrument, the Journal of the American Medical Association (JAMA) Benchmark Criteria, and HONcode certification, while readability was evaluated with the Flesch-Kincaid Grade Level (FKGL) and Flesch-Kincaid Reading Score (FKRS). An SHS Content Score (SHS-CS) was also developed for a comprehensive content-specific evaluation. RESULTS Academic websites had the highest quality scores, with DISCERN (52.10 ± 6.85), JAMA (3.48 ± 0.50), and SHS-CS (27.85 ± 2.15), but demonstrated lower readability (FKGL: 11.76 ± 0.40; FKRS: 21.45 ± 7.12). Commercial and non-identified websites scored lowest across all quality measures. Strong correlations were found between DISCERN and JAMA (r = 0.932, p < 0.001) and between SHS-CS and DISCERN (r = 0.918, p < 0.001), along with a negative correlation with readability metrics (DISCERN vs. FKRS: r = -0.668, p < 0.001). CONCLUSION The quality of SHS-related online information varies significantly across website types. While academic websites provide the highest-quality content, they often lack readability. HONcode-certified websites exhibited superior quality but did not differ significantly in readability from non-certified sites. Future efforts should focus on improving the readability of high-quality health information.
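
Both readability metrics above are closed-form formulas over word, sentence, and syllable counts. The sketch below gives the standard Flesch Reading Ease and Flesch-Kincaid Grade Level formulas, to which the FKRS and FKGL figures appear to correspond; the counts are illustrative, and real tools differ mainly in how they count syllables.

```python
# Standard Flesch readability formulas; inputs are raw text statistics.
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Example: a dense 100-word passage in 4 sentences with 190 syllables.
print(flesch_reading_ease(100, 4, 190))   # ~20.7 -> "very difficult"
print(flesch_kincaid_grade(100, 4, 190))  # ~16.6 -> college-graduate level
```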
Affiliation(s)
- Mehmet Fatih Uzun: Orthopaedics and Traumatology, Ceylanpınar State Hospital, Şanlıurfa, TUR
- Alper Özer: Orthopedics and Traumatology, Kayseri City Hospital, Kayseri, TUR
- Aydogan Askin: Orthopaedics and Trauma, Antalya Training and Research Hospital, Antalya, TUR
- Mehmet O Atahan: Orthopaedics and Traumatology, Afyonkarahisar State Hospital, Afyonkarahisar, TUR
- Göker Yurdakul: Orthopaedics and Traumatology, Yozgat Bozok University, Yozgat, TUR

5. Nian PP, Saleet J, Magruder M, Wellington IJ, Choueka J, Houten JK, Saleh A, Razi AE, Ng MK. ChatGPT as a Source of Patient Information for Lumbar Spinal Fusion and Laminectomy: A Comparative Analysis Against Google Web Search. Clin Spine Surg 2024;37:E394-E403. PMID: 38409676; DOI: 10.1097/bsd.0000000000001582.
Abstract
STUDY DESIGN Retrospective observational study. OBJECTIVE The objective of this study was to assess the utility of ChatGPT, an artificial intelligence chatbot, in providing patient information on lumbar spinal fusion and lumbar laminectomy, in comparison with the Google search engine. SUMMARY OF BACKGROUND DATA ChatGPT, an artificial intelligence chatbot with seemingly unlimited functionality, may present an alternative to a Google web search for patients seeking information about medical questions. With widespread misinformation and suboptimal quality of online health information, it is imperative to assess ChatGPT as a resource for this purpose. METHODS The first 10 frequently asked questions (FAQs) related to the search terms "lumbar spinal fusion" and "lumbar laminectomy" were extracted from Google and ChatGPT. Responses to shared questions were compared with regard to length and readability, using the Flesch Reading Ease score and Flesch-Kincaid Grade Level. Numerical FAQs from Google were replicated in ChatGPT. RESULTS Two of 10 (20%) questions for both lumbar spinal fusion and lumbar laminectomy were asked similarly between ChatGPT and Google. Compared with Google, ChatGPT's responses were lengthier (340.0 vs. 159.3 words) and of lower readability (Flesch Reading Ease score: 34.0 vs. 58.2; Flesch-Kincaid Grade Level: 11.6 vs. 8.8). Subjectively, we evaluated these responses to be accurate and appropriately nonspecific. Each response concluded with a recommendation to discuss the matter further with a health care provider. Over half of the numerical questions from Google produced a varying or nonnumerical response in ChatGPT. CONCLUSIONS FAQs and responses regarding lumbar spinal fusion and lumbar laminectomy were highly variable between Google and ChatGPT. While ChatGPT may be able to produce relatively accurate responses to select questions, its role remains that of a supplement or starting point to a consultation with a physician, not a replacement, and it should be used with caution until its functionality can be validated.
Affiliation(s)
- Patrick P Nian: Departments of Orthopaedic Surgery, SUNY Downstate Health Sciences University, College of Medicine, Brooklyn, NY
- John K Houten: Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY

6. Obana KK, Law C, Mastroianni MA, Abdelaziz A, Alexander FJ, Ahmad CS, Trofa DP. Patients With Posterior Cruciate Ligament Injuries Obtain Information Regarding Diagnosis, Management, and Recovery from Low-Quality Online Resources. Phys Sportsmed 2024;52:601-607. PMID: 38651524; DOI: 10.1080/00913847.2024.2346462.
Abstract
OBJECTIVES This study investigates the most common online patient questions pertaining to posterior cruciate ligament (PCL) injuries and the quality of the websites providing the information. METHODS Four PCL search queries were entered into Google Web Search. Questions under the 'People also ask' tab were expanded in order, and 100 results for each query were included (400 total). Questions were categorized based on Rothwell's Classification of Questions (Fact, Policy, Value). Websites were categorized by source (Academic, Commercial, Government, Medical Practice, Single Surgeon Personal, Social Media). Website quality was evaluated based on the Journal of the American Medical Association (JAMA) Benchmark Criteria. Pearson's chi-squared test was used to assess categorical data, and Cohen's kappa was used to assess inter-rater reliability. RESULTS Most questions fell into the Rothwell Fact category (54.3%). The most common question topics were Diagnosis/Evaluation (18.0%), Indications/Management (15.5%), and Timeline of Recovery (15.3%); the least common were Technical Details of Procedure (1.5%), Cost (0.5%), and Longevity (0.5%). The most common website sources were Medical Practice (31.8%) and Commercial (24.3%), while the least common were Government (8.5%) and Social Media (1.5%). The average JAMA score was 1.49 ± 1.36. Government websites had the highest JAMA score (3.00 ± 1.26) and constituted 42.5% of all websites with a score of 4/4; Single Surgeon Personal websites had the lowest (0.76 ± 0.87, range 0-2). PubMed articles constituted 70.6% (24/34) of Government websites; of these, 70.8% (17/24) had a JAMA score of 4 and 20.8% (5/24) a score of 3. CONCLUSION Patients search the internet for information regarding the diagnosis, treatment, and recovery of PCL injuries and are less interested in the details of the procedure, cost, and longevity of treatment. The low JAMA score reflects the heterogeneous quality and transparency of online information. Physicians can use this information to help guide patient expectations pre- and postoperatively.
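
The inter-rater check mentioned above can be reproduced with scikit-learn's Cohen's kappa; the two raters' Rothwell labels below are hypothetical.

```python
# A minimal sketch of Cohen's kappa for two raters' category labels.
from sklearn.metrics import cohen_kappa_score

rater_a = ["Fact", "Fact", "Policy", "Value", "Fact", "Policy"]
rater_b = ["Fact", "Fact", "Policy", "Fact", "Fact", "Policy"]

# Agreement on 5 of 6 labels, corrected for chance agreement.
print(cohen_kappa_score(rater_a, rater_b))  # 0.70: substantial agreement
```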
Affiliation(s)
- Kyle K Obana: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Christian Law: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Michael A Mastroianni: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Abed Abdelaziz: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Frank J Alexander: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Christopher S Ahmad: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- David P Trofa: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA

7. Parsa A, Prabhavalkar ON, Saeed S, Nerys-Figueroa J, Carbone A, Domb BG. Best practices on patient education materials in hip surgery based on learnings from major hip centers and societies. J Hip Preserv Surg 2024;11:144-149. PMID: 39070211; PMCID: PMC11272639; DOI: 10.1093/jhps/hnae011.
Abstract
Patient education is important, as it gives patients a better understanding of the risks and benefits of medical and surgical interventions, and developing communication technologies have transformed and enhanced patient access to medical information. The aim of this study was to evaluate the patient education materials (PEMs) on hip surgery available on the websites of major hip societies and centers. PEMs from 11 selected leading hip centers and societies were evaluated with the following assessment tools: the Flesch-Kincaid (FK) readability test, the Flesch Reading Ease formula, the LIDA instrument, and the DISCERN tool. Videos were assessed using the Patient Educational Video Assessment Tool (PEVAT). A total of 69 educational items, comprising 52 text articles (75.4%) and 17 videos (24.6%), were retrieved and evaluated. The median (interquartile range, IQR) FK grade level of the 52 text articles was 10.8 (2.2). The median LIDA score of text articles by center was 45; 60% of all website articles demonstrated high accessibility (LIDA score > 44). The median DISCERN score of text articles by center was 69. All 52 (100%) text articles were rated 'good' quality or higher, and 23.2% (16 of 69) of the items were of excellent quality. The mean PEVAT score for the 17 videos was 25 ± 1.9. Analysis of text and video materials from the 11 leading orthopedic surgery centers and societies demonstrated that, by selecting a reliable source of information from major scientific societies and hip surgery centers, patients can find more accurate information regarding their hip conditions.
Affiliation(s)
- Ali Parsa: American Hip Institute Research Foundation, Chicago, IL 60018, USA; Orthopedic Research Center, Department of Orthopedic Surgery, Mashhad University of Medical Sciences, Mashhad, Iran
- Sheema Saeed: American Hip Institute Research Foundation, Chicago, IL 60018, USA
- Andrew Carbone: American Hip Institute Research Foundation, Chicago, IL 60018, USA
- Benjamin G Domb: American Hip Institute Research Foundation, Chicago, IL 60018, USA; American Hip Institute, Chicago, IL 60018, USA

8. Obana KK, Lind DR, Mastroianni MA, Rondon AJ, Alexander FJ, Levine WN, Ahmad CS. What are our patients asking Google about acromioclavicular joint injuries? Frequently asked online questions and the quality of online resources. JSES Rev Rep Tech 2024;4:175-181. PMID: 38706686; PMCID: PMC11065754; DOI: 10.1016/j.xrrt.2024.02.001.
Abstract
Background Management of acromioclavicular (AC) joint injuries has been an ongoing source of debate, with over 150 variations of surgery described in the literature. Without a consensus on surgical technique, patients are seeking answers to common questions through internet resources. This study investigates the most common online patient questions pertaining to AC joint injuries and the quality of the websites providing the information. Hypothesis (1) Question topics will pertain to surgical indications, pain management, and the success of surgery; (2) the quality and transparency of online information are largely heterogeneous. Methods Three AC joint search queries were entered into Google Web Search. Questions under the "People also ask" tab were expanded in order, and 100 results for each query were included (300 total). Questions were categorized based on Rothwell's classification. Websites were categorized by source. Website quality was evaluated by the Journal of the American Medical Association (JAMA) Benchmark Criteria. Results Most questions fell into the Rothwell Fact category (48.0%). The most common question topics were surgical indications (28.0%), timeline of recovery (13.0%), and diagnosis/evaluation (12.0%); the least common were anatomy/function (3.3%), evaluation of surgery (3.3%), injury comparison (1.0%), and cost (1.0%). The most common website sources were medical practice (44.0%), academic (22.3%), and single surgeon personal (12.3%). The average JAMA score for all websites was 1.0 ± 1.3. Government websites had the highest JAMA score (4.0 ± 0.0) and constituted 45.8% of all websites with a score of 4/4; PubMed articles constituted 63.6% (7/11) of Government websites. Comparatively, medical practice websites had the lowest JAMA score (0.3 ± 0.7, range 0-3). Conclusion Online patient questions on AC joint injuries pertain to surgical indications, timeline of recovery, and diagnosis/evaluation. Government websites and PubMed articles provide the highest-quality, most reliable and up-to-date information but constitute the smallest proportion of resources; medical practice websites are the most visited yet recorded the lowest quality score. Physicians should use this information to answer frequently asked questions, guide patient expectations, and help provide and identify reliable online resources.
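
Rothwell's scheme, used throughout these studies, sorts questions into Fact, Policy, and Value classes. The keyword heuristic below is purely illustrative (the studies categorized questions manually), but it conveys the distinction:

```python
# An illustrative keyword heuristic for Rothwell's three question classes;
# not the authors' method, which relied on manual categorization.
def rothwell_class(question: str) -> str:
    q = question.lower()
    if q.startswith(("should", "do i need", "is it worth")):
        return "Policy"  # asks what ought to be done
    if q.startswith(("is it better", "how good", "how successful")):
        return "Value"   # asks for an evaluation of worth
    return "Fact"        # asks for objective, verifiable information

print(rothwell_class("Should I have surgery for an AC joint injury?"))  # Policy
print(rothwell_class("How successful is AC joint surgery?"))            # Value
print(rothwell_class("What is the AC joint?"))                          # Fact
```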
Affiliation(s)
- Kyle K. Obana: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Dane R.G. Lind: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Michael A. Mastroianni: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Alexander J. Rondon: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Frank J. Alexander: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- William N. Levine: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA
- Christopher S. Ahmad: Department of Orthopaedic Surgery, New York-Presbyterian Hospital/Columbia University Irving Medical Center, New York, NY, USA

9. Hurley ET, Crook BS, Lorentz SG, Danilkowicz RM, Lau BC, Taylor DC, Dickens JF, Anakwenze O, Klifto CS. Evaluation High-Quality of Information from ChatGPT (Artificial Intelligence-Large Language Model) Artificial Intelligence on Shoulder Stabilization Surgery. Arthroscopy 2024;40:726-731.e6. PMID: 37567487; DOI: 10.1016/j.arthro.2023.07.048.
Abstract
PURPOSE To analyze the quality and readability of information regarding shoulder stabilization surgery available from an online AI software (ChatGPT), using standardized scoring systems, and to report on the answers given by the AI. METHODS An open AI model (ChatGPT) was used to answer 23 commonly asked patient questions on shoulder stabilization surgery. The answers were evaluated for medical accuracy, quality, and readability using the JAMA Benchmark criteria, the DISCERN score, and the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL). RESULTS The JAMA Benchmark criteria score was 0, the lowest possible score, indicating that no reliable sources were cited. The DISCERN score was 60, which is considered good. The areas in which the model did not achieve full marks were likewise related to the lack of source material used to compile the answers, as well as some information not fully supported by the literature. The FRES was 26.2, and the FKGL corresponded to that of a college graduate. CONCLUSIONS The answers given on questions relating to shoulder stabilization surgery were generally of high quality, but a high reading level was required to comprehend the information presented, and it is unclear where the answers came from, as no source material was cited. Notably, the ChatGPT software repeatedly referenced the need to discuss these questions with an orthopaedic surgeon, as well as the importance of shared decision making and compliance with surgeon treatment recommendations. CLINICAL RELEVANCE As shoulder instability predominantly affects younger individuals who may use the Internet for information, this study shows what information patients may be getting online.
Affiliation(s)
- Brian C Lau: Duke University, Durham, North Carolina, USA

10. Varghese KJ, Singh SP, Qureshi FM, Shreekumar S, Ramprasad A, Qureshi F. Digital Patient Education on Xanthelasma Palpebrarum: A Content Analysis. Clin Pract 2023;13:1207-1214. PMID: 37887084; PMCID: PMC10605081; DOI: 10.3390/clinpract13050108.
Abstract
Patient education has been transformed by digital media and online repositories, which disseminate information with greater efficiency. In dermatology, this transformation has allowed patients to learn about common cutaneous conditions and improve their health literacy. Xanthelasma palpebrarum is one of the most common cutaneous conditions, yet there is a poor understanding of how digital materials affect health literacy on this condition. Our study aimed to address this paucity of literature using Brief DISCERN, Rothwell's Classification of Questions, and six readability calculations. The findings indicate a poor quality profile (Brief DISCERN < 16) for the digital materials, with readability scores that do not meet grade-level recommendations in the United States. This indicates a need to improve the current body of educational materials used by clinicians for diagnosing and managing xanthelasma palpebrarum.
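
Brief DISCERN condenses the full DISCERN instrument to six items, each scored up to 5 points (maximum 30), with totals below 16 commonly read as poor quality, as in the abstract above. A minimal sketch with hypothetical item scores:

```python
# A minimal sketch of Brief DISCERN scoring: six items, each worth up to
# 5 points, with totals under 16 flagged as poor quality. The item scores
# below are hypothetical.
def brief_discern(scores: list[int]) -> tuple[int, str]:
    assert len(scores) == 6 and all(0 <= s <= 5 for s in scores)
    total = sum(scores)
    return total, "adequate" if total >= 16 else "poor"

print(brief_discern([3, 2, 1, 2, 2, 2]))  # (12, 'poor'), under the 16 cutoff
```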
Affiliation(s)
- Kevin J. Varghese: Department of Biomedical Sciences, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Som P. Singh: Department of Biomedical Sciences, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Fahad M. Qureshi: Department of Biomedical Sciences, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Shreevarsha Shreekumar: Department of Biomedical Sciences, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Aarya Ramprasad: Department of Biomedical Sciences, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Fawad Qureshi: Department of Nephrology, Mayo Clinic Alix School of Medicine, Rochester, MN 55905, USA

11. Phelps CR, Shepard S, Hughes G, Gurule J, Scott J, Raszewski J, Hatic S, Hawkins B, Vassar M. Insights Into Patients Questions Over Bunion Treatments: A Google Study. Foot Ankle Orthop 2023;8:24730114231198837. DOI: 10.1177/24730114231198837.
Abstract
Background Approximately 1 in 4 adults will develop hallux valgus (HV), and up to 80% of adult Internet users reference online sources for health-related information. Given the high prevalence of HV and the numerous treatment options, patients are likely turning to Internet search engines with questions relevant to HV. Using Google's "People also ask" (PAA) feature for frequently asked questions (FAQs), we sought to classify these questions, categorize their sources, and assess the sources' quality and transparency. Methods On October 9, 2022, we searched Google using 4 phrases: "hallux valgus treatment," "hallux valgus surgery," "bunion treatment," and "bunion surgery." The FAQs were classified in accordance with the Rothwell Classification schema, and each source was categorized. Transparency and quality of the sources' information were evaluated with the Journal of the American Medical Association (JAMA) Benchmark tool and Brief DISCERN, respectively. Results Once duplicates and FAQs unrelated to HV were removed, our search returned 299 unique FAQs. The most common question type related to the evaluation of treatment options (79/299, 26.4%), and the most common source type was medical practices (158/299, 52.8%). Nearly two-thirds of the answer sources (184/299, 61.5%) were lacking in transparency. One-way analysis of variance revealed a significant difference in mean Brief DISCERN scores among the 5 source types, F(4) = 54.49 (P < .001), with medical practices averaging the worst score (12.1/30). Conclusion Patients seeking online information about HV treatment options most often search for questions evaluating those options. The source type they encounter most is medical practices, which showed both poor transparency and poor quality. Publishing basic information such as the publication date, authors or reviewers, and references would greatly improve the transparency and quality of online information on HV treatment. Level of Evidence Level V, mechanism-based reasoning.
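
The one-way ANOVA reported above compares mean Brief DISCERN scores across the five source types. A minimal SciPy sketch with hypothetical group scores (five groups give the same F(4) degrees of freedom):

```python
# A minimal sketch of a one-way ANOVA across five source types;
# the per-source Brief DISCERN scores below are hypothetical.
from scipy.stats import f_oneway

medical_practice = [12, 11, 13, 12, 10]
academic         = [20, 22, 19, 21, 23]
government       = [24, 25, 23, 26, 24]
commercial       = [15, 14, 16, 13, 15]
news_media       = [17, 18, 16, 19, 17]

f_stat, p = f_oneway(medical_practice, academic,
                     government, commercial, news_media)
print(f"F(4) = {f_stat:.2f}, p = {p:.4f}")
```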
Affiliation(s)
- Cole R. Phelps: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA
- Samuel Shepard: Department of Orthopaedic Surgery, Kettering Health Network, Dayton, OH, USA
- Griffin Hughes: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA
- Jon Gurule: Department of Orthopaedic Surgery, Oklahoma State University Medical Center, Tulsa, OK, USA
- Jared Scott: Department of Orthopaedic Surgery, Oklahoma State University Medical Center, Tulsa, OK, USA
- Jesse Raszewski: Department of Orthopaedic Surgery, Kettering Health Network, Dayton, OH, USA
- Safet Hatic: Department of Orthopaedic Surgery, Kettering Health Network, Dayton, OH, USA
- Bryan Hawkins: Department of Orthopaedic Surgery, Oklahoma State University Medical Center, Tulsa, OK, USA
- Matt Vassar: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA; Department of Psychiatry and Behavioral Sciences, College of Osteopathic Medicine, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA