1. Uppal H, Garcia D, Abdelmalek G, Farshchian J, Sahai N, Emami A, McGinniss A. Readability of the Most Commonly Used Patient-Reported Outcome Measures in Hand Surgery. J Hand Surg Am 2025;50:568-573. [PMID: 40117436] [DOI: 10.1016/j.jhsa.2025.02.011]
Abstract
PURPOSE Patient-reported outcome measures (PROMs) assess surgical outcomes and patient perspectives on function, symptoms, and quality of life. The readability of PROMs is crucial for ensuring patients can understand and accurately complete them. The National Institutes of Health and American Medical Association recommend that patient materials be written at or below a sixth-grade reading level. We aimed to evaluate whether PROMs identified in the hand literature meet these recommended reading standards. METHODS We conducted a readability analysis of 22 PROMs referenced in the hand literature. Readability was assessed using the Flesch Reading Ease Score (FRES) and the Simple Measure of Gobbledygook (SMOG) Index. Scores were obtained using an online readability calculator. Patient-reported outcome measures meeting a FRES ≥ 80 or SMOG < 7 were considered at a sixth-grade reading level or lower, per the National Institutes of Health and American Medical Association guidelines. RESULTS Across all PROMs, the average FRES was 66 ± 12, and the average SMOG Index was 8 ± 1, corresponding to approximately an eighth- to ninth-grade reading level. Three PROMs met the target readability thresholds: Patient-Reported Outcome Measurement Information System-Physical Function Upper Extremity, Patient Evaluation Measure, and the 6-item Carpal Tunnel Syndrome Symptom Scale. Several PROMs, including the Southampton Dupuytren's Scoring Scheme, Hand Assessment Tool, and Manual Ability Measure 16, demonstrated relatively low readability scores. CONCLUSIONS Most PROMs mentioned in the hand literature exceeded the recommended sixth-grade reading level, potentially affecting patient comprehension and data accuracy.
Although improving readability may enhance patient understanding, altering PROM wording is not straightforward and may require extensive revalidation because changes risk affecting validity and reliability, underscoring the complexity of revising PROMs. CLINICAL RELEVANCE These findings highlight the importance of raising awareness about PROM readability issues. Recognizing these readability challenges may encourage researchers, developers, and journal editors to consider recommended guidelines when proposing, modifying, or evaluating these measures.
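For context, the two metrics used in this study are computed from simple text statistics. A minimal sketch of the standard published FRES and SMOG formulas (illustrative only; the syllable counter is a rough vowel-group heuristic, not the exact tool the authors used):

```python
import math
import re

def fres(words, sentences, syllables):
    """Flesch Reading Ease Score: higher is easier; >= 80 is roughly sixth grade."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def smog(polysyllables, sentences):
    """SMOG grade, from the count of 3+-syllable words; < 7 is roughly sixth grade."""
    return 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291

def count_syllables(word):
    # Crude heuristic: count runs of consecutive vowels (min 1 per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))
```

For example, a 100-word, 10-sentence, 130-syllable passage scores a FRES of 86.705, clearing the ≥ 80 threshold, while its SMOG grade depends only on its polysyllable density.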
Affiliation(s)
- Harjot Uppal: Department of Orthopaedic Surgery, St. Joseph's University Medical Center, Paterson, NJ
- Daniel Garcia: Department of Orthopaedic Surgery, St. Joseph's University Medical Center, Paterson, NJ
- George Abdelmalek: Department of Orthopaedic Surgery, St. Joseph's University Medical Center, Paterson, NJ
- Joseph Farshchian: Department of Orthopaedic Surgery, St. Joseph's University Medical Center, Paterson, NJ
- Nikhil Sahai: Department of Orthopaedic Surgery, St. Joseph's University Medical Center, Paterson, NJ
- Arash Emami: Department of Orthopaedic Surgery, St. Joseph's University Medical Center, Paterson, NJ
- Andrew McGinniss: Department of Orthopaedic Surgery, St. Joseph's University Medical Center, Paterson, NJ
2. Uppal H, Garcia DJ, Kruchten M, Kraeutler MJ, McGinniss A, Emami A, Scillia AJ. Sports Medicine Patient-Reported Outcomes Fail to Meet National Institutes of Health- and American Medical Association-Recommended Reading Levels. Arthroscopy 2025:S0749-8063(25)00150-1. [PMID: 40056942] [DOI: 10.1016/j.arthro.2025.02.029]
Abstract
PURPOSE To evaluate the readability of commonly used patient-reported outcome measures (PROMs) in the sports medicine literature to determine whether they meet the recommended reading levels set by the National Institutes of Health and the American Medical Association (AMA). METHODS A readability analysis was conducted on 26 PROMs commonly used in the sports medicine literature. The primary readability metrics used were the Flesch Reading Ease Score (FRES) and the Simple Measure of Gobbledygook (SMOG) Index. Readability scores were obtained using an online readability calculator and compared against National Institutes of Health and American Medical Association guidelines. An FRES of 80 or greater or an SMOG Index less than 7 was applied as a threshold for a sixth-grade reading level or lower. RESULTS The average FRES and SMOG Index for all PROMs were 65 ± 13 and 9 ± 1, respectively, indicating an eighth- to ninth-grade reading level. Four PROMs met the FRES and SMOG Index threshold for readability: 12-Item Short Form Survey, Pediatric Quality of Life Inventory, Numeric Pain Rating Scale, and Musculoskeletal Function Assessment. The Patient-Specific Functional Scale, Disablement in the Physically Active scale, Upper Extremity Functional Index, Low Back Outcome Score, and International Knee Documentation Committee questionnaire were among the least readable PROMs. CONCLUSIONS Most sports medicine PROMs are written above the recommended sixth-grade reading level. CLINICAL RELEVANCE Ensuring that sports medicine PROMs meet recommended readability standards may improve data accuracy and patient comprehension. By reducing literacy barriers, clinicians can obtain more reliable responses, better evaluate outcomes, and ultimately enhance patient care.
Affiliation(s)
- Harjot Uppal: St. Joseph's University Medical Center, Paterson, New Jersey, U.S.A.
- Daniel J Garcia: Rutgers New Jersey Medical School, Newark, New Jersey, U.S.A.
- Matthew Kruchten: St. Joseph's University Medical Center, Paterson, New Jersey, U.S.A.
- Matthew J Kraeutler: Department of Orthopaedic Surgery & Rehabilitation, Texas Tech University Health Sciences Center, Lubbock, Texas, U.S.A.
- Andrew McGinniss: St. Joseph's University Medical Center, Paterson, New Jersey, U.S.A.
- Arash Emami: St. Joseph's University Medical Center, Paterson, New Jersey, U.S.A.
- Anthony J Scillia: St. Joseph's University Medical Center, Paterson, New Jersey, U.S.A.
3. Hartnett DA, Philips AP, Daniels AH, Blankenhorn BD. Readability of Online Foot and Ankle Surgery Patient Education Materials. Foot Ankle Spec 2025;18:9-18. [PMID: 35934974] [DOI: 10.1177/19386400221116463]
Abstract
Background. Online health education resources are frequently accessed by patients seeking information on orthopaedic conditions and procedures. The objectives of this study were to assess the readability of information provided by the American Orthopaedic Foot and Ankle Society (AOFAS) and compare current levels of readability with previous online material. Methods. This study examined 115 articles classified as "Conditions" or "Treatments" on FootCareMD.org. Readability was assessed using 6 readability assessment tools: Flesch Reading Ease, Flesch-Kincaid Grade Level (FKGL), Gunning Fog Score, Simple Measure of Gobbledygook (SMOG) Index, Coleman-Liau Index, and the Automated Readability Index. Results. The mean readability score across all metrics ranged from 9.1 to 12.1, corresponding to a 9th- to 12th-grade reading level, with a mean FKGL of 9.2 ± SD 1.1 (range: 6.3-15.0). No articles were written below the recommended US sixth-grade reading level, and only 3 articles were at or below an eighth-grade level. Treatment articles had higher mean readability grade levels than condition articles (P = .03). Conclusion. Although the volume and quality of the AOFAS resource website have increased, the readability of its information has worsened since 2008 and remains higher than the recommended reading level for optimal comprehension by the general population. Levels of Evidence: Level IV, retrospective quantitative analysis.
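As a point of reference, the grade-level metrics reported above map word and sentence statistics directly to a US school grade. A sketch of the standard published Flesch-Kincaid Grade Level formula (illustrative only; the study used dedicated assessment tools):

```python
def fkgl(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: approximate US school grade required to read the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```

A 100-word, 10-sentence, 130-syllable passage scores about grade 3.65, well under the sixth-grade target; the articles in this study averaged a FKGL above 9.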
Affiliation(s)
- Davis A Hartnett: The Warren Alpert Medical School of Brown University, Providence, Rhode Island
- Alexander P Philips: The Warren Alpert Medical School of Brown University, Providence, Rhode Island
- Alan H Daniels: Department of Orthopaedic Surgery, Warren Alpert Medical School of Brown University, Providence, Rhode Island
- Brad D Blankenhorn: Department of Orthopaedic Surgery, Warren Alpert Medical School of Brown University, Providence, Rhode Island
4. Aytaç E, Khanzada NK, Ibrahim Y, Khayet M, Hilal N. Reverse Osmosis Membrane Engineering: Multidirectional Analysis Using Bibliometric, Machine Learning, Data, and Text Mining Approaches. Membranes 2024;14:259. [PMID: 39728709] [DOI: 10.3390/membranes14120259]
Abstract
Membrane engineering is a complex field involving the development of the most suitable membrane process for specific purposes and dealing with the design and operation of membrane technologies. This study analyzed 1424 articles on reverse osmosis (RO) membrane engineering from the Scopus database to provide guidance for future studies. The results show that since the first article was published in 1964, the domain has gained popularity, especially since 2009. Thin-film composite (TFC) polymeric material has been the primary focus of RO membrane experts, with 550 articles published on this topic. The use of nanomaterials and polymers in membrane engineering is also high, with 821 articles. Common problems such as fouling, biofouling, and scaling have been a central focus of dedicated work, with 324 articles published on these issues. Wang J. leads in the number of published articles (73), while Gao C. leads in the other metrics. Journal of Membrane Science is the most preferred source for publications on RO membrane engineering and related technologies. Author social network analysis shows that there are five core clusters, and the dominant cluster has 4 researchers. The analysis of sentiment, subjectivity, and emotion indicates that abstracts are positively perceived, objectively written, and emotionally neutral.
Affiliation(s)
- Ersin Aytaç: Department of Structure of Matter, Thermal Physics and Electronics, Faculty of Physics, University Complutense of Madrid, Avda. Complutense s/n, 28040 Madrid, Spain; Department of Environmental Engineering, Zonguldak Bülent Ecevit University, 67100 Zonguldak, Türkiye
- Noman Khalid Khanzada: NYUAD Water Research Center, New York University Abu Dhabi, P.O. Box 129188, Abu Dhabi, United Arab Emirates
- Yazan Ibrahim: NYUAD Water Research Center, New York University Abu Dhabi, P.O. Box 129188, Abu Dhabi, United Arab Emirates; Chemical and Biomolecular Engineering Division, New York University, Brooklyn, NY 11201, USA
- Mohamed Khayet: Department of Structure of Matter, Thermal Physics and Electronics, Faculty of Physics, University Complutense of Madrid, Avda. Complutense s/n, 28040 Madrid, Spain; Madrid Institute for Advanced Studies of Water (IMDEA Water Institute), Avda. Punto Com N° 2, 28805 Madrid, Spain
- Nidal Hilal: NYUAD Water Research Center, New York University Abu Dhabi, P.O. Box 129188, Abu Dhabi, United Arab Emirates
5. Kuo KT, Park KE, Suresh R, Heron MJ, Zhu KJ, Lebbos F, Wilde BM, Sim D, Zamore Z, Chekfa AJ, Tuffaha SH, Elhelali A. Quality, Reliability, and Readability of Peripheral Nerve Intervention Websites for Patients. Hand (N Y) 2024:15589447241299045. [PMID: 39593257] [PMCID: PMC11600414] [DOI: 10.1177/15589447241299045]
Abstract
BACKGROUND This study aims to evaluate the readability, quality, and reliability of online resources about peripheral nerve surgeries to determine if they meet recommended literacy standards. METHODS We analyzed a total of 137 peripheral nerve surgery websites identified by performing a Google search using the search terms "nerve transfer," "nerve repair," "nerve graft," "nerve decompression," "neurolysis," "targeted muscle reinnervation," "regenerative peripheral nerve interface," and "vascularized denervated muscle target." The reading level of the website text was assessed using the Simple Measure of Gobbledygook, Flesch-Kincaid, and Gunning Fog metrics. Quality was evaluated using the DISCERN instrument. Reliability was determined using the Journal of the American Medical Association benchmark criteria. RESULTS All the websites exceeded the sixth-grade reading level, with median readability scores corresponding to a high school reading level or above. Websites for conceptually harder peripheral nerve surgeries, such as targeted muscle reinnervation and regenerative peripheral nerve interface, were generally written at a significantly higher reading level than those for conceptually easier surgeries such as nerve repair and nerve graft. The median quality of the websites was rated as poor, and the median reliability of the websites was rated as low. CONCLUSIONS The findings indicate that current peripheral nerve surgery website texts do not adhere to recommended reading levels and are constructed with poor quality and low reliability. This potentially hinders patients' understanding and utilization of peripheral nerve surgeries, suggesting a need for standardized guidelines to enhance the accessibility of medical information online.
6. Singh S, Jamal A, Qureshi F. Readability Metrics in Patient Education: Where Do We Innovate? Clin Pract 2024;14:2341-2349. [PMID: 39585011] [PMCID: PMC11586978] [DOI: 10.3390/clinpract14060183]
Abstract
The increasing use of digital applications in healthcare has led to a greater need for patient education materials. These materials, often in the form of pamphlets, booklets, and handouts, are designed to supplement physician-patient communication and aim to improve patient outcomes. However, the effectiveness of these materials can be hindered by variations in patient health literacy. Readability, a measure of text comprehension, is a key factor influencing how well patients understand these educational materials. While there has been growing interest in readability assessment in medicine, many studies have demonstrated that digital texts do not frequently meet the recommended sixth-to-eighth grade reading level. The purpose of this opinion article is to review readability from the perspective of studies in pediatric medicine, internal medicine, preventative medicine, and surgery. This article aims to communicate that while readability is important, it tends to not fully capture the complexity of health literacy or effective patient communication. Moreover, a promising avenue to improve readability may be in generative artificial intelligence, as there are currently limited tools with similar effectiveness.
Affiliation(s)
- Som Singh: Department of Internal Medicine, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Aleena Jamal: Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Fawad Qureshi: Department of Nephrology and Hypertension, Mayo Clinic Alix School of Medicine, Rochester, MN 55905, USA
7. Shin A, Paidisetty PS, Chivukula S, Wang LKP, Chen W. Assessing the Readability of Online English and Spanish Resources for Polydactyly and Syndactyly. Ann Plast Surg 2024;93:546-550. [PMID: 39445874] [DOI: 10.1097/sap.0000000000004121]
Abstract
INTRODUCTION Online patient education materials (PEMs) that are difficult to read disproportionately affect patients with low health literacy and educational attainment. Patients may not be fully informed or empowered to engage meaningfully with providers and advocate for their goals. We aim to assess the readability of online PEMs regarding polydactyly and syndactyly. METHODS Google was used to query "polydactyly" and "syndactyly" in English and Spanish. The first 50 results were categorized into institutional (government, medical school, teaching hospital), noninstitutional (private practice, blog), and academic (journal articles, book chapters). Readability scores were generated using the Simple Measure of Gobbledygook and Spanish Simple Measure of Gobbledygook scales. RESULTS All polydactyly PEMs and >95% of syndactyly PEMs exceeded the National Institutes of Health recommended 6th-grade reading level. Altogether, English PEMs had an average reading level of a university freshman and Spanish PEMs had an average reading level of nearly a high school sophomore. For both diagnoses, English PEMs were harder to read than Spanish PEMs overall and when compared across the 3 categories between the 2 languages. Generally, noninstitutional PEMs were more difficult to read than their institutional counterparts. CONCLUSIONS To improve patient education, health literacy, and language equity, online resources for polydactyly and syndactyly should be written at the 6th-grade level. Currently, these PEMs are too advanced, which can make accessing, understanding, and pursuing healthcare decisions more challenging. Understanding health conditions and information is crucial to empower patients, regardless of literacy.
Affiliation(s)
- Ashley Shin: McGovern Medical School at UTHealth, Houston, TX
- Surya Chivukula: John Sealy School of Medicine, The University of Texas Medical Branch, Galveston, TX
- Leonard Kuan-Pei Wang: John Sealy School of Medicine, The University of Texas Medical Branch, Galveston, TX
8. Vallurupalli M, Shah ND, Vyas RM. Optimizing Readability of Patient-Facing Hand Surgery Education Materials Using Chat Generative Pretrained Transformer (ChatGPT) 3.5. J Hand Surg Am 2024;49:986-991. [PMID: 38970600] [DOI: 10.1016/j.jhsa.2024.05.007]
Abstract
PURPOSE To address patient health literacy, the American Medical Association and the National Institutes of Health recommend that the readability of patient education materials should not exceed an eighth grade reading level. However, patient-facing materials often remain above the recommended average reading level. Current online calculators provide readability scores; however, they lack the ability to provide text-specific feedback, which may streamline the process of simplifying patient materials. The purpose of this study was to evaluate Chat Generative Pretrained Transformer (ChatGPT) 3.5 as a tool for optimizing patient-facing hand surgery education materials through reading level analysis and simplification. METHODS The readability of 18 patient-facing hand surgery education materials was assessed with both a traditional online reading-level calculator and ChatGPT 3.5. The original excerpts were then entered into ChatGPT 3.5 and simplified by the artificial intelligence tool. The simplified excerpts were scored by the same calculators. RESULTS The readability scores for the original excerpts from the online calculator and ChatGPT 3.5 were similar. The simplified excerpts' scores were lower than the originals, with a mean of 7.28, less than the maximum recommended 8. CONCLUSIONS The use of ChatGPT 3.5 for the simplification and readability analysis of patient-facing hand surgery materials is efficient and may help facilitate the conveyance of important health information. ChatGPT 3.5 rendered readability scores comparable with traditional readability calculators, in addition to excerpt-specific feedback. It was also able to simplify materials to the recommended grade levels. CLINICAL RELEVANCE By confirming ChatGPT 3.5's ability to assess and simplify patient education materials, this study offers a practical solution for potentially improving patient comprehension, engagement, and health outcomes in clinical settings.
Affiliation(s)
- Medha Vallurupalli: Keck School of Medicine of University of Southern California, Los Angeles, CA; Department of Plastic Surgery, University of California, Irvine, Orange, CA
- Nikhil D Shah: Department of Plastic Surgery, University of California, Irvine, Orange, CA
- Raj M Vyas: Department of Plastic Surgery, University of California, Irvine, Orange, CA; Children's Hospital of Orange County, Orange, CA
9. Lim B, Sen S. A cross-sectional quantitative analysis of the readability and quality of online resources regarding thumb carpometacarpal joint replacement surgery. J Hand Microsurg 2024;16:100119. [PMID: 39234384] [PMCID: PMC11369735] [DOI: 10.1016/j.jham.2024.100119]
Abstract
Background Thumb carpometacarpal (CMC) joint osteoarthritis is a common degenerative condition that affects up to 15% of the population older than 30 years. Poor readability of online health resources has been associated with misinformation, inappropriate care, incorrect self-treatment, worse health outcomes, and increased healthcare resource waste. This study aims to assess the readability and quality of online information regarding thumb CMC joint replacement surgery. Methods The terms "thumb joint replacement surgery", "thumb carpometacarpal joint replacement surgery", "thumb cmc joint replacement surgery", "thumb arthroplasty", "thumb carpometacarpal arthroplasty", and "thumb cmc arthroplasty" were searched in Google and Bing. Readability was determined using the Flesch Reading Ease Score (FRES) and the Flesch-Kincaid Reading Grade Level (FKGL). A FRES >65 or a grade level score of sixth grade and under was considered acceptable. Quality was assessed using the Patient Education Materials Assessment Tool (PEMAT) and a modified DISCERN tool. PEMAT scores below 70 were considered poorly understandable and poorly actionable. Results A total of 34 websites underwent quantitative analysis. The average FRES was 54.60 ± 7.91 (range 30.30-67.80). Only 3 (8.82%) websites had a FRES >65. The average FKGL score was 8.19 ± 1.80 (range 5.60-12.90). Only 3 (8.82%) websites were written at or below a sixth-grade level. The average PEMAT percentage score for understandability and actionability was 76.82 ± 9.43 (range 61.54-93.75) and 36.18 ± 24.12 (range 0.00-60.00), respectively. Although 22 (64.71%) of the websites met the acceptable standard of 70% for understandability, none met the acceptable standard of 70% for actionability. The average total DISCERN score was 32.00 ± 4.29 (range 24.00-42.00). Conclusions Most websites reviewed were written above recommended reading levels.
Most showed acceptable understandability but none showed acceptable actionability. To avoid the negative outcomes of poor patient understanding of online resources, providers of these resources should optimise accessibility to the average reader by using simple words, avoiding jargon, and analysing texts with readability software before publishing the materials online. Websites should also utilise visual aids and provide clearer pre-operative and post-operative instructions.
Affiliation(s)
- Brandon Lim: Trinity College Dublin, School of Medicine, Dublin, Ireland
- Suddhajit Sen: Department of Trauma and Orthopaedic Surgery, Raigmore Hospital, Inverness, UK
10. Chang M, Weiss B, Worrell S, Hsu CH, Ghaderi I. Readability of online patient education material for foregut surgery. Surg Endosc 2024;38:5259-5265. [PMID: 39009725] [DOI: 10.1007/s00464-024-11042-z]
Abstract
INTRODUCTION Health literacy is the ability of individuals to use basic health information and services to make well-informed decisions. Low health literacy among surgical patients has been associated with nonadherence to preoperative and/or discharge instructions as well as poor comprehension of surgery. It likely poses a barrier to patients considering foregut surgery, which requires an understanding of different treatment options and specific diet instructions. The objective of this study was to assess and compare the readability of online patient education materials (PEMs) for foregut surgery. METHODS Using Google, the terms "anti-reflux surgery," "GERD surgery," and "foregut surgery" were searched, and a total of 30 webpages from universities and national organizations were selected. The readability of the text was assessed with seven instruments: Flesch Reading Ease formula (FRE), Gunning Fog (GF), Flesch-Kincaid Grade Level (FKGL), Coleman Liau Index (CL), Simple Measure of Gobbledygook (SMOG), Automated Readability Index (ARI), and Linsear Write Formula (LWF). Mean readability scores were calculated with standard deviations. We performed a qualitative analysis gathering characteristics such as type of information (preoperative or postoperative), organization, use of multimedia, and inclusion of a version in another language. RESULTS The overall average readability of the top PEMs for foregut surgery was 12th grade. There was only one resource at the recommended sixth grade reading level. Nearly half of the PEMs included some form of multimedia. CONCLUSIONS The American Medical Association and National Institutes of Health have recommended that PEMs be written at the 5th-6th grade level. The majority of online PEMs for foregut surgery are above the recommended reading level. This may be a barrier for patients seeking foregut surgery.
Surgeons should be aware of the potential gaps in understanding of their patients to help them make informed decisions and improve overall health outcomes.
Affiliation(s)
- Michelle Chang: College of Medicine, Department of Surgery, University of Arizona, Tucson, USA
- Barry Weiss: College of Medicine, Department of Family and Community Medicine, University of Arizona, Tucson, USA
- Stephanie Worrell: College of Medicine, Department of Surgery, University of Arizona, Tucson, USA
- Chiu-Hsieh Hsu: College of Public Health, Department of Epidemiology and Biostatistics, University of Arizona, Tucson, USA
- Iman Ghaderi: College of Medicine, Department of Surgery, University of Arizona, Tucson, USA
11. Pohl NB, Derector E, Rivlin M, Bachoura A, Tosti R, Kachooei AR, Beredjiklian PK, Fletcher DJ. A quality and readability comparison of artificial intelligence and popular health website education materials for common hand surgery procedures. Hand Surg Rehabil 2024;43:101723. [PMID: 38782361] [DOI: 10.1016/j.hansur.2024.101723]
Abstract
INTRODUCTION ChatGPT and its application in producing patient education materials for orthopedic hand disorders has not been extensively studied. This study evaluated the quality and readability of educational information pertaining to common hand surgeries from patient education websites and information produced by ChatGPT. METHODS Patient education information for four hand surgeries (carpal tunnel release, trigger finger release, Dupuytren's contracture, and ganglion cyst surgery) was extracted from ChatGPT (at a scientific and fourth-grade reading level), WebMD, and Mayo Clinic. In a blinded and randomized fashion, five fellowship-trained orthopaedic hand surgeons evaluated the quality of information using a modified DISCERN criteria. Readability and reading grade level were assessed using Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) equations. RESULTS The Mayo Clinic website scored higher in terms of quality for carpal tunnel release information (p = 0.004). WebMD scored higher for Dupuytren's contracture release (p < 0.001), ganglion cyst surgery (p = 0.003), and overall quality (p < 0.001). ChatGPT - 4th Grade Reading Level, ChatGPT - Scientific Reading Level, WebMD, and Mayo Clinic written materials on average exceeded recommended reading grade levels (4th-6th grade) by at least four grade levels (10th, 14th, 13th, and 11th grade, respectively). CONCLUSIONS ChatGPT provides inferior education materials compared to patient-friendly websites. When prompted to provide more easily read materials, ChatGPT generates less robust information compared to patient-friendly websites and does not adequately simplify the educational information. ChatGPT has potential to improve the quality and readability of patient education materials but currently, patient-friendly websites provide superior quality at similar reading comprehension levels.
Affiliation(s)
- Nicholas B Pohl: Department of Orthopaedic Surgery, Rothman Orthopaedic Institute, Philadelphia, PA, USA
- Evan Derector: Department of Orthopaedic Surgery, Rothman Orthopaedic Institute, Philadelphia, PA, USA
- Michael Rivlin: Department of Orthopaedic Surgery, Rothman Orthopaedic Institute, Philadelphia, PA, USA
- Abdo Bachoura: Department of Orthopaedic Surgery, Rothman Orthopaedics Florida, Orlando, FL, USA
- Rick Tosti: Department of Orthopaedic Surgery, Rothman Orthopaedic Institute, Philadelphia, PA, USA
- Amir R Kachooei: Department of Orthopaedic Surgery, Rothman Orthopaedics Florida, Orlando, FL, USA
- Pedro K Beredjiklian: Department of Orthopaedic Surgery, Rothman Orthopaedic Institute, Philadelphia, PA, USA
- Daniel J Fletcher: Department of Orthopaedic Surgery, Rothman Orthopaedic Institute, Philadelphia, PA, USA
12
|
Browne R, Gull K, Hurley CM, Sugrue RM, O’Sullivan JB. ChatGPT-4 Can Help Hand Surgeons Communicate Better With Patients. JOURNAL OF HAND SURGERY GLOBAL ONLINE 2024; 6:436-438. [PMID: 38817773 PMCID: PMC11133925 DOI: 10.1016/j.jhsg.2024.03.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2024] [Accepted: 03/10/2024] [Indexed: 06/01/2024] Open
Abstract
The American Society for Surgery of the Hand and British Society for Surgery of the Hand produce patient-focused information above the sixth-grade readability recommended by the American Medical Association. To promote health equity, patient-focused content should be aimed at an appropriate level of health literacy. Artificial intelligence-driven large language models may be able to assist hand surgery societies in improving the readability of the information provided to patients. The readability was calculated for all the articles written in English on the American Society for Surgery of the Hand and British Society for Surgery of the Hand websites, in terms of seven of the most common readability formulas. Chat Generative Pre-Trained Transformer version 4 (ChatGPT-4) was then asked to rewrite each article at a sixth-grade readability level. The readability for each response was calculated and compared with the unedited articles. ChatGPT-4 was able to improve the readability across all chosen readability formulas and was successful in achieving a mean sixth-grade readability level in terms of the Flesch-Kincaid Grade Level and Simple Measure of Gobbledygook calculations. It increased the mean Flesch Reading Ease score, with higher scores representing more readable material. This study demonstrated that ChatGPT-4 can be used to improve the readability of patient-focused material in hand surgery. However, ChatGPT-4 is interested primarily in sounding natural, not in seeking truth, and hence each response must be evaluated by the surgeon to ensure that information accuracy is not being sacrificed for the sake of readability by this powerful tool.
Affiliation(s)
- Robert Browne
- Royal College of Surgeons in Ireland, Dublin, Ireland
- Khadija Gull
- Department of Reconstructive and Plastic Surgery, Connolly Hospital Blanchardstown, Dublin, Ireland
- John Barry O’Sullivan
- Royal College of Surgeons in Ireland, Dublin, Ireland
- Department of Reconstructive and Plastic Surgery, Connolly Hospital Blanchardstown, Dublin, Ireland

13
Croen BJ, Abdullah MS, Berns E, Rapaport S, Hahn AK, Barrett CC, Sobel AD. Evaluation of Patient Education Materials From Large-Language Artificial Intelligence Models on Carpal Tunnel Release. Hand (N Y) 2024:15589447241247332. [PMID: 38660977 PMCID: PMC11571324 DOI: 10.1177/15589447241247332] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 04/26/2024]
Abstract
BACKGROUND ChatGPT, an artificial intelligence technology, has the potential to be a useful patient aid, though the accuracy and appropriateness of its responses and recommendations on common hand surgical pathologies and procedures must be understood. Comparing the sources referenced and characteristics of responses from ChatGPT and an established search engine (Google) on carpal tunnel surgery will allow for an understanding of the utility of ChatGPT for patient education. METHODS A Google search of "carpal tunnel release surgery" was performed and "frequently asked questions (FAQs)" were recorded with their answer and source. ChatGPT was then asked to provide answers to the Google FAQs. The FAQs were compared, and answer content was compared using word count, readability analyses, and content source. RESULTS There was 40% concordance among questions asked by the programs. Google answered each question with one source per answer, whereas ChatGPT's answers were created from two sources per answer. ChatGPT's answers were significantly longer than Google's and multiple readability analysis algorithms found ChatGPT responses to be statistically significantly more difficult to read and at a higher grade level than Google's. ChatGPT always recommended "contacting your surgeon." CONCLUSION A comparison of ChatGPT's responses to Google's FAQ responses revealed that ChatGPT's answers were more in-depth, from multiple sources, and from a higher proportion of academic Web sites. However, ChatGPT answers were found to be more difficult to understand. Further study is needed to understand if the differences in the responses between programs correlate to a difference in patient comprehension.
Affiliation(s)
- Brett J. Croen
- Department of Orthopaedic Surgery, Penn Medicine, Philadelphia, PA, USA
- Ellis Berns
- Department of Orthopaedic Surgery, Penn Medicine, Philadelphia, PA, USA
- Sarah Rapaport
- Department of Orthopaedic Surgery, Penn Medicine, Philadelphia, PA, USA
- Alexander K. Hahn
- Department of Orthopaedic Surgery, University of Connecticut, Farmington, USA
- Andrew D. Sobel
- Department of Orthopaedic Surgery, Penn Medicine, Philadelphia, PA, USA

14
Sathyanarayanan S, Paidisetty P, Wang LKP, Gosman A, Williams S, Chen W. Assessing the Readability of Online English and Spanish Language Patient Education Resources Provided by the American Society of Plastic Surgeons, American Society of Aesthetic Plastic Surgeons, and American Society of Reconstructive Microsurgeons. Ann Plast Surg 2024; 92:263-266. [PMID: 38320007 DOI: 10.1097/sap.0000000000003754] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2024]
Abstract
INTRODUCTION The National Institutes of Health recommends that patient education materials (PEMs) be written at the sixth grade level. However, PEMs online are still generally difficult to read. The usefulness of online PEMs depends on their comprehensibility. OBJECTIVES This study assessed the readability of PEMs from national Plastic and Reconstructive Surgery (PRS) organization websites. METHODS Patient education materials were collected from 3 prominent PRS organizations-the American Society of Plastic Surgeons (ASPS), American Society of Aesthetic Plastic Surgeons (ASAPS), and the American Society of Reconstructive Microsurgeons (ASRM). ASPS PEMs were organized into reconstructive and cosmetic groups, and then further subdivided into English and Spanish subgroups. ASAPS and ASRM PEMs provided cosmetic and reconstructive comparison groups to ASPS, respectively. Readability scores were generated using the Simple Measure of Gobbledygook (SMOG) and the Spanish SMOG scales. RESULTS Overall, all PEMs failed to meet readability guidelines. Within ASPS, Spanish PEMs were easier to read than English PEMs (P < 0.001), and cosmetic PEMs were easier to read than reconstructive PEMs (P < 0.05). There was no significant difference between ASPS cosmetic and ASAPS PEMs (P = 0.36), nor between ASPS reconstructive and ASRM PEMs (P = 0.65). ASAPS and ASRM did not have any Spanish PEMs, and 92% of all ASPS PEMs were in English. CONCLUSION Although PRS societies strive to better educate the public on the scope of PRS, PRS ranks low in public understanding of its role in patient care. In addition, Spanish language PEMs from the 3 PRS organizations are severely lacking. Addressing these concerns will make online patient resources more equitable for various patient populations.
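The SMOG scale used in this and the next study is likewise a closed-form formula over sentence and polysyllable counts. A sketch applying McLaughlin's published constants directly to a whole text is below; note two assumptions: the original procedure samples 30 sentences rather than using every sentence, and the syllable counter is the same naive vowel-group heuristic as a pronunciation dictionary would replace. The Spanish SMOG variant uses different constants and is not shown.

```python
import math
import re

def count_syllables(word):
    # Naive vowel-group heuristic (a sketch-level assumption).
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and n > 1:
        n -= 1
    return max(1, n)

def smog_index(text):
    """SMOG grade = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291,
    where a polysyllable is a word of three or more syllables."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291
```

Because the formula depends only on polysyllable density, a text with no three-syllable words bottoms out near grade 3.13, which is why the SMOG < 7 criterion in the head abstract maps to roughly sixth-grade material.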
Affiliation(s)
- Praneet Paidisetty
- University of Texas Health Science Center Houston at McGovern Medical School Houston
- Leonard Kuan-Pei Wang
- John Sealy School of Medicine, The University of Texas Medical Branch, Galveston, TX
- Amanda Gosman
- Division of Plastic Surgery, UC San Diego School of Medicine, San Diego

15
Paidisetty P, Sathyanarayanan S, Kuan-Pei Wang L, Slaughter K, Freet D, Greives M, Chen W. Assessing the Readability of Online Patient Education Resources Related to Neophallus Reconstruction. J Surg Res 2023; 291:296-302. [PMID: 37506428 DOI: 10.1016/j.jss.2023.06.012] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 05/07/2023] [Accepted: 06/20/2023] [Indexed: 07/30/2023]
Abstract
INTRODUCTION Online patient education materials (PEMs) often exceed the recommended 6th grade reading level. This can negatively affect transmasculine patients' understanding of treatment plans, increasing barriers to care and worsening health outcomes and patient satisfaction. This study assessed the readability of online English and Spanish PEMs regarding phalloplasty and urethroplasty. METHODS The English and Spanish terms for phalloplasty and urethroplasty were queried on Google. The first fifty results were grouped into institutional (government, medical school, teaching hospital), noninstitutional (private practice, news channel, blog, etc.), and academic (journal articles, book chapters) categories. Readability scores were generated using the Simple Measure of Gobbledygook and Spanish Simple Measure of Gobbledygook scales. RESULTS All PEMs exceeded recommended reading levels. For both procedures, English PEMs had an average reading level of approximately a university sophomore, and Spanish PEMs of approximately a high school junior. For both procedures, English PEMs were harder to read than Spanish PEMs overall (P < 0.001) and when compared across the three categories between the two languages (P < 0.001). For Spanish urethroplasty PEMs, noninstitutional PEMs were more difficult to read than institutional PEMs (P < 0.05). CONCLUSIONS Online information for phalloplasty and urethroplasty should be revised and/or standardized materials should be created by trans-affirming health-care providers and national organizations in order to more fully educate the public and prospective patients prior to intervention. A well-informed patient population will improve patient decision-making and surgeon-patient communication, ultimately leading to better health outcomes.
Affiliation(s)
- Leonard Kuan-Pei Wang
- John Sealy School of Medicine, The University of Texas Medical Branch, Galveston, Texas
- Kristen Slaughter
- McGovern Medical School at UTHealth, Houston, Texas; Department of Plastic and Reconstructive Surgery, UTHealth, Houston, Texas
- Daniel Freet
- McGovern Medical School at UTHealth, Houston, Texas; Department of Plastic and Reconstructive Surgery, UTHealth, Houston, Texas
- Matthew Greives
- McGovern Medical School at UTHealth, Houston, Texas; Department of Plastic and Reconstructive Surgery, UTHealth, Houston, Texas
- Wendy Chen
- McGovern Medical School at UTHealth, Houston, Texas; Department of Plastic and Reconstructive Surgery, UTHealth, Houston, Texas

16
Chawla S, Ding J, Mazhar L, Khosa F. Entering the Misinformation Age: Quality and Reliability of YouTube for Patient Information on Liposuction. Plast Surg (Oakv) 2023; 31:371-376. [PMID: 37915348 PMCID: PMC10617453 DOI: 10.1177/22925503211064382] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2021] [Revised: 10/01/2021] [Accepted: 10/13/2021] [Indexed: 11/03/2023] Open
Abstract
Background: YouTube is currently the most popular online platform and is increasingly being utilized by patients as a resource on aesthetic surgery. Yet, its content is largely unregulated and this may result in dissemination of unreliable and inaccurate information. The objective of this study was to evaluate the quality and reliability of YouTube liposuction content available to potential patients. Methods: YouTube was screened using the keywords: "liposuction," "lipoplasty," and "body sculpting." The top 50 results for each term were screened for relevance. Videos which met the inclusion criteria were scored using the Global Quality Score (GQS) for educational value and the Journal of the American Medical Association (JAMA) criteria for video reliability. Educational value, reliability, video views, likes, dislikes, duration and publishing date were compared between authorship groups, high/low reliability, and high/low educational value. Results: A total of 150 videos were screened, of which 89 videos met the inclusion criteria. Overall, the videos had low reliability (mean JAMA score = 2.78, SD = 1.15) and low educational value (mean GQS score = 3.55, SD = 1.31). Videos uploaded by physicians accounted for 83.1% of included videos and had a higher mean educational value and reliability score than those by patients. Video views, likes, dislikes, comments, popularity, and length were significantly greater in videos with high reliability. Conclusions: To ensure liposuction-seeking patients are appropriately educated and informed, surgeons and their patients may benefit from an analysis of educational quality and reliability of such online content. Surgeons may wish to discuss online sources of information with patients.
Affiliation(s)
- Sahil Chawla
- Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Jeffrey Ding
- Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Leena Mazhar
- Faculty of Science, University of British Columbia, Vancouver, BC, Canada
- Faisal Khosa
- Department of Radiology, Vancouver General Hospital, Vancouver, BC, Canada

17
Kilgallen W, Earp B, Zhang D. Internet Search Trends for Common Hand Surgery Diagnoses. Cureus 2023; 15:e49755. [PMID: 38161884 PMCID: PMC10757678 DOI: 10.7759/cureus.49755] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/30/2023] [Indexed: 01/03/2024] Open
Abstract
PURPOSE The internet is a common resource for patients seeking health information. Trends in internet search interests for common hand surgery diagnoses and their seasonal variations have not been previously studied. The objectives of this study were (1) to describe the temporal trends in internet search interest for common hand surgery diagnoses in the recent five-year time period and (2) to assess seasonal variations in search term interest. METHODS An internet-based study of internet search term interest of 10 common hand surgery diagnoses was performed using Google Trends (Google, Inc., Mountain View, CA) from January 2017 to December 2021. The 10 diagnoses were "carpal tunnel syndrome," "trigger finger," "thumb arthritis," "ganglion cyst," "de Quervain's tenosynovitis," "lateral epicondylitis," "Dupuytren disease," "distal radius fracture," "finger fracture," and "scaphoid fracture." Analysis of variance (ANOVA) was used to assess for seasonal differences in search interest, and temporal trends were assessed using the two-tailed Mann-Kendall trend test. RESULTS During the study period, there was an increasing trend in search interest for "carpal tunnel syndrome," "trigger finger," "thumb arthritis," "Dupuytren disease," and "finger fracture," both in the United States and worldwide. There was no significant temporal trend for "ganglion cyst," "de Quervain's tenosynovitis," "lateral epicondylitis," and "distal radius fracture." There was no significant temporal trend for "scaphoid fracture" in the United States, but there was a decreasing trend worldwide. There was significant seasonal variation in search term interest for "finger fracture" in the United States, "finger fracture" worldwide, and "scaphoid fracture" in the United States, with popularity peaking in the fall.
CONCLUSIONS Despite growth in global internet usage, internet search interest has remained stagnant for many common hand surgery conditions, which may represent a shifting preference for patients to obtain health information from other resources. Internet search interest for traumatic hand conditions corresponds to seasonal variations in fracture epidemiology and peaks in the fall season.
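The two-tailed Mann-Kendall trend test used above is a nonparametric test on the sign of every pairwise difference in the series, and it needs no external libraries. The sketch below omits the tie correction to the variance (an assumption that repeated values are rare, which may not hold for coarse Google Trends data):

```python
import math

def mann_kendall(series):
    """Two-tailed Mann-Kendall trend test.
    Returns (S statistic, Z score, p value); positive S means an
    increasing trend. Tie correction omitted for brevity."""
    n = len(series)
    # S = sum over all pairs i < j of sign(x_j - x_i)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var = n * (n - 1) * (2 * n + 5) / 18  # variance of S with no ties
    if s > 0:
        z = (s - 1) / math.sqrt(var)      # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed normal p value
    return s, z, p
```

For a strictly increasing series every pair contributes +1, so S reaches its maximum n(n-1)/2 and the test rejects the no-trend null at small p.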
Affiliation(s)
- Brandon Earp
- Orthopedic Surgery, Brigham and Women's Hospital, Boston, USA
- Dafang Zhang
- Orthopedic Surgery, Brigham and Women's Hospital, Boston, USA

18
Crook BS, Park CN, Hurley ET, Richard MJ, Pidgeon TS. Evaluation of Online Artificial Intelligence-Generated Information on Common Hand Procedures. J Hand Surg Am 2023; 48:1122-1127. [PMID: 37690015 DOI: 10.1016/j.jhsa.2023.08.003] [Citation(s) in RCA: 38] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/06/2023] [Revised: 07/25/2023] [Accepted: 08/02/2023] [Indexed: 09/11/2023]
Abstract
PURPOSE The purpose of this study was to analyze the quality and readability of the information generated by an online artificial intelligence (AI) platform regarding 4 common hand surgeries and to compare AI-generated responses to those provided in the informational articles published by the American Society for Surgery of the Hand (ASSH) HandCare website. METHODS An open AI model (ChatGPT) was used to answer questions commonly asked by patients on 4 common hand surgeries (carpal tunnel release, cubital tunnel release, trigger finger release, and distal radius fracture fixation). These answers were evaluated for medical accuracy, quality, and readability and compared to answers derived from the ASSH HandCare materials. RESULTS For the AI model, the Journal of the American Medical Association benchmark criteria score was 0/4, and the DISCERN score was 58 (considered good). The areas in which the AI model lost points were primarily related to the lack of attribution, reliability, and currency of the source material. For AI responses, the mean Flesch-Kincaid Reading Ease score was 34, and the Flesch-Kincaid Grade Level was 15, which is considered to be college level. For comparison, ASSH HandCare materials scored 3/4 on the Journal of the American Medical Association benchmark, 71 on DISCERN (excellent), 9 on Flesch-Kincaid Grade Level, and 60 on Flesch-Kincaid Reading Ease score (eighth/ninth grade level). CONCLUSION An AI language model (ChatGPT) provided generally high-quality answers to frequently asked questions relating to the common hand procedures queried, but it is unclear when or where these answers came from without citations to source material. Furthermore, a high reading level was required to comprehend the information presented. The AI software repeatedly referenced the need to discuss these questions with a surgeon, the importance of shared decision-making and individualized care, and compliance with surgeon treatment recommendations.
CLINICAL RELEVANCE As novel AI applications become increasingly mainstream, hand surgeons must understand the limitations and ramifications these technologies have for patient care.
Affiliation(s)
- Bryan S Crook
- Department of Orthopaedic Surgery, Duke University Hospital, Durham, NC
- Caroline N Park
- Department of Orthopaedic Surgery, Duke University Hospital, Durham, NC
- Eoghan T Hurley
- Department of Orthopaedic Surgery, Duke University Hospital, Durham, NC
- Marc J Richard
- Department of Orthopaedic Surgery, Duke University Hospital, Durham, NC
- Tyler S Pidgeon
- Department of Orthopaedic Surgery, Duke University Hospital, Durham, NC

19
Park A, Sayed F, Robinson P, Elopre L, Ge Y, Li S, Grov C, Sullivan PS. Health Information on Pre-Exposure Prophylaxis From Search Engines and Twitter: Readability Analysis. JMIR Public Health Surveill 2023; 9:e48630. [PMID: 37665621 PMCID: PMC10507523 DOI: 10.2196/48630] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Revised: 06/21/2023] [Accepted: 06/26/2023] [Indexed: 09/05/2023] Open
Abstract
BACKGROUND Pre-exposure prophylaxis (PrEP) is proven to prevent HIV infection. However, PrEP uptake to date has been limited and inequitable. Analyzing the readability of existing PrEP-related information is important to understand the potential impact of available PrEP information on PrEP uptake and identify opportunities to improve PrEP-related education and communication. OBJECTIVE We examined the readability of web-based PrEP information identified using search engines and on Twitter. We investigated the readability of web-based PrEP documents, stratified by how the PrEP document was obtained on the web, information source, document format and communication method, PrEP modality, and intended audience. METHODS Web-based PrEP information in English was systematically identified using search engines and the Twitter API. We manually verified and categorized results and described the method used to obtain information, information source, document format and communication method, PrEP modality, and intended audience. Documents were converted to plain text for the analysis and readability of the collected documents was assessed using 4 readability indices. We conducted pairwise comparisons of readability based on how the PrEP document was obtained on the web, information source, document format, communication method, PrEP modality, and intended audience, then adjusted for multiple comparisons. RESULTS A total of 463 documents were identified. Overall, the readability of web-based PrEP information was at a higher level (10.2-grade reading level) than what is recommended for health information provided to the general public (ninth-grade reading level, as suggested by the Department of Health and Human Services). Brochures (n=33, 7% of all identified resources) were the only type of PrEP materials that achieved the target of ninth-grade reading level. 
CONCLUSIONS Web-based PrEP information is often written at a complex level for potential and current PrEP users to understand. This may hinder PrEP uptake for some people who would benefit from it. The readability of PrEP-related information found on the web should be improved to align more closely with health communication guidelines for reading level to improve access to this important health information, facilitate informed decisions by those with a need for PrEP, and realize national prevention goals for PrEP uptake and reducing new HIV infections in the United States.
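The pairwise readability comparisons above were "adjusted for multiple comparisons," but the abstract does not name the correction. Purely as an illustration, a Holm step-down adjustment (one common choice for a family of pairwise tests, assumed here) can be sketched as:

```python
def holm_adjust(pvals):
    """Holm step-down adjustment for a family of p values.
    (Assumed method for illustration; the study does not name its
    correction procedure.)"""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running = 0.0
    for rank, idx in enumerate(order):
        # Multiply the k-th smallest p value by (m - k + 1) and enforce
        # monotonicity so adjusted values never decrease, capping at 1.
        running = max(running, (m - rank) * pvals[idx])
        adjusted[idx] = min(1.0, running)
    return adjusted
```

Holm is uniformly more powerful than a plain Bonferroni correction while still controlling the family-wise error rate, which makes it a reasonable default when the true method is unknown.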
Affiliation(s)
- Albert Park
- Department of Software and Information Systems, University of North Carolina Charlotte, Charlotte, NC, United States
- Fatima Sayed
- Department of Software and Information Systems, University of North Carolina Charlotte, Charlotte, NC, United States
- Patrick Robinson
- Department of Public Health Sciences, University of North Carolina Charlotte, Charlotte, NC, United States
- Latesha Elopre
- Division of Infectious Disease, University of Alabama at Birmingham, Birmingham, AL, United States
- Yaorong Ge
- Department of Software and Information Systems, University of North Carolina Charlotte, Charlotte, NC, United States
- Shaoyu Li
- Department of Mathematics and Statistics, University of North Carolina Charlotte, Charlotte, NC, United States
- Christian Grov
- Department of Community Health and Social Sciences, City University of New York, New York City, NY, United States

20
Ahmadzadeh K, Bahrami M, Zare-Farashbandi F, Adibi P, Boroumand MA, Rahimi A. Patient education information material assessment criteria: A scoping review. Health Info Libr J 2023; 40:3-28. [PMID: 36637218 DOI: 10.1111/hir.12467] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2021] [Revised: 10/13/2022] [Accepted: 11/03/2022] [Indexed: 01/14/2023]
Abstract
BACKGROUND Patient education information material (PEIM) is an essential component of patient education programs in increasing patients' ability to cope with their diseases. Therefore, it is essential to consider the criteria that will be used to prepare and evaluate these resources. OBJECTIVE This paper aims to identify these criteria and recognize the tools or methods used to evaluate them. METHODS National and international databases and indexing banks, including PubMed, Scopus, Web of Science, ProQuest, the Cochrane Library, Magiran, SID and ISC, were searched for this review. Original or review articles, theses, short surveys, and conference papers published between January 1990 and June 2022 were included. RESULTS Overall, 4688 documents were retrieved, of which 298 documents met the inclusion criteria. The criteria were grouped into 24 overarching criteria. The most frequently used criteria were readability, quality, suitability, comprehensibility and understandability. CONCLUSION This review has provided empirical evidence to identify criteria, tools, techniques or methods for developing or evaluating a PEIM. The authors suggest that developing a comprehensive tool based on these findings is critical for evaluating the overall efficiency of PEIM using effective criteria.
Affiliation(s)
- Khadijeh Ahmadzadeh
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Student Research Committee, Sirjan School of Medical Sciences, Sirjan, Iran
- Masoud Bahrami
- Department of Adult Health Nursing, Nursing and Midwifery Care Research Center, School of Nursing and Midwifery, Isfahan University of Medical Sciences, Isfahan, Iran
- Firoozeh Zare-Farashbandi
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Payman Adibi
- Gastroenterology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Mohammad Ali Boroumand
- Department of Medical Library and Information Sciences, School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran
- Alireza Rahimi
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran

21
Flesch-Kincaid Measure as Proxy of Socio-Economic Status on Twitter. INT J SEMANT WEB INF 2022. [DOI: 10.4018/ijswis.297037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Social media gives researchers an invaluable opportunity to gain insight into different facets of human life. Researchers put great emphasis on categorizing the socioeconomic status (SES) of individuals to help predict various findings of interest. Forums, hashtags, and chatrooms are common tools for grouping conversations. Crowdsourcing involves gathering intelligence to group an online user community based on common interest. This paper provides a mechanism to look at writings on social media and group them based on the authors' academic background. We analyzed online forum posts from various geographical regions in the US and characterized the readability scores of users. Specifically, we collected 10,000 tweets from members of the US Senate and computed the Flesch-Kincaid readability score. Comparing the Senators' tweets to those of average internet users, we note that 1) the readability of US Senators' tweets is much higher, and 2) the large gap between average citizens' scores and those of US Senators is attributable to the wide spectrum of academic attainment.