1
Aponte J, Tejada K, Figueroa K. Readability Level of Spanish Language Online Health Information: A Systematic Review. Hisp Health Care Int 2025;23:107-122. [PMID: 39360353] [DOI: 10.1177/15404153241286720]
Abstract
Introduction: Because online health information in Spanish is limited and raising health literacy among Spanish-speaking people is critical, it is essential to assess the readability of Spanish-language material. Method: This systematic review included all articles published up to January 3, 2024, identified through the CINAHL, MEDLINE, and PubMed databases. The objective was to synthesize the body of knowledge from published articles on the readability levels of Spanish-language, web-based health information intended for lay audiences. Results: The final review included 27 articles. Across these articles, 11 tools were used to assess Spanish-language text. Of these, INFLESZ was the most frequently used, while the FRY formula, Flesch-Szigriszt Index, and Flesch Formula Index were the least used. Most materials (85.2%) reported readability levels of online Spanish information above the 8th-grade reading level. Conclusions: The findings show a lack of internet-based Spanish-language health information and materials at a recommended (e.g., 5th- to 8th-grade) reading level. More research is needed to determine which readability tests are most accurate for calculating the readability of Spanish web health information.
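The Spanish-language tools named in this abstract are simple linear formulas over average word and sentence length. As a rough sketch from precomputed counts — the coefficients and INFLESZ band cut-points below are the commonly cited ones (Szigriszt-Pazos; Barrio-Cantalejo's INFLESZ scale) and should be verified against the original Spanish-language sources before reuse:

```python
def flesch_szigriszt(words, sentences, syllables):
    """Flesch-Szigriszt index (IFSZ), the Spanish adaptation of Flesch Reading Ease."""
    return 206.835 - 62.3 * (syllables / words) - (words / sentences)

def inflesz_band(ifsz):
    """Map an IFSZ score onto the five-band INFLESZ difficulty scale."""
    if ifsz > 80:
        return "muy facil"        # very easy
    if ifsz > 65:
        return "bastante facil"   # fairly easy
    if ifsz > 55:
        return "normal"
    if ifsz > 40:
        return "algo dificil"     # somewhat difficult
    return "muy dificil"          # very difficult
```

For example, a text averaging 1.9 syllables per word and 20 words per sentence (100 words, 5 sentences, 190 syllables) scores IFSZ ≈ 68.5, landing in the "bastante facil" band.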
Affiliation(s)
- Judith Aponte
- Nursing Department, Hunter College, New York, New York, USA
- CUNY Institute of Health Equity, New York, USA
- Karen Tejada
- Fort Tryon Center for Rehabilitation and Nursing, New York, New York, USA
2
Ellison IE, Oslock WM, Abdullah A, Wood L, Thirumalai M, English N, Jones BA, Hollis R, Rubyan M, Chu DI. De novo generation of colorectal patient educational materials using large language models: Prompt engineering key to improved readability. Surgery 2025;180:109024. [PMID: 39756334] [PMCID: PMC11936715] [DOI: 10.1016/j.surg.2024.109024]
Abstract
BACKGROUND Improving patient education has been shown to improve clinical outcomes and reduce disparities, though such efforts can be labor intensive. Large language models may serve as an accessible method to improve patient educational material. The aim of this study was to compare readability between existing educational materials and those generated by large language models. METHODS Baseline colorectal surgery educational materials were gathered from a large academic institution (n = 52). Three prompts were entered into Perplexity and ChatGPT 3.5 for each topic: a Basic prompt that simply requested patient educational information on the topic, an Iterative prompt that repeatedly asked for the information to be made more health literate, and a Metric-based prompt that requested a sixth-grade reading level, short sentences, and short words. Flesch-Kincaid Grade Level ("Grade Level"), Flesch-Kincaid Reading Ease ("Ease"), and Modified Grade Level scores were calculated for all materials, and unpaired t tests were used to compare mean scores between baseline documents and those generated by the artificial intelligence platforms. RESULTS Overall, existing materials were longer than materials generated by the large language models across categories and prompts: 863-956 words vs 170-265 (ChatGPT) and 220-313 (Perplexity), all P < .01. Baseline materials did not meet sixth-grade readability guidelines by grade level (Grade Level 7.0-9.8 and Modified Grade Level 9.6-11.5) or ease of readability (Ease 53.1-65.0). Readability of materials generated by a large language model varied by prompt and platform. Overall, ChatGPT materials were more readable than baseline materials with the Metric-based prompt: Grade Level 5.2 vs 8.1, Modified Grade Level 7.3 vs 10.3, and Ease 70.5 vs 60.4, all P < .01. In contrast, Perplexity-generated materials were significantly less readable, except for those generated with the Metric-based prompt, which did not differ statistically. CONCLUSION Neither existing materials nor the majority of educational materials created by large language models met readability recommendations. The exception was ChatGPT materials generated with a Metric-based prompt, which consistently improved readability scores from baseline and met recommendations in terms of the average Grade Level score. The variability in performance highlights the importance of the prompt used with large language models.
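For reference, the two Flesch-Kincaid measures used throughout these studies are fixed linear formulas over words-per-sentence and syllables-per-word. A minimal sketch from precomputed counts (a production tool would also need a syllable counter, which is the error-prone part):

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher is easier; 60-70 is roughly 'plain English'."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: approximate US school grade needed to read the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```

For example, a passage of 100 words in 5 sentences with 150 syllables scores Ease ≈ 59.6 and Grade ≈ 9.9, well above the sixth-grade target these studies cite.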
Affiliation(s)
- India E Ellison
- Department of Surgery, University of Alabama at Birmingham, AL
- Wendelyn M Oslock
- Department of Surgery, University of Alabama at Birmingham, AL; Department of Quality, Birmingham Veterans Affairs Medical Center, AL. https://www.twitter.com/WendelynOslock
- Abiha Abdullah
- Trauma and Transfusion Department, University of Pittsburgh Medical College, PA. https://www.twitter.com/abihaabdullah7
- Lauren Wood
- Department of Surgery, University of Alabama at Birmingham, AL
- Nathan English
- Department of Surgery, University of Alabama at Birmingham, AL; Department of General Surgery, University of Cape Town, WC, South Africa
- Bayley A Jones
- Department of Surgery, University of Alabama at Birmingham, AL; Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX. https://www.twitter.com/bayley_jones
- Robert Hollis
- Department of Surgery, University of Alabama at Birmingham, AL. https://www.twitter.com/rhhollis
- Michael Rubyan
- University of Michigan School of Public Health, Ann Arbor, MI
- Daniel I Chu
- Department of Surgery, University of Alabama at Birmingham, AL.
3
Akkan H, Seyyar GK. Improving readability in AI-generated medical information on fragility fractures: the role of prompt wording on ChatGPT's responses. Osteoporos Int 2025;36:403-410. [PMID: 39777491] [DOI: 10.1007/s00198-024-07358-0]
Abstract
Understanding how the questions used when interacting with chatbots impact the readability of the generated text is essential for effective health communication. Using descriptive queries instead of just keywords during interaction with ChatGPT results in more readable and understandable answers about fragility fractures. PURPOSE Large language models like ChatGPT can enhance patients' understanding of medical information, making health decisions more accessible. Complex terms, such as "fragility fracture," can confuse patients, so presenting its medical content in plain language is crucial. This study explored whether conversational prompts improve readability and understanding compared to keyword-based prompts when generating patient-centered health information on fragility fractures. METHODS The 32 most frequently searched keywords related to "fragility fracture" and "osteoporotic fracture" were identified using Google Trends. From this set, 24 keywords were selected based on relevance and entered sequentially into ChatGPT. Each keyword was tested with two prompt types: (1) plain language with keywords embedded and (2) keywords alone. The readability and comprehensibility of the AI-generated responses were assessed using the Flesch-Kincaid reading ease (FKRE) and Flesch-Kincaid grade level (FKGL), respectively. The scores of the responses were compared using the Mann-Whitney U test. RESULTS The FKRE scores indicated significantly higher readability with plain language prompts (median 34.35) compared to keyword-only prompts (median 23.60). Similarly, the FKGL indicated a lower grade level for plain language prompts (median 12.05) versus keyword-only (median 14.50), with both differences achieving statistical significance. CONCLUSION Our findings suggest that using conversational prompts can enhance the readability of AI-generated medical information on fragility fractures. Clinicians and content creators should consider this approach when using AI for patient education to optimize comprehension.
Affiliation(s)
- Hakan Akkan
- Department of Therapy and Rehabilitation, Tavsanli Vocational School of Health Services, Kutahya Health Sciences University, Yeni Mah. Sehit Gaffar Okkan Cd. No: 2 43300, Tavsanli, Kutahya, Turkey.
- Gulce Kallem Seyyar
- Division of Occupational Therapy, Faculty of Health Sciences, Kutahya Health Sciences University, Kutahya, Turkey
4
García-Álvarez JM, García-Sánchez A. Readability of Informed Consent Forms for Medical and Surgical Clinical Procedures: A Systematic Review. Clin Pract 2025;15:26. [PMID: 39996696] [PMCID: PMC11854161] [DOI: 10.3390/clinpract15020026]
Abstract
Background/Objectives: The wording of informed consent forms for medical or surgical clinical procedures can be difficult to read and comprehend, making it hard for patients to make informed decisions. The objective of this study was to analyze the readability of informed consent forms for medical or surgical clinical procedures. Methods: A systematic review was performed according to the PRISMA statement using the PubMed, Embase, and Google Scholar databases. Primary studies analyzing the readability of informed consent forms using mathematical formulas, published in any country or language during the last 10 years, were selected. The results were synthesized according to the degree of reading difficulty to allow comparison across studies. Study selection was performed independently by the reviewers to avoid the risk of selection bias. Results: Of the 664 studies identified, 26 were selected, together analyzing the readability of 13,940 forms. Of these forms, 76.3% had poor readability. Of the six languages analyzed, only English, Spanish, and Turkish had adapted readability indexes. Flesch Reading Ease was the most widely used readability index, although it would be more reliable to use language-specific indices. Conclusions: Most of the analyzed informed consent forms had poor readability, making them difficult for a large percentage of patients to read and comprehend. These forms need to be modified to make them easier to read and comprehend, adapted to each specific language, and complemented by qualitative studies to establish their actual readability for each specific population.
Affiliation(s)
- José Manuel García-Álvarez
- Resident Intern of Family and Community Medicine, Hospital Comarcal del Noroeste, 30400 Caravaca, Murcia, Spain
- Health Sciences Program, Catholic University of Murcia (UCAM), 30107 Guadalupe, Murcia, Spain
- Alfonso García-Sánchez
- Medical Specialist in Anesthesiology and Critical Care, Anesthesiology and Critical Care Department, Hospital de la Vega Lorenzo Guirao, 30530 Cieza, Spain
- Clinical Simulation Instructor, Faculty of Nursing, Catholic University of Murcia (UCAM), 30107 Guadalupe, Murcia, Spain
5
Alassaf MS, Bakkari A, Saleh J, Habeeb A, Aljuhani BF, Qazali AA, Alqutaibi AY. An infodemiologic review of internet resources on dental hypersensitivity: A quality and readability assessment. PLoS One 2025;20:e0312832. [PMID: 39854429] [PMCID: PMC11760580] [DOI: 10.1371/journal.pone.0312832]
Abstract
BACKGROUND This study aimed to investigate the quality and readability of online English-language health information about dental sensitivity and how patients evaluate and utilize this web-based information. METHODS Health information was obtained from three search engines and assessed for credibility and readability. We conducted searches in "incognito" mode to reduce the possibility of bias. Quality assessment utilized JAMA benchmarks, the DISCERN tool, and HONcode. Readability was analyzed using the SMOG, FRE, and FKGL indices. RESULTS Of 600 websites, 90 were included, with 62.2% affiliated with dental or medical centers; among these websites, 80% related exclusively to dental implant treatments. Regarding JAMA benchmarks, currency was the criterion most commonly achieved, and 87.8% of websites fell into the "moderate quality" category. Word and sentence counts ranged widely, with means of 815.7 (±435.4) and 60.2 (±33.3), respectively. FKGL scores averaged 8.6 (±1.6), SMOG scores averaged 7.6 (±1.1), and the FRE scale showed a mean of 58.28 (±9.1), with "fairly difficult" the most common category. CONCLUSION The overall evaluation using DISCERN indicated a moderate quality level, with a notable absence of referencing. JAMA benchmarks revealed general non-adherence among websites, as none met all four criteria. Only one website was HONcode certified, suggesting a lack of reliable sources for the accuracy of web-based health information. Readability assessments showed varying results, with the majority rated "fairly difficult". Although readability did not significantly differ across affiliations, word and sentence counts varied widely between them.
Affiliation(s)
- Muath Saad Alassaf
- Department of Oral and Maxillofacial Surgery, King Fahad Hospital, Madina, Saudi Arabia
- Ayman Bakkari
- College of Dentistry, Taibah University, Medina, Saudi Arabia
- Jehad Saleh
- College of Dentistry, Taibah University, Medina, Saudi Arabia
- Ahmad A. Qazali
- Substitutive Dental Sciences Department (Prosthodontics), College of Dentistry, Taibah University, Al Madinah, Saudi Arabia
- Ahmed Yaseen Alqutaibi
- Substitutive Dental Sciences Department (Prosthodontics), College of Dentistry, Taibah University, Al Madinah, Saudi Arabia
- Department of Prosthodontics, College of Dentistry, Ibb University, Ibb, Yemen
6
Singh S, Errampalli E, Errampalli N, Miran MS. Enhancing Patient Education on Cardiovascular Rehabilitation with Large Language Models. Mo Med 2025;122:67-71. [PMID: 39958590] [PMCID: PMC11827661]
Abstract
Introduction Barriers exist that keep individuals from adhering to cardiovascular rehabilitation programs. A key driver of patient adherence is appropriately educating patients. A growing education tool is the use of large language models to answer patient questions. Methods The primary objective of this study was to evaluate the readability of educational responses provided by large language models to questions about cardiac rehabilitation, using Gunning Fog, Flesch-Kincaid, and Flesch Reading Ease scores. Results The findings of this study demonstrate that the mean Gunning Fog, Flesch-Kincaid, and Flesch Reading Ease scores do not meet US grade reading level recommendations across three models: ChatGPT 3.5, Copilot, and Gemini. The Gemini and Copilot models demonstrated greater ease of readability compared to ChatGPT 3.5. Conclusions Large language models could serve as educational tools on cardiovascular rehabilitation, but the readability of their responses must improve for them to effectively educate patients.
Affiliation(s)
- Som Singh
- University of Missouri-Kansas City, Kansas City, Missouri, and the University of Texas Health Science Center, Houston, Texas
- Nathan Errampalli
- George Washington University School of Medicine and Health Sciences, Washington, DC
7
Yang N, Wu X, Kim CS. College Students' Preference and Information Comprehension of Different Forms of Diabetes Education Materials Under Different Reading Scenarios. Am J Health Educ 2024:1-10. [DOI: 10.1080/19325037.2024.2396596]
8
Ko TK, Tan DJY, Fan KS. Evaluation of the Quality and Readability of Web-Based Information Regarding Foreign Bodies of the Ear, Nose, and Throat: Qualitative Content Analysis. JMIR Form Res 2024;8:e55535. [PMID: 39145998] [PMCID: PMC11362703] [DOI: 10.2196/55535]
Abstract
BACKGROUND Foreign body (FB) inhalation, ingestion, and insertion account for 11% of emergency admissions for ear, nose, and throat conditions. Children are disproportionately affected, and urgent intervention may be needed to maintain airway patency and prevent blood vessel occlusion. High-quality, readable online information could help reduce poor outcomes from FBs. OBJECTIVE We aim to evaluate the quality and readability of available online health information relating to FBs. METHODS In total, 6 search phrases were queried using the Google search engine. For each search term, the first 30 results were captured. Websites in the English language and displaying health information were included. The provider and country of origin were recorded. The modified 36-item Ensuring Quality Information for Patients tool was used to assess information quality. Readability was assessed using a combination of tools: Flesch Reading Ease score, Flesch-Kincaid Grade Level, Gunning-Fog Index, and Simple Measure of Gobbledygook. RESULTS After the removal of duplicates, 73 websites were assessed, with the majority originating from the United States (n=46, 63%). Overall, the content was of moderate quality, with a median Ensuring Quality Information for Patients score of 21 (IQR 18-25, maximum 29) out of a maximum possible score of 36. Precautionary measures were not mentioned on 41% (n=30) of websites, and 30% (n=22) did not identify disk batteries as a risky FB. Red flags necessitating urgent care were identified on 95% (n=69) of websites, with 89% (n=65) advising patients to seek medical attention and 38% (n=28) advising on safe FB removal. Readability scores (Flesch Reading Ease score=12.4, Flesch-Kincaid Grade Level=6.2, Gunning-Fog Index=6.5, and Simple Measure of Gobbledygook=5.9 years) showed most websites (56%) were above the recommended sixth-grade level. CONCLUSIONS The current quality and readability of information regarding FBs is inadequate. More than half of the websites were above the recommended sixth-grade reading level, and important information regarding high-risk FBs such as disk batteries and magnets was frequently excluded. Strategies should be developed to improve access to high-quality information that informs patients and parents about risks and when to seek medical help. Strategies to promote high-quality websites in search results also have the potential to improve outcomes.
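The two other indices this study combines, SMOG and the Gunning-Fog Index, hinge on counts of polysyllabic ("complex") words rather than raw syllable totals. A minimal sketch from precomputed counts, using the standard published coefficients:

```python
import math

def smog_grade(polysyllables, sentences):
    """SMOG grade: estimated years of education needed, from words of 3+ syllables."""
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

def gunning_fog(words, sentences, complex_words):
    """Gunning Fog Index: 0.4 * (average sentence length + percent complex words)."""
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))
```

For example, a sample of 100 words in 5 sentences with 10 complex words yields a Fog index of 12.0; note that SMOG is normalized to a 30-sentence sample, which is why short web pages can produce unstable scores.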
Affiliation(s)
- Tsz Ki Ko
- Department of Surgery, Royal Stoke Hospital, Stoke, United Kingdom
- Ka Siu Fan
- Department of Surgery, Royal Surrey County Hospital, Guildford, United Kingdom
9
Singh SP, Jamal A, Qureshi F, Zaidi R, Qureshi F. Leveraging Generative Artificial Intelligence Models in Patient Education on Inferior Vena Cava Filters. Clin Pract 2024;14:1507-1514. [PMID: 39194925] [DOI: 10.3390/clinpract14040121]
Abstract
Background: Inferior Vena Cava (IVC) filters have become an advantageous treatment modality for patients with venous thromboembolism. As the use of these filters continues to grow, it is imperative for providers to educate patients in a comprehensive yet understandable manner. Likewise, generative artificial intelligence models are a growing tool in patient education, but little is known about the readability of their output on IVC filters. Methods: This study aimed to determine the Flesch Reading Ease (FRE), Flesch-Kincaid, and Gunning Fog readability of IVC filter patient educational materials generated by these artificial intelligence models. Results: The ChatGPT cohort had the highest mean Gunning Fog score at 17.76 ± 1.62 and the Copilot cohort the lowest at 11.58 ± 1.55. The difference between groups in Flesch Reading Ease scores (p = 8.70 × 10⁻⁸) was statistically significant, albeit with a priori power found to be low at 0.392. Conclusions: The results of this study indicate that the answers generated by the Microsoft Copilot cohort offer a greater degree of readability than the ChatGPT cohort regarding IVC filters. Nevertheless, the mean Flesch-Kincaid readability for both cohorts does not meet the recommended U.S. grade reading levels.
Affiliation(s)
- Som P Singh
- Department of Internal Medicine, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Aleena Jamal
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Farah Qureshi
- Lake Erie College of Osteopathic Medicine, Erie, PA 16509, USA
- Rohma Zaidi
- Department of Internal Medicine, University of Missouri Kansas City School of Medicine, Kansas City, MO 64108, USA
- Fawad Qureshi
- Department of Nephrology and Hypertension, Mayo Clinic Alix School of Medicine, Rochester, MN 55905, USA
10
Ahn AB, Kulhari S, Karimi A, Sundararajan S, Sajatovic M. Readability of patient education material in stroke: a systematic literature review. Top Stroke Rehabil 2024;31:345-360. [PMID: 37724783] [DOI: 10.1080/10749357.2023.2259177]
Abstract
BACKGROUND Stroke education materials are crucial for the recovery of stroke patients, but their effectiveness depends on their readability. The American Medical Association (AMA) recommends that patient education materials be written at a sixth-grade level. Studies show existing paper and online materials exceed patients' reading levels and undermine their health literacy. Low health literacy among stroke patients is associated with worse health outcomes and decreased efficacy of stroke rehabilitation. OBJECTIVE We reviewed the readability of paper (i.e., brochures, factsheets, posters) and online (i.e., American Stroke Association, Google, Yahoo!) stroke patient education materials, the reading level of stroke patients, the accessibility of online health information, and patients' perceptions of gaps in stroke information, and provided recommendations for improving readability. METHOD A PRISMA-guided systematic literature review was conducted using the PubMed, Google Scholar, and EbscoHost databases and the search terms "stroke", "readability of stroke patient education", and "stroke readability" to discover English-language articles. A total of 12 articles were reviewed. RESULTS SMOG scores for paper and online material ranged from grade level 11.0-12.0 and 7.8-13.95, respectively. The reading level of stroke patients ranged from 3rd grade to 9th grade or above. Accessibility of online stroke information was high. Structured patient interviews illustrated gaps in patient education materials and difficulty with comprehension. CONCLUSION Paper and online patient education materials exceed the reading level of stroke patients and the AMA-recommended 6th-grade level. Due to limitations in readability, stroke patients are not being adequately educated about their condition.
Affiliation(s)
- Aaron B Ahn
- Department of Neurology, University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, OH, USA
- Sajal Kulhari
- Department of Neurology, University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, OH, USA
- Amir Karimi
- Department of Neurology, University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, OH, USA
- Sophia Sundararajan
- Department of Neurology, University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, OH, USA
- Martha Sajatovic
- Department of Psychiatry, University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, OH, USA
11
Morse E, Odigie E, Gillespie H, Rameau A. The Readability of Patient-Facing Social Media Posts on Common Otolaryngologic Diagnoses. Otolaryngol Head Neck Surg 2024;170:1051-1058. [PMID: 38018504] [DOI: 10.1002/ohn.584]
Abstract
OBJECTIVE To assess the readability of patient-facing educational information about the most common otolaryngology diagnoses on popular social media platforms. STUDY DESIGN Cross-sectional study. SETTING Social media platforms. METHODS The top 5 otolaryngologic diagnoses were identified from the National Ambulatory Medical Care Survey Database. Facebook, Twitter, TikTok, and Instagram were searched using these terms, and the top 25 patient-facing posts from unique accounts for each search term and poster type (otolaryngologist, other medical professional, layperson) were identified. Captions, text and audio from images and video, and linked articles were extracted. The readability of each post element was calculated with multiple readability formulae. Readability was summarized and compared between poster types, platforms, and search terms via Kruskal-Wallis testing. RESULTS Median readability, by grade level, was greater than 10 for captions, 5 for image-associated text, and 9 for linked articles. Captions and images in posts by laypeople were significantly more readable than captions by otolaryngologists or other medical professionals, but there was no difference for linked articles. All post components were more readable in posts about cerumen than those about other search terms. CONCLUSIONS When examining the readability of posts on social media regarding the most common otolaryngology diagnoses, we found that many posts are less readable than recommended for patients and that posts by laypeople were significantly more readable than those by medical professionals. Medical professionals should work to make educational social media posts more readable to facilitate patient comprehension.
Affiliation(s)
- Elliot Morse
- Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
- Eseosa Odigie
- Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
- Sean Parker Institute for the Voice, Weill Cornell Medicine, New York, New York, USA
- Helen Gillespie
- Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
- Sean Parker Institute for the Voice, Weill Cornell Medicine, New York, New York, USA
- Anaïs Rameau
- Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
- Sean Parker Institute for the Voice, Weill Cornell Medicine, New York, New York, USA
12
Venosa M, Cerciello S, Zoubi M, Petralia G, Vespasiani A, Angelozzi M, Romanini E, Logroscino G. Readability and Quality of Online Patient Education Materials Concerning Posterior Cruciate Ligament Reconstruction. Cureus 2024;16:e58618. [PMID: 38770469] [PMCID: PMC11103262] [DOI: 10.7759/cureus.58618]
Abstract
Objective This study aimed to assess the quality of online patient educational materials regarding posterior cruciate ligament (PCL) reconstruction. Methods We performed a search of the top 50 results on Google® (terms: "posterior cruciate ligament reconstruction," "PCL reconstruction," "posterior cruciate ligament surgery," and "PCL surgery") and subsequently filtered out duplicated or inaccessible websites and those containing only videos (67 websites included). Readability was assessed using six formulas: Flesch-Kincaid Reading Ease (FRE), Flesch-Kincaid Grade Level (FKG), Gunning Fog Score (GF), Simple Measure of Gobbledygook (SMOG) Index, Coleman-Liau Index (CLI), and Automated Readability Index (ARI); quality was assessed using the JAMA benchmark criteria and by recording the presence of the HONcode seal. Results The mean FRE was 49.3 (SD 11.2) and the mean FKG level was 8.09. These results were confirmed by the other readability formulae (averages: GF 8.9; SMOG Index 7.3; CLI 14.7; ARI 6.5). A HONcode seal was available for 7.4% of websites. The average JAMA score was 1.3. Conclusion The reading level of online patient materials concerning PCL reconstruction is too high for the average reader, requiring strong comprehension skills. Practice implications Online medical information has been shown to influence patients' healthcare decision processes. Patient-oriented educational materials should be clear and easy to understand.
Affiliation(s)
- Michele Venosa
- Department of Life, Health and Environmental Sciences, University of L'Aquila, L'Aquila, ITA
- Department of Orthopaedics, RomaPro, Polo Sanitario San Feliciano, Rome, ITA
- Simone Cerciello
- Department of Orthopaedics and Traumatology, Fondazione Policlinico Universitario Agostino Gemelli, Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Rome, ITA
- Orthopaedic Department, Casa di Cura Villa Betania, Rome, ITA
- Mohammad Zoubi
- Department of Life, Health and Environmental Sciences, University of L'Aquila, L'Aquila, ITA
- Giuseppe Petralia
- Department of Life, Health and Environmental Sciences, University of L'Aquila, L'Aquila, ITA
- Andrea Vespasiani
- Department of Life, Health and Environmental Sciences, University of L'Aquila, L'Aquila, ITA
- Massimo Angelozzi
- Department of Life, Health and Environmental Sciences, University of L'Aquila, L'Aquila, ITA
- Emilio Romanini
- Department of Orthopaedics, RomaPro, Polo Sanitario San Feliciano, Rome, ITA
- Department of Orthopaedics, Italian Working Group on Evidence-Based Orthopaedics (GLOBE), Rome, ITA
- Giandomenico Logroscino
- Department of Life, Health and Environmental Sciences, University of L'Aquila, L'Aquila, ITA
- Department of Minimally Invasive and Computer-Assisted Orthopaedic Surgery, San Salvatore Hospital, L'Aquila, ITA
| |
Collapse
|
13
|
Biancaniello CM, Rolph KE, Cavanaugh SM, Karnik P, Peda A, Cavanaugh RP. Readability of postoperative discharge instructions is associated with complication rate in companion animals undergoing sterilisation. Vet Rec 2024; 194:e3796. [PMID: 38321362 DOI: 10.1002/vetr.3796] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2023] [Revised: 10/26/2023] [Accepted: 12/07/2023] [Indexed: 02/08/2024]
Abstract
BACKGROUND Readability of client communications is a commonly overlooked topic in veterinary medical education. In human medicine, it is advised that the readability of patient materials be at or below the US sixth-grade level. We hypothesised that student-written discharge instructions would be of an inappropriate readability level and that discharge instructions with higher reading grade levels would be associated with more complications. METHODS The cohort comprised 149 dogs and cats presenting for sterilisation. The readability of discharge instructions was assessed using the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) formulas. Records were examined for evidence of postoperative complications. RESULTS The mean FRE score of the discharge instructions was 61.97, with 30.87% classified as 'difficult' or 'fairly difficult', 60.4% as 'standard' and 8.72% as 'fairly easy'. The mean FKGL was 8.64, with 98% above reading level 6. Overall, there was an association between FKGL and complication occurrence (p = 0.005). Stratification by species demonstrated FRE and FKGL to be associated with complication occurrence in dogs (FRE score, p = 0.038; FKGL, p = 0.002), but not cats (FRE score, p = 0.964; FKGL, p = 0.679). LIMITATIONS Due to the retrospective nature of the study, there were difficulties associated with extracting relevant complication information from the medical records. CONCLUSION Only 2% of owner-directed discharge instructions were written at readability levels aligning with the recommendations set forth in the human guidelines.
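The 'difficult'/'standard'/'fairly easy' labels above are the conventional Flesch Reading Ease difficulty bands. A small sketch of that mapping, using the standard published cutoffs (90/80/70/60/50/30):

```python
def fre_band(score: float) -> str:
    """Map a Flesch Reading Ease score to its conventional difficulty band."""
    bands = [
        (90, "very easy"), (80, "easy"), (70, "fairly easy"),
        (60, "standard"), (50, "fairly difficult"), (30, "difficult"),
    ]
    for cutoff, label in bands:
        if score >= cutoff:
            return label
    return "very confusing"
```

Under these cutoffs the study's mean FRE of 61.97 falls in the 'standard' band, consistent with the 60.4% of instructions classified that way.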
Collapse
Affiliation(s)
- Christopher M Biancaniello
- Center for Research and Innovation in Veterinary and Medical Education, Ross University School of Veterinary Medicine, Basseterre, West Indies, Saint Kitts and Nevis
| | - Kerry E Rolph
- Center for Research and Innovation in Veterinary and Medical Education, Ross University School of Veterinary Medicine, Basseterre, West Indies, Saint Kitts and Nevis
| | - Sarah M Cavanaugh
- Center for Research and Innovation in Veterinary and Medical Education, Ross University School of Veterinary Medicine, Basseterre, West Indies, Saint Kitts and Nevis
| | - Priti Karnik
- Center for Research and Innovation in Veterinary and Medical Education, Ross University School of Veterinary Medicine, Basseterre, West Indies, Saint Kitts and Nevis
| | - Andrea Peda
- Center for Research and Innovation in Veterinary and Medical Education, Ross University School of Veterinary Medicine, Basseterre, West Indies, Saint Kitts and Nevis
| | - Ryan P Cavanaugh
- Center for Research and Innovation in Veterinary and Medical Education, Ross University School of Veterinary Medicine, Basseterre, West Indies, Saint Kitts and Nevis
| |
Collapse
|
14
|
Tan DJY, Ko TK, Fan KS. The Readability and Quality of Web-Based Patient Information on Nasopharyngeal Carcinoma: Quantitative Content Analysis. JMIR Form Res 2023; 7:e47762. [PMID: 38010802 DOI: 10.2196/47762] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Revised: 08/25/2023] [Accepted: 10/25/2023] [Indexed: 11/29/2023] Open
Abstract
BACKGROUND Nasopharyngeal carcinoma (NPC) is a rare disease that is strongly associated with exposure to the Epstein-Barr virus and is characterized by the formation of malignant cells in nasopharynx tissues. Early diagnosis of NPC is often difficult owing to the location of initial tumor sites and the nonspecificity of initial symptoms, resulting in a higher frequency of advanced-stage diagnoses and a poorer prognosis. Access to high-quality, readable information could improve the early detection of the disease and provide support to patients during disease management. OBJECTIVE This study aims to assess the quality and readability of publicly available web-based information in the English language about NPC, using the most popular search engines. METHODS Key terms relevant to NPC were searched across 3 of the most popular internet search engines: Google, Yahoo, and Bing. The top 25 results from each search engine were included in the analysis. Websites that contained text written in languages other than English, required paywall access, targeted medical professionals, or included nontext content were excluded. Readability for each website was assessed using the Flesch Reading Ease score and the Flesch-Kincaid grade level. Website quality was assessed using the Journal of the American Medical Association (JAMA) and DISCERN tools as well as the presence of a Health on the Net Foundation seal. RESULTS Overall, 57 suitable websites were included in this study; 26% (15/57) of the websites were academic. The mean JAMA and DISCERN scores of all websites were 2.80 (IQR 3) and 57.60 (IQR 19), respectively, with medians of 3 (IQR 2-4) and 61 (IQR 49-68), respectively. Health care industry websites (n=3) had the highest mean JAMA score of 4 (SD 0). Academic websites (15/57, 26%) had the highest mean DISCERN score of 77.5. The Health on the Net Foundation seal was present on only 1 website, which achieved a JAMA score of 3 and a DISCERN score of 50. Significant differences were observed between the JAMA scores of hospital websites and those of industry websites (P=.04), news service websites (P=.048), and charity and nongovernmental organization websites (P=.03). Despite being a vital source for patients, general practitioner websites were found to have significantly lower JAMA scores than charity websites (P=.05). The overall mean readability scores reflected an average reading age of 14.3 (SD 1.1) years. CONCLUSIONS The results of this study suggest an inconsistent and suboptimal quality of information related to NPC on the internet. On average, websites presented readability challenges, as written information about NPC was above the recommended sixth-grade reading level. As such, web-based information requires improvement in both quality and accessibility, and healthcare providers should be selective about the information they recommend to patients, ensuring that it is reliable and readable.
Collapse
Affiliation(s)
- Denise Jia Yun Tan
- Department of Surgery, Royal Stoke University Hospital, Stoke on Trent, United Kingdom
| | - Tsz Ki Ko
- Department of Surgery, Royal Stoke University Hospital, Stoke on Trent, United Kingdom
| | - Ka Siu Fan
- Department of Surgery, Royal Surrey County Hospital, Guildford, Surrey, United Kingdom
| |
Collapse
|
15
|
Chen G, Xie J, Liang T, Wang Y, Liao W, Song L, Zhang X. Exploring the causality between educational attainment and gastroesophageal reflux disease: A Mendelian randomization study. Dig Liver Dis 2023; 55:1208-1213. [PMID: 37029064 DOI: 10.1016/j.dld.2023.03.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 03/16/2023] [Accepted: 03/19/2023] [Indexed: 04/09/2023]
Abstract
BACKGROUND AND OBJECTIVES Observational studies suggest that higher educational attainment (EA) contributes to the prevention and treatment of gastroesophageal reflux disease (GERD). However, the causality of this relationship is not supported by strong evidence. We used publicly available genetic summary data on EA, GERD, and common GERD risk factors to test this causality. METHODS Multiple Mendelian randomization (MR) methods were employed to evaluate the causality. The leave-one-out sensitivity test, MR-Egger regression, and multivariable MR (MVMR) analysis were applied to evaluate the robustness of the MR results. RESULTS Higher EA was significantly associated with lower GERD risk (inverse-variance weighted method, odds ratio [OR]: 0.979, 95% confidence interval [CI]: 0.975-0.984, P<0.001). Similar results were obtained when the weighted median and weighted mode methods were used for causal estimation. After adjusting for potential mediators, the MVMR analysis showed that BMI and EA each remained significantly and negatively associated with GERD (OR: 0.997, 95% CI: 0.996-0.998, P=0.008 and OR: 0.981, 95% CI: 0.977-0.984, P<0.001, respectively). CONCLUSIONS Higher EA may have a protective effect against GERD through a negative causal relationship. Additionally, body mass index (BMI) may be a crucial factor in the EA-GERD pathway.
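The inverse-variance weighted (IVW) estimate used as the primary analysis above combines per-SNP ratio estimates, weighting each instrument by its exposure effect and outcome standard error. A minimal fixed-effect sketch is below; the summary statistics are made up for illustration and are not the study's data.

```python
import math

def ivw_estimate(beta_x, beta_y, se_y):
    """Fixed-effect inverse-variance weighted MR estimate from per-SNP summary stats.

    beta_x: SNP-exposure effects; beta_y: SNP-outcome effects; se_y: outcome SEs.
    Returns (causal effect on the outcome scale, standard error).
    """
    weights = [bx * bx / (s * s) for bx, s in zip(beta_x, se_y)]
    beta = sum(bx * by / (s * s)
               for bx, by, s in zip(beta_x, beta_y, se_y)) / sum(weights)
    se = 1.0 / math.sqrt(sum(weights))
    return beta, se

# Illustrative (hypothetical) summary statistics for three instruments
bx = [0.10, 0.08, 0.12]           # SNP effects on educational attainment
by = [-0.003, -0.002, -0.0035]    # SNP effects on GERD (log-odds scale)
sy = [0.001, 0.001, 0.001]
beta, se = ivw_estimate(bx, by, sy)
odds_ratio = math.exp(beta)       # below 1: protective direction, as in the abstract
```

Real analyses (e.g., with the TwoSampleMR or MendelianRandomization R packages) add random-effects variants, heterogeneity statistics, and the MR-Egger and leave-one-out checks the abstract mentions.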
Collapse
Affiliation(s)
- Gui Chen
- State Key Laboratory of Respiratory Disease, Department of Otolaryngology-Head and Neck Surgery, the First Affiliated Hospital of Guangzhou Medical University, 151 Yanjiangxi Road, Guangzhou, Guangdong 510120, PR China
| | - Junyang Xie
- State Key Laboratory of Respiratory Disease, Department of Otolaryngology-Head and Neck Surgery, the First Affiliated Hospital of Guangzhou Medical University, 151 Yanjiangxi Road, Guangzhou, Guangdong 510120, PR China
| | - Tianhao Liang
- State Key Laboratory of Respiratory Disease, Department of Otolaryngology-Head and Neck Surgery, the First Affiliated Hospital of Guangzhou Medical University, 151 Yanjiangxi Road, Guangzhou, Guangdong 510120, PR China
| | - Yiyan Wang
- State Key Laboratory of Respiratory Disease, Department of Otolaryngology-Head and Neck Surgery, the First Affiliated Hospital of Guangzhou Medical University, 151 Yanjiangxi Road, Guangzhou, Guangdong 510120, PR China
| | - Wenjing Liao
- State Key Laboratory of Respiratory Disease, Department of Otolaryngology-Head and Neck Surgery, the First Affiliated Hospital of Guangzhou Medical University, 151 Yanjiangxi Road, Guangzhou, Guangdong 510120, PR China
| | - Lijuan Song
- State Key Laboratory of Respiratory Disease, Department of Otolaryngology-Head and Neck Surgery, the First Affiliated Hospital of Guangzhou Medical University, 151 Yanjiangxi Road, Guangzhou, Guangdong 510120, PR China
| | - Xiaowen Zhang
- State Key Laboratory of Respiratory Disease, Department of Otolaryngology-Head and Neck Surgery, the First Affiliated Hospital of Guangzhou Medical University, 151 Yanjiangxi Road, Guangzhou, Guangdong 510120, PR China.
| |
Collapse
|
16
|
Ahmadzadeh K, Bahrami M, Zare-Farashbandi F, Adibi P, Boroumand MA, Rahimi A. Patient education information material assessment criteria: A scoping review. Health Info Libr J 2023; 40:3-28. [PMID: 36637218 DOI: 10.1111/hir.12467] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2021] [Revised: 10/13/2022] [Accepted: 11/03/2022] [Indexed: 01/14/2023]
Abstract
BACKGROUND Patient education information material (PEIM) is an essential component of patient education programs, increasing patients' ability to cope with their diseases. It is therefore important to consider the criteria used to prepare and evaluate these resources. OBJECTIVE This paper aims to identify these criteria and the tools or methods used to evaluate them. METHODS National and international databases and indexing banks, including PubMed, Scopus, Web of Science, ProQuest, the Cochrane Library, Magiran, SID and ISC, were searched for this review. Original or review articles, theses, short surveys, and conference papers published between January 1990 and June 2022 were included. RESULTS Overall, 4688 documents were retrieved, of which 298 met the inclusion criteria. The criteria were grouped into 24 overarching criteria. The most frequently used were readability, quality, suitability, comprehensibility and understandability. CONCLUSION This review has provided empirical evidence to identify criteria, tools, techniques or methods for developing or evaluating a PEIM. The authors suggest that developing a comprehensive tool based on these findings is critical for evaluating the overall quality of PEIM using effective criteria.
Collapse
Affiliation(s)
- Khadijeh Ahmadzadeh
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Student Research Committee, Sirjan School of Medical Sciences, Sirjan, Iran
| | - Masoud Bahrami
- Department of Adult Health Nursing, Nursing and Midwifery Care Research Center, School of Nursing and Midwifery, Isfahan University of Medical Sciences, Isfahan, Iran
| | - Firoozeh Zare-Farashbandi
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
| | - Payman Adibi
- Gastroenterology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
| | - Mohammad Ali Boroumand
- Department of Medical Library and Information Sciences, School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran
| | - Alireza Rahimi
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
| |
Collapse
|