1
Guven S, Ayyildiz B. Acceptability and readability of ChatGPT-4 based responses for frequently asked questions about strabismus and amblyopia. J Fr Ophtalmol 2025; 48:104400. [PMID: 39708624] [DOI: 10.1016/j.jfo.2024.104400]
Abstract
PURPOSE To evaluate the compatibility and readability of ChatGPT-4 in providing responses to common inquiries about strabismus and amblyopia. MATERIALS AND METHODS A series of commonly asked questions were compiled, covering topics such as the definition, prevalence, diagnostic approaches, surgical and non-surgical treatment alternatives, postoperative guidelines, surgery-related risks, and visual prognosis associated with strabismus and amblyopia. Each question was asked three times on the online ChatGPT-4 platform, both in English and French, with data collection conducted on February 18, 2024. The responses generated by ChatGPT-4 were evaluated by two independent pediatric ophthalmologists, who classified them as "acceptable," "unacceptable," or "incomplete." Additionally, an online readability assessment tool called "Readable" was used for readability analysis. RESULTS The majority of responses (97% of the questions regarding strabismus and amblyopia) consistently met the criteria for acceptability. Only 3% of responses were classified as incomplete, with no instances of unacceptable responses observed. The average Flesch-Kincaid Grade Level and Flesch Reading Ease Score were 14.53±1.8 and 23.63±8.2, respectively. Furthermore, the means for all readability indices, including the Coleman-Liau index, the Gunning Fog index, and the SMOG index, were 15.75±1.4, 16.96±2.4, and 16.05±1.6, respectively. CONCLUSIONS ChatGPT-4 consistently produced acceptable responses to the majority of the questions asked (97%). Nevertheless, the readability of these responses proved challenging for the average layperson, requiring a college-level education for comprehension. Further improvements, particularly in terms of readability, are necessary to enhance the advisory capacity of this AI software in providing eye and health-related guidance for patients, physicians, and the general public.
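The Flesch scores reported in these abstracts combine average sentence length with average syllables per word. As a rough illustration of how such scores are computed (the syllable counter below is a naive vowel-group heuristic, not the dictionary-based counting used by commercial tools such as Readable, so values will differ somewhat from published ones):

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count vowel groups, with a silent-'e' adjustment."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    # Standard Flesch coefficients:
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return round(fre, 2), round(fkgl, 2)

fre, fkgl = readability("Amblyopia is reduced vision in one eye. Early treatment often helps.")
```

Short, plain sentences like the sample score well; the college-level scores reported above (FKGL ≈ 14.5) reflect the long, polysyllabic sentences typical of chatbot answers.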
Affiliation(s)
- S Guven
- Kayseri City Hospital, Department of Ophthalmology, Kayseri, Turkey.
- B Ayyildiz
- Kayseri City Hospital, Department of Ophthalmology, Kayseri, Turkey
2
Doğan L, Yılmaz İE. The performance of ChatGPT-4 and Bing Chat in frequently asked questions about glaucoma. Eur J Ophthalmol 2025:11206721251321197. [PMID: 39973162] [DOI: 10.1177/11206721251321197]
Abstract
PURPOSE To evaluate the appropriateness and readability of the responses generated by ChatGPT-4 and Bing Chat to frequently asked questions about glaucoma. METHOD Thirty-four questions were generated for this study. Each question was directed three times to a fresh ChatGPT-4 and Bing Chat interface. The responses obtained were categorised by two glaucoma specialists in terms of their appropriateness. Accuracy of the responses was evaluated using the Structure of the Observed Learning Outcome (SOLO) taxonomy. Readability of the responses was assessed using the Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Coleman-Liau Index (CLI), Simple Measure of Gobbledygook (SMOG), and Gunning-Fog Index (GFI). RESULTS The percentage of appropriate responses was 88.2% (30/34) for ChatGPT-4 and 79.2% (27/34) for Bing Chat. Both the ChatGPT-4 and Bing Chat interfaces provided at least one inappropriate response to 1 of the 34 questions. The SOLO test results for ChatGPT-4 and Bing Chat were 3.86 ± 0.41 and 3.70 ± 0.52, respectively. No statistically significant difference in performance was observed between the two LLMs (p = 0.101). The mean word count of the generated responses was 316.5 (± 85.1) for ChatGPT-4 and 61.6 (± 25.8) for Bing Chat (p < 0.05). According to FRE scores, the generated responses were suitable for only 4.5% and 33% of U.S. adults for ChatGPT-4 and Bing Chat, respectively (p < 0.05). CONCLUSIONS ChatGPT-4 and Bing Chat consistently provided appropriate responses to the questions. Both LLMs had low readability scores, but ChatGPT-4's responses were more difficult to read.
Affiliation(s)
- Levent Doğan
- Department of Ophthalmology, Kilis State Hospital, Kilis, Turkey
3
Uzunçıbuk H, Marrapodi MM, Ronsivalle V, Cicciù M, Minervini G. Lessons to be learned when designing comprehensible patient-oriented online information about temporomandibular disorders. J Oral Rehabil 2025; 52:222-229. [PMID: 39034447] [PMCID: PMC11740279] [DOI: 10.1111/joor.13798]
Abstract
BACKGROUND Temporomandibular disorders (TMD) are a prevalent ailment with a global impact, affecting a substantial number of individuals. While some individuals receive treatment from orthodontists for TMD, a significant proportion obtain knowledge through websites. OBJECTIVES Our purpose was to evaluate, from a patient-oriented perspective, the readability of the home pages of the 10 most prominent websites devoted to TMD. We also determined what level of education would be needed to understand the information on the websites under scrutiny. This approach ensures that our findings are centred on the patient experience, providing insights into how accessible and understandable websites about TMD are. METHODS We identified the top 10 patient-focused English-language websites by searching for 'temporomandibular disorders' using the 'no country redirect' version of the Google Chrome browser (www.google.com/ncr). The readability of the texts was assessed using the Gunning fog index (GFI), Coleman-Liau index (CLI), Automated readability index (ARI), Simple Measure of Gobbledygook (SMOG), Flesch-Kincaid grade level (FKGL), and Flesch reading ease (FRE) (https://readabilityformulas.com). RESULTS The mean Flesch reading ease score was 48.67, with a standard deviation of 15.04, and these websites require an average of 13.49 years of formal education (GFI), with a standard deviation of 2.62, to be easily understood. CONCLUSION Our research indicates that a significant proportion of websites related to TMD are written at a level of complexity that exceeds the reading comprehension of the general population.
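The GFI figure quoted above reads directly as years of schooling required. A minimal sketch of the standard formula, 0.4 × (words per sentence + 100 × complex-word ratio), where "complex" is approximated here as three or more vowel groups (published tools apply extra exclusions for proper nouns and common suffixes, so this is only an estimate):

```python
import re

def syllables(word: str) -> int:
    # Naive vowel-group count; real tools use dictionaries and exceptions.
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def gunning_fog(text: str) -> float:
    """GFI ~= years of formal education needed to read the text easily."""
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sents) + 100 * len(complex_words) / len(words))
```

A sentence like "The joint clicks. It hurts when chewing." scores near first-grade level, while dense clinical phrasing such as "Temporomandibular disorders frequently necessitate multidisciplinary rehabilitation." scores far beyond college level, which is exactly the gap these website studies measure.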
Affiliation(s)
- Hande Uzunçıbuk
- Department of Orthodontics, Dentistry Faculty, Trakya University, Edirne, Turkey
- Maria Maddalena Marrapodi
- Department of Woman, Child and General and Specialist Surgery, University of Campania “Luigi Vanvitelli”, Naples, Italy
- Vincenzo Ronsivalle
- Department of General Surgery and Medical-Surgical Specialties, School of Dentistry, University of Catania, Catania, Italy
- Marco Cicciù
- Department of General Surgery and Medical-Surgical Specialties, School of Dentistry, University of Catania, Catania, Italy
- Giuseppe Minervini
- Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences (SIMATS), Saveetha University, Chennai, India
- Multidisciplinary Department of Medical-Surgical and Dental Specialties, University of Campania “Luigi Vanvitelli”, Naples, Italy
4
Doğan L, Özer Özcan Z, Edhem Yılmaz I. The promising role of chatbots in keratorefractive surgery patient education. J Fr Ophtalmol 2025; 48:104381. [PMID: 39674098] [DOI: 10.1016/j.jfo.2024.104381]
Abstract
PURPOSE To evaluate the appropriateness, understandability, actionability, and readability of responses provided by ChatGPT-3.5, Bard, and Bing Chat to frequently asked questions about keratorefractive surgery (KRS). METHOD Thirty-eight frequently asked questions about KRS were directed three times to fresh ChatGPT-3.5, Bard, and Bing Chat interfaces. Two experienced refractive surgeons categorized the chatbots' responses according to their appropriateness, and the accuracy of the responses was assessed using the Structure of the Observed Learning Outcome (SOLO) taxonomy. The Flesch Reading Ease (FRE) and Coleman-Liau Index (CLI) were used to evaluate the readability of the chatbots' responses. Furthermore, the understandability of the responses was evaluated using the Patient Education Materials Assessment Tool (PEMAT). RESULTS The appropriateness of the ChatGPT-3.5, Bard, and Bing Chat responses was 86.8% (33/38), 84.2% (32/38), and 81.5% (31/38), respectively (P>0.05). According to the SOLO test, ChatGPT-3.5 (3.91±0.44) achieved the highest mean accuracy, followed by Bard (3.64±0.61) and Bing Chat (3.19±0.55). For understandability (mean PEMAT-U score: ChatGPT-3.5 68.5%, Bard 78.6%, Bing Chat 67.1%; P<0.05) and actionability (mean PEMAT-A score: ChatGPT-3.5 62.6%, Bard 72.4%, Bing Chat 60.9%; P<0.05), Bard scored better than the other chatbots. Two readability analyses showed that Bing Chat had the highest readability, followed by ChatGPT-3.5 and Bard; however, the readability scores were more challenging than the recommended level. CONCLUSION Artificial intelligence-supported chatbots have the potential to provide detailed and appropriate responses at acceptable levels in KRS. While promising for patient education in KRS, chatbots require further progress, especially in readability and understandability.
Affiliation(s)
- L Doğan
- Department of Ophthalmology, Ömer Halisdemir University School of Medicine, 51100 Niğde, Turkey.
- Z Özer Özcan
- Department of Ophthalmology, Gaziantep City Hospital, Gaziantep, Turkey
- I Edhem Yılmaz
- Department of Ophthalmology, Gaziantep Islamic Science and Technology University School of Medicine, Gaziantep, Turkey
5
Doğan L, Özçakmakcı GB, Yılmaz İE. The Performance of Chatbots and the AAPOS Website as a Tool for Amblyopia Education. J Pediatr Ophthalmol Strabismus 2024; 61:325-331. [PMID: 38661309] [DOI: 10.3928/01913913-20240409-01]
Abstract
PURPOSE To evaluate the understandability, actionability, and readability of responses provided by the website of the American Association for Pediatric Ophthalmology and Strabismus (AAPOS), ChatGPT-3.5, Bard, and Bing Chat about amblyopia, and the appropriateness of the responses generated by the chatbots. METHOD Twenty-five questions provided by the AAPOS website were directed three times to fresh ChatGPT-3.5, Bard, and Bing Chat interfaces. Two experienced pediatric ophthalmologists categorized the responses of the chatbots in terms of their appropriateness. Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), and Coleman-Liau Index (CLI) were used to evaluate the readability of the responses of the AAPOS website and the chatbots. Furthermore, understandability scores were evaluated using the Patient Education Materials Assessment Tool (PEMAT). RESULTS The appropriateness of the chatbots' responses was 84% for ChatGPT-3.5 and Bard and 80% for Bing Chat (P > .05). For understandability (mean PEMAT-U score: AAPOS website 81.5%, Bard 77.6%, ChatGPT-3.5 76.1%, Bing Chat 71.5%; P < .05) and actionability (mean PEMAT-A score: AAPOS website 74.6%, Bard 69.2%, ChatGPT-3.5 67.8%, Bing Chat 64.8%; P < .05), the AAPOS website scored better than the chatbots. Three readability analyses showed that Bard had the highest mean score, followed by the AAPOS website, Bing Chat, and ChatGPT-3.5, and these scores were more challenging than the recommended level. CONCLUSIONS Chatbots have the potential to provide detailed and appropriate responses at acceptable levels. The AAPOS website has the advantage of providing information that is more understandable and actionable. The AAPOS website and the chatbots, especially ChatGPT, provided difficult-to-read material for patient education regarding amblyopia. [J Pediatr Ophthalmol Strabismus. 2024;61(5):325-331.]
6
Momenaei B, Wakabayashi T, Shahlaee A, Durrani AF, Pandit SA, Wang K, Mansour HA, Abishek RM, Xu D, Sridhar J, Yonekawa Y, Kuriyan AE. Appropriateness and Readability of ChatGPT-4-Generated Responses for Surgical Treatment of Retinal Diseases. Ophthalmol Retina 2023; 7:862-868. [PMID: 37277096] [DOI: 10.1016/j.oret.2023.05.022]
Abstract
OBJECTIVE To evaluate the appropriateness and readability of the medical knowledge provided by ChatGPT-4, an artificial intelligence-powered conversational search engine, regarding common vitreoretinal surgeries for retinal detachments (RDs), macular holes (MHs), and epiretinal membranes (ERMs). DESIGN Retrospective cross-sectional study. SUBJECTS This study did not involve any human participants. METHODS We created lists of common questions about the definition, prevalence, visual impact, diagnostic methods, surgical and nonsurgical treatment options, postoperative information, surgery-related complications, and visual prognosis of RD, MH, and ERM, and asked each question 3 times on the online ChatGPT-4 platform. The data for this cross-sectional study were recorded on April 25, 2023. Two independent retina specialists graded the appropriateness of the responses. Readability was assessed using Readable, an online readability tool. MAIN OUTCOME MEASURES The "appropriateness" and "readability" of the answers generated by the ChatGPT-4 bot. RESULTS Responses were consistently appropriate in 84.6% (33/39), 92% (23/25), and 91.7% (22/24) of the questions related to RD, MH, and ERM, respectively. Answers were inappropriate at least once in 5.1% (2/39), 8% (2/25), and 8.3% (2/24) of the respective questions. The average Flesch-Kincaid Grade Level and Flesch Reading Ease Score were 14.1 ± 2.6 and 32.3 ± 10.8 for RD, 14 ± 1.3 and 34.4 ± 7.7 for MH, and 14.8 ± 1.3 and 28.1 ± 7.5 for ERM. These scores indicate that the answers are difficult or very difficult to read for the average layperson and that college graduation would be required to understand the material. CONCLUSIONS Most of the answers provided by ChatGPT-4 were consistently appropriate. However, ChatGPT and other natural language models in their current form are not a source of factual information. Improving the credibility and readability of responses, especially in specialized fields such as medicine, is a critical focus of research. Patients, physicians, and laypersons should be advised of the limitations of these tools for eye- and health-related counseling. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Bita Momenaei
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Taku Wakabayashi
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Abtin Shahlaee
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Asad F Durrani
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Saagar A Pandit
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Kristine Wang
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Hana A Mansour
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Robert M Abishek
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- David Xu
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Jayanth Sridhar
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Yoshihiro Yonekawa
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Ajay E Kuriyan
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania.
7
Backowski R, Hinnant K, LaValle L. Writing library database descriptions in Plain Language. College & Undergraduate Libraries 2022. [DOI: 10.1080/10691316.2022.2149439]
Affiliation(s)
- Roxanne Backowski
- McIntyre Library, University of Wisconsin-Eau Claire, Eau Claire, WI, USA
| | - Kate Hinnant
- McIntyre Library, University of Wisconsin-Eau Claire, Eau Claire, WI, USA
| | - Liliana LaValle
- McIntyre Library, University of Wisconsin-Eau Claire, Eau Claire, WI, USA
8
McMenemy D, Robinson E, Ruthven I. The Impact of COVID-19 Lockdowns on Public Libraries in the UK: Findings from a National Study. Public Library Quarterly 2022. [DOI: 10.1080/01616846.2022.2058860]
Affiliation(s)
- Elaine Robinson
- Computer and Information Sciences, University of Strathclyde, Glasgow, UK
- Ian Ruthven
- Computer and Information Sciences, University of Strathclyde, Glasgow, UK
9
Robinson E, McMenemy D. Communicating Patron Rights and Responsibilities Transparently: Creating a Model Internet Acceptable Use Policy for UK Public Libraries. Public Library Quarterly 2021. [DOI: 10.1080/01616846.2021.1936883]
Affiliation(s)
- Elaine Robinson
- Computer and Information Sciences, University of Strathclyde, Glasgow, UK
- David McMenemy
- Computer and Information Sciences, University of Strathclyde, Glasgow, UK
10
Abstract
PURPOSE The purpose of this research is to investigate the language of "weeding" (library deselection) within public library collection development policies in order to examine whether such policies and practices can be usefully connected to library and information science (LIS) theory, specifically to "Deweyan pragmatic adaptation" as suggested by Buschman (2017) in the pages of this journal. DESIGN/METHODOLOGY/APPROACH This is a policy analysis of collection deselection policies from the 50 public libraries serving US state capitals, using Bacchi's policy problem representation technique. FINDINGS "Weeding" as described by these public library collection deselection policies is clearly pragmatic and oriented to increasing circulation to patrons, but the "Deweyan pragmatic adaptation" reflected in many of the policies reviewed might better be described as the pragmatism of Melvil Dewey rather than that of John Dewey. RESEARCH LIMITATIONS/IMPLICATIONS Although this work reviewed policies from a very small sample of US public libraries, the collection, selection, and deselection language in the policies studied appears consistent with neoliberal priorities and values in prioritizing "circulation" and "customers," which may have additional implications for the current transition from print to electronic materials in public libraries. ORIGINALITY/VALUE John Dewey's political philosophy and Carol Bacchi's policy problem representation technique have not been widely used in policy analysis by LIS researchers, and this paper offers a number of suggestions for similar public library policy investigations.
11
Powell LE, Cisu TI, Klausner AP. Bladder Cancer Health Literacy: Assessing Readability of Online Patient Education Materials. Bladder Cancer 2021; 7:91-98. [PMID: 38993224] [PMCID: PMC11181814] [DOI: 10.3233/blc-200387]
Abstract
BACKGROUND Understanding of health-related materials, termed health literacy, affects decision-making and outcomes in the treatment of bladder cancer. The National Institutes of Health recommend writing patient education materials at a sixth- to seventh-grade reading level. OBJECTIVE The goal of this study is to characterize information about bladder cancer available online and evaluate its readability. METHODS Materials on bladder cancer were collected from the American Urological Association's Urology Care Foundation (AUA-UCF) and compared to the top 50 websites by search engine results. Resources were analyzed using four validated readability assessment scales. The mean and standard deviation of the materials were calculated, and a two-tailed t test was used to assess significance between the two sets of patient education materials. RESULTS The average readability of AUA materials was 8.5 (8th-9th grade reading level). For the top 50 websites, average readability was 11.7 (11th-12th grade reading level). A two-tailed t test between the AUA materials and the top 50 websites demonstrated a statistically significant difference in readability between the two sets of resources (P = 0.0001), with the top search engine results being several grade levels higher than the recommended 6th-7th grade reading level. CONCLUSIONS Most health information provided by the AUA on bladder cancer is written at a reading level that aligns with most US adults, while top search engine results exceed the average reading level by several grade levels. By focusing on health literacy, urologists may help lower barriers to comprehension, reducing health care expenditure and perioperative complications.
Affiliation(s)
- Lauren E. Powell
- Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Theodore I. Cisu
- Virginia Commonwealth University Health System Division of Urology, Richmond, VA, USA
- Adam P. Klausner
- Division of Urology, Department of Surgery, Virginia Commonwealth University School of Medicine, Richmond, VA, USA
12
Increasing or decreasing reading trend: an overview of the current status of the public libraries in Khyber Pakhtunkhwa, Pakistan. Library Management 2020. [DOI: 10.1108/lm-01-2020-0006]
Abstract
PURPOSE As the human race moved from the Paleolithic to the current phases of the Neolithic period, the learning process developed from inscriptions on stone to clay tablets, from papyrus to paper and, ultimately, to digital technology. From ancient times to the present, public libraries have served as open universities that are democratic in providing educational and information services and in preserving cultural heritage, regardless of gender and belief. This study attempts to understand reading trends and citizens' use of resources in public libraries, as open universities, in the age of technology. DESIGN/METHODOLOGY/APPROACH Data on regular visitors, permanent library members, and the library inventory were collected from each public library administration through personal visits and interviews. In addition, data on regional population and literacy rates were collected from the Government of Khyber Pakhtunkhwa Provincial Bureau of Statistics. The authors used descriptive statistics to analyze the data for comparative study. FINDINGS The results show that daily visitors, regular library members, and their use of library resources are decreasing relative to the literacy rate in each district. It was also concluded that, owing to a lack of education and training in information and digital literacy, the accessed database resources are not used properly. Moreover, each densely populated district relies on a single public library to meet general education and information needs. PRACTICAL IMPLICATIONS The results of this study will help the government expand the network of public libraries at the union council level with competent staff to increase motivation and improve reading and resource-usage trends. Given the current literacy and population growth in each district, the public library law can also be amended and implemented to better support the existing library system and create more libraries in the public interest. ORIGINALITY/VALUE This study was conducted for the first time to determine the current state of public libraries in Khyber Pakhtunkhwa and to help public library authorities improve their existing public library services based on the results.