1
Pollock JR, Mujahed T, Smith JF, Arthur JR, Brinkman JC, Atkinson CM, Pollock NT, Renfree KJ. What Patients Say About Their Orthopaedic Hand and Wrist Surgeons: A Qualitative Analysis of Negative Reviews on Yelp. J Wrist Surg 2024;13:202-207. PMID: 38808180; PMCID: PMC11129890; DOI: 10.1055/s-0043-1768924.
Abstract
Background Patients often turn to online reviews as a source of information to inform their decisions regarding care. Existing literature has analyzed factors associated with positive online patient ratings among hand and wrist surgeons, but there is little in-depth analysis of factors associated with low patient satisfaction. The focus of this study is to examine and characterize extremely negative reviews of hand and wrist surgeons on Yelp.com. Methods A search was performed using the keywords "hand surgery" on Yelp.com for eight major metropolitan areas: Washington DC, Dallas, New York, Phoenix, Los Angeles, San Francisco, Boston, and Seattle. Only single-star reviews (out of a possible 5 stars) of hand and wrist surgeons were included. The complaints in the 1-star reviews were then categorized into clinical and nonclinical categories. Results A total of 233 single-star reviews were included for analysis, yielding 468 total complaints. Of these complaints, 81 (18.8%) were clinically related and 351 (81.3%) were nonclinical in nature. The most common clinical complaints were complication (24 complaints, 6%), misdiagnosis (16 complaints, 4%), unclear treatment plan (16 complaints, 4%), and uncontrolled pain (15 complaints, 3%). The most common nonclinical complaints were physician bedside manner (93 complaints, 22%), financial issues (80 complaints, 19%), unprofessional nonclinical staff (61 complaints, 14%), and wait time (46 complaints, 11%). The difference in the number of complaints between surgical and nonsurgical patients was statistically significant (p < 0.05) for complication and uncontrolled pain. Clinical Relevance Patient satisfaction depends on a multitude of clinical and nonclinical factors. An awareness of online physician ratings is essential for hand and wrist surgeons to maintain and improve patient care and patient satisfaction. We believe the results of our study could be used to further improve the quality of care provided by hand and wrist surgeons.
Affiliation(s)
- Jordan R. Pollock
- Department of Orthopedic Surgery, Mayo Clinic Alix School of Medicine, Scottsdale, Arizona
- Tala Mujahed
- Department of Orthopedic Surgery, Mayo Clinic Alix School of Medicine, Scottsdale, Arizona
- Jacob F. Smith
- Department of Orthopedic Surgery, Mayo Clinic Alix School of Medicine, Scottsdale, Arizona
- Jaymeson R. Arthur
- Department of Life Sciences, Department of Orthopedics, Mayo Clinic, Phoenix, Arizona
- Joseph C. Brinkman
- Department of Life Sciences, Department of Orthopedics, Mayo Clinic, Phoenix, Arizona
- Kevin J. Renfree
- Department of Life Sciences, Department of Orthopedics, Mayo Clinic, Phoenix, Arizona
2
Xie Y, He W, Wan Y, Luo H, Cai Y, Gong W, Liu S, Zhong D, Hu W, Zhang L, Li J, Zhao Q, Lv S, Li C, Zhang Z, Li C, Chen X, Huang W, Wang Y, Xu D. Validity of patients' online reviews at direct-to-consumer teleconsultation platforms: a protocol for a cross-sectional study using unannounced standardised patients. BMJ Open 2023;13:e071783. PMID: 37164474; PMCID: PMC10173992; DOI: 10.1136/bmjopen-2023-071783.
Abstract
INTRODUCTION As direct-to-consumer teleconsultation (hereafter 'teleconsultation') has gained popularity, an increasing number of patients have been leaving online reviews of their teleconsultation experiences. These reviews can help guide patients in identifying doctors for teleconsultation. However, few studies have examined the validity of online reviews in assessing the quality of teleconsultation against a gold standard. We therefore aim to use unannounced standardised patients (USPs) to validate online reviews in assessing both the technical and the patient-centred quality of teleconsultations. We hypothesise that online review results will be more consistent with the patient-centred quality, rather than the technical quality, as assessed by the USPs. METHODS AND ANALYSIS In this cross-sectional study, USPs representing 11 common primary care conditions will randomly visit 253 physicians via the three largest teleconsultation platforms in China. Each physician will receive a text-based and a voice/video-based USP visit, resulting in a total of 506 USP visits. The USP will complete a quality checklist to assess the proportion of clinical practice guideline-recommended items during teleconsultation. After each visit, the USP will also complete the Patient Perception of Patient-Centeredness Rating. The USP-assessed results will be compared with online review results using the intraclass correlation coefficient (ICC). If ICC >0.4 (p<0.05), we will assume reasonable concordance between the USP-assessed quality and online reviews. Furthermore, we will use correlation analysis, Lin's concordance correlation coefficient and the kappa statistic as supplementary analyses. ETHICS AND DISSEMINATION This study has received approval from the Institutional Review Board of Southern Medical University (#Southern Medical Audit (2022) No. 013). Results will be actively disseminated through print and social media, and USP tools will be made available for other researchers.
TRIAL REGISTRATION The study has been registered at the China Clinical Trials Registry (ChiCTR2200062975).
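The protocol's concordance check can be sketched in a few lines. The snippet below is a minimal illustration, not the study's analysis code, of computing ICC(2,1) (two-way random effects, absolute agreement, single rater) for hypothetical paired USP and online-review scores and comparing it against the 0.4 threshold; all data values are invented.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n_subjects, k_raters) array."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)
    col_means = Y.mean(axis=0)
    # Sums of squares for subjects (rows), raters (columns), and error
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)           # subject mean square
    msc = ss_cols / (k - 1)           # rater mean square
    mse = ss_err / ((n - 1) * (k - 1))  # error mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired scores: USP-assessed quality vs. online review rating
usp = [3.2, 4.1, 2.8, 4.6, 3.9, 2.5, 4.3, 3.0]
online = [3.5, 4.4, 3.0, 4.8, 3.6, 2.9, 4.5, 3.2]
icc = icc_2_1(np.column_stack([usp, online]))
print(f"ICC = {icc:.2f}")  # concordance deemed 'reasonable' if ICC > 0.4
```

In practice a significance test for the ICC (as the protocol's p<0.05 requires) would accompany the point estimate; dedicated packages provide that alongside the coefficient.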
Affiliation(s)
- Yunyun Xie
- School of Health Management, Southern Medical University, Guangzhou, China
- Wenjun He
- School of Public Health, Southern Medical University, Guangzhou, Guangdong, China
- Yuting Wan
- School of Health Management, Southern Medical University, Guangzhou, China
- Huanyuan Luo
- Dermatology Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Southern Medical University Institute for Global Health (SIGHT), Dermatology Hospital of Southern Medical University, Guangzhou, China
- Acacia Lab for Implementation Science, School of Health Management and Dermatology Hospital, Southern Medical University, Guangzhou, China
- Yiyuan Cai
- Department of Epidemiology and Medical Statistics, School of Public Health, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Department of Epidemiology and Medical Statistics, School of Public Health, Guizhou Medical University, Guiyang, Guizhou, China
- Wenjie Gong
- School of Public Health, Central South University, Changsha, China
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK
- Siyuan Liu
- School of Public Health, Southern Medical University, Guangzhou, Guangdong, China
- Acacia Lab for Implementation Science, School of Health Management and Dermatology Hospital, Southern Medical University, Guangzhou, China
- Dongmei Zhong
- School of Public Health, Southern Medical University, Guangzhou, Guangdong, China
- Acacia Lab for Implementation Science, School of Health Management and Dermatology Hospital, Southern Medical University, Guangzhou, China
- Wenping Hu
- Department of Social Medicine and Health Management, Lanzhou University, Lanzhou, Gansu Province, China
- Lanping Zhang
- School of Health Management, Southern Medical University, Guangzhou, China
- Acacia Lab for Implementation Science, School of Health Management and Dermatology Hospital, Southern Medical University, Guangzhou, China
- Jiaqi Li
- School of Health Management, Southern Medical University, Guangzhou, China
- Acacia Lab for Implementation Science, School of Health Management and Dermatology Hospital, Southern Medical University, Guangzhou, China
- Qing Zhao
- Acacia Lab for Implementation Science, School of Health Management and Dermatology Hospital, Southern Medical University, Guangzhou, China
- Sensen Lv
- School of Health Management, Southern Medical University, Guangzhou, China
- Acacia Lab for Implementation Science, School of Health Management and Dermatology Hospital, Southern Medical University, Guangzhou, China
- Chunping Li
- School of Health Management, Southern Medical University, Guangzhou, China
- Acacia Lab for Implementation Science, School of Health Management and Dermatology Hospital, Southern Medical University, Guangzhou, China
- Zhang Zhang
- Gillings School of Global Public Health, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Changchang Li
- Dermatology Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Xiaoshan Chen
- School of Health Management, Southern Medical University, Guangzhou, China
- Acacia Lab for Implementation Science, School of Health Management and Dermatology Hospital, Southern Medical University, Guangzhou, China
- Wangqing Huang
- School of Health Management, Southern Medical University, Guangzhou, China
- Acacia Lab for Implementation Science, School of Health Management and Dermatology Hospital, Southern Medical University, Guangzhou, China
- Yutong Wang
- Department of Epidemiology and Biostatistics, School of Public Health, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Dong Xu
- Southern Medical University Institute for Global Health (SIGHT), Dermatology Hospital of Southern Medical University, Guangzhou, China
- Acacia Lab for Implementation Science, School of Health Management and Dermatology Hospital, Southern Medical University, Guangzhou, China
- Center for World Health Organization Studies and Department of Health Management, School of Health Management, Southern Medical University, Guangzhou, China
3
Guetz B, Bidmon S. The Credibility of Physician Rating Websites: A Systematic Literature Review. Health Policy 2023;132:104821. PMID: 37084700; DOI: 10.1016/j.healthpol.2023.104821.
Abstract
OBJECTIVES Increasingly, the credibility of online reviews is drawing critical attention due to the lack of control mechanisms, the constant debate about fake reviews and, not least, current developments in the field of artificial intelligence. For this reason, the aim of this study was to examine the extent to which assessments recorded on physician rating websites (PRWs) are credible, based on a comparison with other evaluation criteria. METHODS Following the PRISMA guidelines, a comprehensive literature search was conducted across several scientific databases. Data were synthesized by comparing individual statistical outcomes, objectives and conclusions. RESULTS The chosen search strategy yielded a database of 36,755 studies, of which 28 were ultimately included in the systematic review. The literature review produced mixed results regarding the credibility of PRWs: seven publications supported the credibility of PRWs, six found no correlation between PRWs and alternative datasets, and 15 reported mixed results. CONCLUSIONS This study has shown that ratings on PRWs appear credible when relying primarily on patients' perceptions. However, these portals seem inadequate for representing alternative comparative values such as the medical quality of physicians. For health policy makers, our results show that decisions based on patients' perceptions may be well supported by data from PRWs. For all other decisions, however, PRWs do not seem to contain sufficiently useful data.
Affiliation(s)
- Bernhard Guetz
- Department of Marketing and International Management, Alpen-Adria-Universitaet Klagenfurt, Universitaetsstrasse 65-67, Klagenfurt am Woerthersee, 9020, Austria
- Sonja Bidmon
- Department of Marketing and International Management, Alpen-Adria-Universitaet Klagenfurt, Universitaetsstrasse 65-67, Klagenfurt am Woerthersee, 9020, Austria
4
Analysis of Patients' Online Reviews of Orthopaedic Surgeons. J Am Acad Orthop Surg Glob Res Rev 2022;6:01979360-202210000-00006. PMID: 36734653; PMCID: PMC9584189; DOI: 10.5435/jaaosglobal-d-22-00074.
Abstract
INTRODUCTION Physician rating websites (PRWs) are an increasingly popular interface between patient and surgeon. Despite the growing popularity of PRWs, little guidance exists for orthopaedic surgeons regarding online reviews. We analyzed online ratings and comments to provide a better understanding of patients' values and expectations so that surgeons can tailor their practice accordingly to enhance their clinical care and online reputation. METHODS Three common PRWs (Vitals, HealthGrades, and RateMDs) were queried from January 1, 2006, to May 18, 2020. Publicly available ratings, both quantitative (1 to 5 stars) and qualitative (free-text comments), were collected. Comments were qualitatively tabulated as having positive or negative assessments for categories including outcome, personality, staff, surgical skill, visit time, bedside manner, wait time, diagnosis, knowledge, treatment, and advanced practice providers, and analyzed using chi-square goodness-of-fit tests. Quantitative comparisons of star ratings were made across surgeon years in practice, sex, practice setting, and PRW, and compared using chi-square tests of independence. RESULTS In total, 81% of patient comments were found to have a positive assessment. Comments regarding outcome (P < 0.001), staff (P = 0.001), surgical skill (P < 0.001), or knowledge (P = 0.001) were more likely to be positive. Reviews regarding bedside manner (P < 0.001), wait time (P < 0.001), diagnosis (P < 0.001), treatment (P < 0.001), or advanced practice providers (P < 0.001) were more likely to be negative. Surgeon sex was not associated with a difference in quantitative ratings (P = 0.131), unlike practice setting (P < 0.001) and PRW (P < 0.001). DISCUSSION PRWs are a growing interface between surgeon and patient with a considerable effect on surgeon marketability. This study reveals a statistical association between certain patient-centered medical practices and positive patient reviews, underscoring the importance of maintaining high standards throughout a physician's practice, staying aware of the fundamentals of effective patient care, and taking care to curate one's online presence.
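A chi-square goodness-of-fit test of the kind used here, checking whether positive and negative comments in a category depart from an even split, can be sketched in a few self-contained lines; the counts below are hypothetical, not the study's data.

```python
# Hypothetical counts for one comment category (not the study's data)
positive, negative = 130, 70
total = positive + negative
expected = total / 2  # null hypothesis: comments equally likely positive or negative

# Chi-square statistic: sum of (observed - expected)^2 / expected over both cells
chi2 = (positive - expected) ** 2 / expected + (negative - expected) ** 2 / expected

# Critical value for alpha = 0.05 with 1 degree of freedom (2 categories - 1)
CRITICAL_1DF = 3.841
print(f"chi2 = {chi2:.2f}, significant = {chi2 > CRITICAL_1DF}")  # chi2 = 18.00, significant = True
```

In a real analysis one would typically call `scipy.stats.chisquare` instead of hard-coding a critical value, which also returns an exact p-value.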
5
Rajagopalan D, Thomas J, Ring D, Fatehi A. Quantitative Patient-Reported Experience Measures Derived From Natural Language Processing Have a Normal Distribution and No Ceiling Effect. Qual Manag Health Care 2022;31:210-218. PMID: 35383720; DOI: 10.1097/qmh.0000000000000355.
Abstract
BACKGROUND AND OBJECTIVES Patient-reported experience measures have the potential to guide improvement in health care delivery. Many patient-reported experience measures are limited by the presence of strong ceiling effects that limit their analytical utility. METHODS We used natural language processing to develop 2 new methods of evaluating patient experience using text comments and associated ordinal and categorical ratings of willingness to recommend from 1390 patients receiving specialty or nonspecialty care at our offices. One method used multivariable analysis based on linguistic factors to derive a formula to estimate the ordinal likelihood to recommend. The other method used the meaning extraction method of thematic analysis to identify words associated with categorical ratings of likelihood to recommend with which we created an equation to compute an experience score. We measured normality of the 2 score distributions and ceiling effects. RESULTS Spearman rank-order correlation analysis identified 36 emotional and linguistic constructs associated with ordinal rating of likelihood to recommend, 9 of which were independently associated in multivariable analysis. The calculation derived from this model corresponded with the original ordinal rating with an accuracy within 0.06 units on a 0 to 10 scale. This score and the score developed from thematic analysis both had a relatively normal distribution and limited or no ceiling effect. CONCLUSIONS Quantitative ratings of patient experience developed using natural language processing of text comments can have relatively normal distributions and no ceiling effect.
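The ceiling effect discussed in this abstract is straightforward to quantify: compute the share of responses piled at the scale maximum. The sketch below uses invented 0-10 ratings and NLP-derived scores purely for illustration of why continuous derived scores escape the ceiling that raw ordinal ratings hit.

```python
import numpy as np

def ceiling_fraction(scores, top):
    """Share of responses exactly at the scale maximum.
    A large value signals a ceiling effect that limits analytical utility."""
    s = np.asarray(scores, dtype=float)
    return float((s == top).mean())

# Hypothetical data: raw 0-10 'likelihood to recommend' ratings cluster at 10,
# while continuous NLP-derived scores spread across the scale
raw_ratings = np.array([10, 10, 9, 10, 10, 8, 10, 9, 10, 10])
nlp_scores = np.array([7.2, 8.9, 6.5, 9.4, 8.1, 5.8, 9.0, 6.9, 8.4, 7.7])

print(ceiling_fraction(raw_ratings, 10))  # most mass at the top: strong ceiling
print(ceiling_fraction(nlp_scores, 10))   # continuous scores rarely hit the maximum
```

A normality check (e.g. a Shapiro-Wilk test from `scipy.stats`) would complement this in assessing whether a derived score has the distributional properties the authors report.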
Affiliation(s)
- Dayal Rajagopalan
- Department of Surgery and Perioperative Care, Dell Medical School, The University of Texas at Austin
6
Bhojak NP, Modi A, Patel JD, Patel M. Measuring patient satisfaction in emergency department: An empirical test using structural equation modeling. International Journal of Healthcare Management 2022. DOI: 10.1080/20479700.2022.2112440.
Affiliation(s)
- Nimesh P. Bhojak
- Department of Hospital Management, Hemchandracharya North Gujarat University, Patan, India
- Ashwin Modi
- Department of Commerce and Management, Hemchandracharya North Gujarat University, Patan, India
- Jayesh D. Patel
- Ganpat University - V. M. Patel Institute of Management, Mehsana, Gujarat, India
7
The Majority of Complaints About Orthopedic Sports Surgeons on Yelp Are Nonclinical. Arthrosc Sports Med Rehabil 2021;3:e1465-e1472. PMID: 34746847; PMCID: PMC8551418; DOI: 10.1016/j.asmr.2021.07.008.
Abstract
Purpose To examine and characterize extremely negative Yelp reviews of orthopedic sports surgeons in the United States. Methods A search for reviews was performed using the keywords "Orthopedic Sports Medicine" on Yelp.com for 8 major metropolitan areas. Single-star reviews were isolated for analysis, and individual complaints were then categorized as clinical or nonclinical. The reviews were also classified as surgical or nonsurgical. Results A total of 11,033 reviews were surveyed. Of these, 1,045 (9.5%) were identified as 1-star, and 289 were ultimately included in the study. These reviews encompassed 566 total complaints, 133 (23%) of which were clinical and 433 (77%) of which were nonclinical in nature. The most common clinical complaints concerned complications (32 complaints; 6%), misdiagnosis (29 complaints; 5%), and uncontrolled pain (21 complaints; 4%). The most common nonclinical complaints concerned physicians' bedside manner (120 complaints; 21%), unprofessional staff (98 complaints; 17%), and finances (78 complaints; 14%). Patients who had undergone surgery wrote 47 reviews containing 114 complaints (20.1% of total complaints), whereas nonsurgical patients were responsible for 242 reviews and a total of 452 complaints (79.9% of total complaints). The difference in the number of complaints between patients after surgery and patients without surgery was statistically significant (P < 0.05) for all categories except uncontrolled pain, delay in care, bedside manner of midlevel staff, and facilities. Conclusion Our study of extremely negative Yelp reviews found that 77% of complaints were nonclinical in nature. The most common clinical complaints were complications, misdiagnoses, and uncontrolled pain. Only 16% of 1-star reviews were from surgical patients. Clinical Relevance Patients use online review platforms when choosing surgeons, so a comprehensive understanding of the factors affecting patient satisfaction and dissatisfaction is needed. The results of our study could be used to guide future quality-improvement measures and to assist surgeons in maintaining favorable online reputations.
8
Shah AM, Muhammad W, Lee K, Naqvi RA. Examining Different Factors in Web-Based Patients' Decision-Making Process: Systematic Review on Digital Platforms for Clinical Decision Support System. Int J Environ Res Public Health 2021;18:ijerph182111226. PMID: 34769745; PMCID: PMC8582809; DOI: 10.3390/ijerph182111226.
Abstract
(1) Background: The emergence of physician rating websites (PRWs) has raised researchers' interest in the online healthcare field, particularly in how users consume the information available on PRWs, in the form of online physician reviews and provider information, in their decision-making process. The aim of this study is to systematically review the early scientific literature related to digital healthcare platforms, summarize key findings and study features, identify deficiencies in the literature, and suggest digital solutions for future research. (2) Methods: A systematic literature review of key databases was conducted for articles published between 2010 and 2020, identifying 52 peer-reviewed papers that focused on PRWs, the different signals available as PRW features, and the findings of these studies. The research features and main findings are reported in tables and figures. (3) Results: The review of 52 papers identified 22 articles on online reputation, 15 on service popularity, 16 on linguistic features, 15 on doctor-patient concordance, 7 on offline reputation, and 11 on trustworthiness signals. Of the 52 studies, 75% used quantitative techniques, 12% employed qualitative techniques, and 13% were mixed-methods investigations. The majority of studies retrieved larger datasets using machine learning techniques (44/52). These studies were mostly conducted in China (38), the United States (9), and Europe (3). The majority of signals were positively related to clinical outcomes. Few studies used conventional surveys of patient treatment experience (5, 9.6%), and few used panel data (9, 17%). These studies found a high degree of correlation between these signals and clinical outcomes. (4) Conclusions: PRWs contain valuable signals that provide insights into service quality and patient treatment choice, yet they have not been extensively used for evaluating the quality of care. This study offers implications for researchers to consider digital solutions such as advanced machine learning and data mining techniques to test hypotheses regarding the variety of signals on PRWs for clinical decision-making.
Affiliation(s)
- Adnan Muhammad Shah
- Department of Computing Engineering, Gachon University, Seoul 13120, Korea
- Department of Physics, Charles E. Schmidt College of Science, Florida Atlantic University, Boca Raton, FL 33431-0991, USA
- Department of Management Sciences, Shaheed Zulfikar Ali Bhutto Institute of Science and Technology, Islamabad 44320, Pakistan
- Wazir Muhammad
- Department of Physics, Charles E. Schmidt College of Science, Florida Atlantic University, Boca Raton, FL 33431-0991, USA
- Kangyoon Lee
- Department of Computing Engineering, Gachon University, Seoul 13120, Korea
- Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Korea
9
Zillioux J, Pike CW, Sharma D, Rapp DE. Analysis of Online Urologist Ratings: Are Rating Differences Associated With Subspecialty? J Patient Exp 2021;7:1062-1067. PMID: 33457546; PMCID: PMC7786750; DOI: 10.1177/2374373520951901.
Abstract
Patients are increasingly using online rating websites to obtain information about physicians and to provide feedback. We performed an analysis of urologist online ratings, with specific focus on the relationship between overall rating and urologist subspecialty. We conducted an analysis of urologist ratings on Healthgrades.com. Ratings were sampled across 4 US geographical regions, with focus across 3 practice types (large and small private practice, academic) and 7 urologic subspecialties. Statistical analysis was performed to assess for differences among subgroup ratings. Data were analyzed for 954 urologists with a mean age of 53 (±10) years. The median overall urologist rating was 4.0 [3.4-4.7]. Providers in an academic practice type or robotics/oncology subspecialty had statistically significantly higher ratings when compared to other practice settings or subspecialties (P < 0.001). All other comparisons between practice types, specialties, regions, and sexes failed to demonstrate statistically significant differences. In our study of online urologist ratings, robotics/oncology subspecialty and academic practice setting were associated with higher overall ratings. Further study is needed to assess reasons underlying this difference.
Affiliation(s)
- C William Pike
- Georgetown University School of Medicine, Washington, DC, USA
- Devang Sharma
- Department of Urology, University of Virginia, VA, USA
- David E Rapp
- Department of Urology, University of Virginia, VA, USA
10
Cross-sectional analysis of online patient reviews of infertility care providers. F S Rep 2020;1:282-286. PMID: 34223257; PMCID: PMC8244325; DOI: 10.1016/j.xfre.2020.07.004.
Abstract
Objective To observe the effects of practice type, location, and mandated insurance coverage on patients' online reviews of infertility physicians. Design Retrospective cohort study. Setting Not applicable. Patient(s) Patient online reviews of fertility specialists from 2016 to 2019. Intervention(s) None. Main Outcome Measure(s) The analysis consisted of the average rating out of 5 for each physician published on Vitals, RateMDs, and Healthgrades. Result(s) Data were collected on 1,097 specialists. Physicians practicing in states with versus without mandated insurance coverage received an average rating of 4.093 versus 4.076, respectively. The average rating was 3.964 for physicians affiliated with a university or hospital versus 4.128 for those working in a private practice. Significant differences were found in physician ratings across the four regions: physicians who practiced in the South (n = 354) received significantly higher mean ratings than those in the Northeast (n = 327) and Midwest (n = 175), and physicians practicing in the West (n = 241) received significantly higher ratings than those in the Midwest. Conclusion(s) The average online patient rating of infertility specialists was significantly higher for physicians working in a private practice than for those affiliated with a university or hospital system. No significant difference was found between average ratings in states with versus without mandated insurance coverage for infertility treatment. We propose that qualities other than patient financial responsibility are implicated in the factors used to rate physicians.
11
Kast K, Emmert M, Maier CB. [Public Reporting on Long-term Care Facilities in Germany: Current State and Evaluation of Quality Information]. Gesundheitswesen 2020;83:809-817. German. PMID: 32588407; DOI: 10.1055/a-1160-5720.
Abstract
OBJECTIVES Little is known about public reporting on long-term care facilities. In this study, we (1) identify the websites available for a search on long-term care facilities in Germany, (2) describe them systematically with regard to general information and range of functions, (3) capture the quality information available on the websites, and (4) evaluate the extent to which they can be useful for those in need of care. METHODS (1) Systematic internet search to identify the websites. (2) Analysis of the websites with regard to defined inclusion and exclusion criteria. (3) Data collection from the included websites. (4) Description of the general content and the range of functions of the websites. (5) Collection of quality-related information on long-term care facilities (structure, process and outcome quality, costs, quality inspection results, user feedback). (6) Evaluation of the usefulness of the information using a catalogue of criteria. RESULTS A total of 24 websites with information on long-term care facilities were identified. Only 4 websites allowed a direct online comparison of several facilities, and 17% allowed consumer feedback online. All websites provided information on structural quality, but none on outcome quality. Across all websites, the usefulness of information for consumers amounted to 19%. The thematic area on location and accessibility of a facility offered relatively detailed information (79%), whereas only 9% was dedicated to the thematic area on care. CONCLUSION There is a large number of websites that can be searched for information on long-term care facilities. They show a heterogeneous range of functions and information. More websites should offer a function for comparing multiple facilities. With regard to the information available, consumer preferences do not yet seem to be sufficiently taken into account. Further research should focus on evaluating the impact of outcome quality on decision-making and analyzing the validity of consumer feedback.
Affiliation(s)
- Kristina Kast
- Lehrstuhl für Gesundheitsmanagement, Friedrich-Alexander-Universität Erlangen-Nürnberg, Fachbereich Wirtschaftswissenschaften, Nürnberg
- Martin Emmert
- Lehrstuhl für Gesundheitsmanagement, Friedrich-Alexander-Universität Erlangen-Nürnberg, Fachbereich Wirtschaftswissenschaften, Nürnberg
- Claudia Bettina Maier
- Fachgebiet Management im Gesundheitswesen, Technische Universität Berlin, Fakultät Wirtschaft und Management, Berlin
12
Klietz ML, Kaiser HW, Machens HG, Aitzetmüller MM. Social Media Marketing: What Do Prospective Patients Want to See? Aesthet Surg J 2020; 40:577-583. [PMID: 31361806 DOI: 10.1093/asj/sjz204] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Platforms such as Instagram, Facebook, Twitter, and Google+ have created a worldwide audience of almost 3 billion people. Society is changing dramatically, demanding that plastic surgeons and aesthetic doctors alike evolve their marketing strategies. This unknown territory provides excellent opportunities but creates many pitfalls as well; uncertainty remains as to the most effective manner of promoting one's practice and services. OBJECTIVES The aim of this study was to design a social experiment based on Instagram to give guidance for efficient self-promotion. METHODS An Instagram account called "doctor.aesthetics" was created. Content was produced and categorized into 4 groups: Aesthetics, Private Life, Disease, and Science. No bots or other Instagram-based promotion were utilized. Every post was evaluated regarding likes, comments, clicks, new followers, impressions, and saves. RESULTS After 5 months and 37 posts, 10,500 people followed the account. "Scientific" posts were excluded from the analysis due to a low response rate. A significantly higher number of likes was found for "Private" postings. Additionally, "Private" posts led to the most clicks and new followers, whereas "Aesthetics" posts were saved by the most people. CONCLUSIONS To benefit the most from social media advertising, it is necessary to offer insights into private life. Although "Aesthetics" and "Disease" postings showed similar response rates, "Scientific" posts failed to attract people.
Affiliation(s)
- Marie-Luise Klietz
- Department for Plastic, Reconstructive, Aesthetic, and Hand Surgery, Fachklinik Hornheide, Münster, Germany
- Hans-Günther Machens
- Department of Plastic Surgery and Hand Surgery, Klinikum rechts der Isar der Technischen Universität München, München, Germany
- Matthias Michael Aitzetmüller
- Department of Plastic Surgery and Hand Surgery, Klinikum rechts der Isar der Technischen Universität München, München, Germany
- Department of Trauma, Hand, and Reconstructive Surgery, University Hospital Münster, Münster, Germany
13
Powell J, Atherton H, Williams V, Mazanderani F, Dudhwala F, Woolgar S, Boylan AM, Fleming J, Kirkpatrick S, Martin A, van Velthoven M, de Iongh A, Findlay D, Locock L, Ziebland S. Using online patient feedback to improve NHS services: the INQUIRE multimethod study. HEALTH SERVICES AND DELIVERY RESEARCH 2019. [DOI: 10.3310/hsdr07380] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Background
Online customer feedback has become routine in many industries, but it has yet to be harnessed for service improvement in health care.
Objectives
To identify the current evidence on online patient feedback; to identify public and health professional attitudes and behaviour in relation to online patient feedback; to explore the experiences of patients in providing online feedback to the NHS; and to examine the practices and processes of online patient feedback within NHS trusts.
Design
A multimethod programme of five studies: (1) evidence synthesis and stakeholder consultation; (2) questionnaire survey of the public; (3) qualitative study of patients’ and carers’ experiences of creating and using online comment; (4) questionnaire surveys and a focus group of health-care professionals; and (5) ethnographic organisational case studies with four NHS secondary care provider organisations.
Setting
The UK.
Methods
We searched bibliographic databases and conducted hand-searches to January 2018. Synthesis was guided by themes arising from consultation with 15 stakeholders. We conducted a face-to-face survey of a representative sample of the UK population (n = 2036) and 37 purposively sampled qualitative semistructured interviews with people with experience of online feedback. We conducted online surveys of 1001 quota-sampled doctors and 749 nurses or midwives, and a focus group with five allied health professionals. We conducted ethnographic case studies at four NHS trusts, with a researcher spending 6–10 weeks at each site.
Results
Many people (42% of internet users in the general population) read online feedback from other patients. Fewer people (8%) write online feedback, but when they do, one of their main reasons is to give praise. Most online feedback is positive in its tone, and people describe caring about the NHS and wanting to help it ('caring for care'). They also want their feedback to elicit a response as part of a conversation. Many professionals, especially doctors, are cautious about online feedback, believing it to be mainly critical and unrepresentative, and rarely encourage it. From an NHS trust perspective, online patient feedback is creating new forms of response-ability (organisations needing the infrastructure to address multiple channels and increasing amounts of online feedback) and responsivity (ensuring responses are swift and publicly visible).
Limitations
This work provides only a cross-sectional snapshot of a fast-emerging phenomenon. Questionnaire surveys can be limited by response bias. The quota sample of doctors and volunteer sample of nurses may not be representative. The ethnographic work was limited in its interrogation of differences between sites.
Conclusions
Providing and using online feedback are becoming more common for patients who are often motivated to give praise and to help the NHS improve, but health organisations and professionals are cautious and not fully prepared to use online feedback for service improvement. We identified several disconnections between patient motivations and staff and organisational perspectives, which will need to be resolved if NHS services are to engage with this source of constructive criticism and commentary from patients.
Future work
Intervention studies could measure online feedback as an intervention for service improvement and longitudinal studies could examine use over time, including unanticipated consequences. Content analyses could look for new knowledge on specific tests or treatments. Methodological work is needed to identify the best approaches to analysing feedback.
Study registration
The ethnographic case study work was registered as Current Controlled Trials ISRCTN33095169.
Funding
This project was funded by the National Institute for Health Research (NIHR) Health Services and Delivery Research programme and will be published in full in Health Services and Delivery Research; Vol. 7, No. 38. See the NIHR Journals Library website for further project information.
Affiliation(s)
- John Powell
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Helen Atherton
- Unit of Academic Primary Care, Warwick Medical School, University of Warwick, Coventry, UK
- Veronika Williams
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Fadhila Mazanderani
- School of Social and Political Science, University of Edinburgh, Edinburgh, UK
- Farzana Dudhwala
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Steve Woolgar
- Saïd Business School, University of Oxford, Oxford, UK
- Department of Thematic Studies, Linköping University, Linköping, Sweden
- Anne-Marie Boylan
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Joanna Fleming
- Unit of Academic Primary Care, Warwick Medical School, University of Warwick, Coventry, UK
- Susan Kirkpatrick
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Angela Martin
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Louise Locock
- Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Sue Ziebland
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
14
Pike CW, Zillioux J, Rapp D. Online Ratings of Urologists: Comprehensive Analysis. J Med Internet Res 2019; 21:e12436. [PMID: 31267982 PMCID: PMC6632102 DOI: 10.2196/12436] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2018] [Revised: 03/23/2019] [Accepted: 05/02/2019] [Indexed: 12/14/2022] Open
Abstract
Background Physician-rating websites are being increasingly used by patients to help guide physician choice. As such, an understanding of these websites and factors that influence ratings is valuable to physicians. Objective We sought to perform a comprehensive analysis of online urology ratings information, with a specific focus on the relationship between number of ratings or comments and overall physician rating. Methods We analyzed urologist ratings on the Healthgrades website. The data retrieval focused on physician and staff ratings information. Our analysis included descriptive statistics of physician and staff ratings and correlation analysis between physician or staff performance and overall physician rating. Finally, we performed a best-fit analysis to assess for an association between number of physician ratings and overall rating. Results From a total of 9921 urology profiles analyzed, there were 99,959 ratings and 23,492 comments. Most ratings were either 5 (“excellent”) (67.53%, 67,505/99,959) or 1 (“poor”) (24.22%, 24,218/99,959). All physician and staff performance ratings demonstrated a positive and statistically significant correlation with overall physician rating (P<.001 for all analyses). Best-fit analysis demonstrated a negative relationship between number of ratings or comments and overall rating until physicians achieved 21 ratings or 6 comments. Thereafter, a positive relationship was seen. Conclusions In our study, a dichotomous rating distribution was seen with more than 90% of ratings being either excellent or poor. A negative relationship between number of ratings or comments and overall rating was initially seen, after which a positive relationship was demonstrated. Combined, these data suggest that physicians can benefit from understanding online ratings and that proactive steps to encourage patient rating submissions may help optimize overall rating.
Affiliation(s)
- C William Pike
- Georgetown University School of Medicine, Washington, DC, United States
- Jacqueline Zillioux
- Department of Urology, University of Virginia Medical Center, Charlottesville, VA, United States
- David Rapp
- Department of Urology, University of Virginia Medical Center, Charlottesville, VA, United States
15
Li S, Hubner A. The Impact of Web-Based Ratings on Patient Choice of a Primary Care Physician Versus a Specialist: Randomized Controlled Experiment. J Med Internet Res 2019; 21:e11188. [PMID: 31254337 PMCID: PMC6625218 DOI: 10.2196/11188] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2018] [Revised: 01/04/2019] [Accepted: 05/11/2019] [Indexed: 11/13/2022] Open
Abstract
Background Physician review websites have empowered prospective patients to acquire information about physicians. However, little is known about how Web-based ratings on different aspects of a physician may affect patients’ selection of physicians differently. Objective The objectives of this study were to examine (1) how patients weigh ratings on a physician’s technical skills and interpersonal skills in their selection of physicians and (2) whether and how people’s choice of a primary care physician versus a specialist is affected differently by Web-based ratings. Methods A 2×2×2×2 between-subjects experiment was conducted. Over 600 participants were recruited through a crowdsourcing website and randomly assigned to view a mockup physician review Web page that contained a physician’s basic information and patients’ ratings. After reviewing the Web page, participants were asked to complete a survey on their perceptions of the physician and willingness to seek health care from the physician. Results The results showed that participants were more willing to choose a physician with higher ratings on technical skills than on interpersonal skills, compared with a physician with higher ratings on interpersonal skills than on technical skills, t(369.96)=22.36, P<.001, Cohen d=1.22. In the selection of different types of physicians, patients were more likely to choose a specialist with higher ratings on technical skills than on interpersonal skills, compared with a primary care physician with the same ratings, F(1,521)=5.34, P=.021. Conclusions The findings suggest that people place more weight on technical skills than on interpersonal skills when selecting a physician based on Web-based ratings. Specifically, people are more likely to compromise on interpersonal skills in their choice of a specialist compared with a primary care physician.
This study emphasizes the importance of examining Web-based physician ratings in a more nuanced way in relation to the selection of different types of physicians. Trial Registration ISRCTN Registry ISRCTN91316463; http://www.isrctn.com/ISRCTN91316463
Affiliation(s)
- Siyue Li
- College of Media and International Culture, Zhejiang University, Hangzhou, China
- Austin Hubner
- School of Communication, The Ohio State University, Columbus, OH, United States
16
Hong YA, Liang C, Radcliff TA, Wigfall LT, Street RL. What Do Patients Say About Doctors Online? A Systematic Review of Studies on Patient Online Reviews. J Med Internet Res 2019; 21:e12521. [PMID: 30958276 PMCID: PMC6475821 DOI: 10.2196/12521] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2018] [Revised: 12/16/2018] [Accepted: 01/31/2019] [Indexed: 01/20/2023] Open
Abstract
Background The number of patient online reviews (PORs) has grown significantly, and PORs have played an increasingly important role in patients’ choice of health care providers. Objective The objective of our study was to systematically review studies on PORs, summarize the major findings and study characteristics, identify literature gaps, and make recommendations for future research. Methods A major database search was completed in January 2019. Studies were included if they (1) focused on PORs of physicians and hospitals, (2) reported qualitative or quantitative results from analysis of PORs, and (3) were peer-reviewed empirical studies. Study characteristics and major findings were synthesized using predesigned tables. Results A total of 63 studies (69 articles) that met the above criteria were included in the review. Most studies (n=48) were conducted in the United States, including Puerto Rico, and the remaining were from Europe, Australia, and China. Earlier studies (published before 2010) used content analysis with small sample sizes; more recent studies retrieved and analyzed larger datasets using machine learning technologies. The number of PORs ranged from fewer than 200 to over 700,000. About 90% of the studies focused on clinicians, typically specialists such as surgeons; 27% covered health care organizations, typically hospitals; and some studied both. A majority of PORs were positive, and patients’ comments on their providers were favorable. Although most studies were descriptive, some compared PORs with traditional surveys of patient experience and found a high degree of correlation, while others compared PORs with clinical outcomes and found a low level of correlation. Conclusions PORs contain valuable information that can generate insights into quality of care and the patient-provider relationship, but they have not been systematically used for studies of health care quality.
With the advancement of machine learning and data analysis tools, we anticipate more research on PORs based on testable hypotheses and rigorous analytic methods. Trial Registration International Prospective Register of Systematic Reviews (PROSPERO) CRD42018085057; https://www.crd.york.ac.uk/PROSPERO/display_record.php?RecordID=85057 (Archived by WebCite at http://www.webcitation.org/76ddvTZ1C)
Affiliation(s)
- Y Alicia Hong
- Department of Health Administration and Policy, George Mason University, Fairfax, VA, United States; School of Public Health, Texas A&M University, College Station, TX, United States
- Chen Liang
- Arnold School of Public Health, University of South Carolina, Columbia, SC, United States
- Tiffany A Radcliff
- School of Public Health, Texas A&M University, College Station, TX, United States
- Lisa T Wigfall
- Department of Health and Kinesiology, Texas A&M University, College Station, TX, United States
- Richard L Street
- Department of Communication, Texas A&M University, College Station, TX, United States
17
Analysis of Internet Review Site Comments for Spine Surgeons: How Office Staff, Physician Likeability, and Patient Outcome Are Associated With Online Evaluations. Spine (Phila Pa 1976) 2018; 43:1725-1730. [PMID: 29975328 DOI: 10.1097/brs.0000000000002740] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
STUDY DESIGN Observational study. OBJECTIVE To evaluate how online patient comments affect website ratings for spine surgeons. SUMMARY OF BACKGROUND DATA With the ever-growing utilization of physician review websites, healthcare consumers are assuming more control over whom they choose for care. We evaluated patient feedback and satisfaction scores of spine surgeons using comments from three leading physician rating websites: Healthgrades.com, Vitals.com, and Google.com. This is the largest review of online comments and the largest review of spine surgeon comments. METHODS From the North American Spine Society (NASS) membership directory, 210 spine surgeons practicing in Florida (133 orthopedic trained; 77 neurosurgery trained) with online comments available for review were identified, yielding 4701 patient comments. These were categorized according to subject: (1) surgeon competence; (2) surgeon likeability/character; (3) office staff, ease of scheduling, and office environment. Type 1 and 2 comments were surgeon-dependent factors, whereas type 3 comments were surgeon-independent factors. Patient comments also reported a score (1-5), with 5 being the most favorable and 1 the least favorable. RESULTS There were 1214 (25.8%) comments from Healthgrades, 2839 (60.4%) from Vitals, and 648 (13.8%) from Google. Of all comments, 89.9% (4225) pertained to surgeon outcomes and likeability (comment types 1 and 2), compared with 10.1% (476) that were surgeon-independent (comment type 3) (P < 0.0001). There was a significantly higher number of favorable ratings associated with surgeon-dependent comments (types 1 and 2) compared with surgeon-independent comments (type 3). Surgeon-independent comments were associated with significantly lower scores compared with comments regarding surgeon-dependent factors on all review sites.
CONCLUSION Spine surgeons are more likely to receive favorable reviews for factors pertaining to outcomes and likeability/character, and negative reviews based on ancillary staff interactions, billing, and office environment. Surgeons should continue to take an active role in modifying factors patients perceive as negative, even if not directly related to the physician. LEVEL OF EVIDENCE 3.
18
Möhlhenrich SC, Wurbs M, Modabber A, Wolf M, Huber F, Fritz U. Influence of physician evaluation portals on orthodontist selection by patients. J Orofac Orthop 2018; 79:403-411. [PMID: 30187082 DOI: 10.1007/s00056-018-0154-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2017] [Accepted: 07/27/2018] [Indexed: 10/28/2022]
Abstract
OBJECTIVE This survey aimed to determine the influence of physician evaluation portals (PEP) on a patient's choice of physicians, particularly orthodontists. MATERIALS AND METHODS Questionnaires were used to collect sociodemographic data, reasons for orthodontist selection, and type of Internet use, as well as information on the knowledge, use and evaluation of 14 popular PEPs. A total of 506 questionnaires were evaluated, and a descriptive statistical evaluation was conducted using the χ2 test. RESULTS The majority of the respondents selected orthodontists on the basis of personal recommendations by other physicians (35%), family/friends (33%) or patient referral (14%). Currently, the most popular portals in Germany, which are mostly found through Internet searches, are jameda.de (36%) and arztauskunft.de (19%). A total of 5% of the respondents had already used a PEP to evaluate a physician. Moreover, 70% of the respondents described PEPs as helpful, 28% as recommendable and 2% use PEPs regularly. Knowledge of PEPs is correlated with the level of educational attainment (p = 0.024) and the frequency of Internet use (p < 0.001). CONCLUSION PEPs have little influence on the selection of healthcare providers, particularly orthodontists. Patients select physicians on the basis of personal recommendations. Physicians' concerns about negative evaluations on PEPs are unfounded given the low level of awareness of PEPs among the general populace.
Affiliation(s)
- Stephan Christian Möhlhenrich
- Department of Orthodontics and Dentofacial Orthopedics, University Hospital of the RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany.
- Matthias Wurbs
- Private practice for orthodontics, Brauerstraße 8, 66663, Merzig, Germany
- Ali Modabber
- Department of Oral and Maxillofacial Surgery, University Hospital of the RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Michael Wolf
- Department of Orthodontics and Dentofacial Orthopedics, University Hospital of the RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Frank Huber
- Chair of Business Administration and Marketing I, Johannes Gutenberg University of Mainz, Jakob-Welder-Weg 9, 55128, Mainz, Germany
- Ulrike Fritz
- Department of Orthodontics and Dentofacial Orthopedics, University Hospital of the RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
19
Douglas Lawson T, Tecson KM, Shaver CN, Barnes SA, Kavli S. The impact of informal leader nurses on patient satisfaction. J Nurs Manag 2018; 27:103-108. [DOI: 10.1111/jonm.12653] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2017] [Revised: 04/17/2018] [Accepted: 04/22/2018] [Indexed: 11/28/2022]
Affiliation(s)
- Kristen M. Tecson
- Baylor Heart and Vascular Institute, Dallas, Texas
- Baylor Scott & White Research Institute, Dallas, Texas
20
Burn MB, Lintner DM, Cosculluela PE, Varner KE, Liberman SR, McCulloch PC, Harris JD. Physician Rating Scales Do Not Accurately Rate Physicians. Orthopedics 2018; 41:e445-e456. [PMID: 29658974 DOI: 10.3928/01477447-20180409-06] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/23/2017] [Accepted: 07/31/2017] [Indexed: 02/03/2023]
Abstract
The purpose of this study was to determine the proportion of questions used by online physician rating scales to directly rate physicians themselves. A systematic review was performed of online, patient-reported physician rating scales. Fourteen websites were identified containing patient-reported physician rating scales, with the most common questions pertaining to office staff courtesy, wait time, overall rating (entered, not calculated), trust/confidence in physician, and time spent with patient. Overall, 28% directly rated the physician, 48% rated both the physician and the office, and 24% rated the office alone. There is great variation in the questions used, and most fail to directly rate physicians themselves. [Orthopedics. 2018; 41(4):e445-e456.].
21
Jack RA, Burn MB, McCulloch PC, Liberman SR, Varner KE, Harris JD. Does experience matter? A meta-analysis of physician rating websites of Orthopaedic Surgeons. Musculoskelet Surg 2018; 102:63-71. [PMID: 28853024 DOI: 10.1007/s12306-017-0500-1] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2017] [Accepted: 08/23/2017] [Indexed: 06/07/2023]
Abstract
PURPOSE To perform a systematic review evaluating online ratings of Orthopaedic Surgeons to determine: (1) the number of reviews per surgeon by website, (2) whether the number of reviews and rate of review acquisition correlated with years in practice, and (3) whether the use of ratings websites varied based on the surgeons' geographic region of practice. METHODS The USA was divided into nine geographic regions, and the most populous city in each region was selected. HealthGrades and the American Board of Orthopaedic Surgery (ABOS) database were used to identify and screen (respectively) all Orthopaedic Surgeons within each of these nine cities. These surgeons were divided into three "age" groups by years since board certification (0-10, 10-20, and 20-30 years were assigned as Groups 1, 2, and 3, respectively). An equal number of surgeons were randomly selected from each region for final analysis. The online profiles for each surgeon were reviewed on four online physician rating websites (PRW, i.e. HealthGrades, Vitals, RateMDs, Yelp) for the number of available patient reviews. Descriptive statistics, analysis of variance (ANOVA), and Pearson correlations were used. RESULTS Using HealthGrades, 2802 "Orthopaedic Surgeons" were identified in nine cities. However, 1271 (45%) of these were not found in the ABOS board certification database. After randomization, a total of 351 surgeons were included in the final analysis. For these 351 surgeons, the mean number of reviews per surgeon found on all four websites was 9.0 ± 14.8 (range 0-184). The mean number of reviews did not differ between the three age groups (p > 0.05), with 8.7 ± 14.4, 10.3 ± 18.3, and 8.0 ± 10.8 reviews for Groups 1, 2, and 3, respectively. However, the rate at which reviews were obtained (i.e. reviews per surgeon per year) was significantly higher (p < 0.001) in Group 1 (2.6 ± 7.7 reviews per year) compared to Group 2 (1.4 ± 2.4) and Group 3 (1.1 ± 1.4).
There was no correlation between the number of reviews and years in practice (R < 0.001), and there was a poor correlation between the number of reviews and regional population (R = 0.199). CONCLUSIONS The number of reviews per surgeon did not differ significantly between the three defined age groups based on years in practice. However, surgeons with less than 10 years in practice were accumulating reviews at a significantly higher rate. Interestingly, nearly half of the "Orthopaedic Surgeons" listed were not found to be ABOS-certified Orthopaedic Surgeons.
Affiliation(s)
- R A Jack
- Department of Orthopedics and Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Floor 25, Houston, TX, 77030, USA
- M B Burn
- Department of Orthopedics and Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Floor 25, Houston, TX, 77030, USA
- P C McCulloch
- Department of Orthopedics and Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Floor 25, Houston, TX, 77030, USA
- S R Liberman
- Department of Orthopedics and Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Floor 25, Houston, TX, 77030, USA
- K E Varner
- Department of Orthopedics and Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Floor 25, Houston, TX, 77030, USA
- J D Harris
- Department of Orthopedics and Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Floor 25, Houston, TX, 77030, USA
22
Yaraghi N, Wang W, Gao GG, Agarwal R. How Online Quality Ratings Influence Patients' Choice of Medical Providers: Controlled Experimental Survey Study. J Med Internet Res 2018; 20:e99. [PMID: 29581091 PMCID: PMC5891665 DOI: 10.2196/jmir.8986] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2017] [Revised: 11/21/2017] [Accepted: 12/10/2017] [Indexed: 11/19/2022] Open
Abstract
Background In recent years, the information environment for patients to learn about physician quality has been rapidly changed by Web-based ratings from both commercial and government efforts. However, little is known about how various types of Web-based ratings affect individuals’ choice of physicians. Objective The objective of this research was to measure the relative importance of Web-based quality ratings from governmental and commercial agencies on individuals’ choice of primary care physicians. Methods In a choice-based conjoint experiment conducted on a sample of 1000 Amazon Mechanical Turk users in October 2016, individuals were asked to choose their preferred primary care physician from pairs of physicians with different ratings in clinical and nonclinical aspects of care provided by governmental and commercial agencies. Results The relative log odds of choosing a physician increase by 1.31 (95% CI 1.26-1.37; P<.001) and 1.32 (95% CI 1.27-1.39; P<.001) units when the government clinical ratings and commercial nonclinical ratings move from 2 to 4 stars, respectively. The relative log odds of choosing a physician increase by 1.12 (95% CI 1.07-1.18; P<.001) units when the commercial clinical ratings move from 2 to 4 stars. The relative log odds of selecting a physician with 4 stars in nonclinical ratings provided by the government are 1.03 (95% CI 0.98-1.09; P<.001) units higher than for a physician with 2 stars in this rating. The log odds of selecting a physician with 4 stars in nonclinical government ratings relative to a physician with 2 stars are 0.23 (95% CI 0.13-0.33; P<.001) units higher for females compared with males. A similar star increase in nonclinical commercial ratings increases the relative log odds of selection by female respondents by 0.15 (95% CI 0.04-0.26; P=.006) units.
Conclusions Individuals perceive nonclinical ratings provided by commercial websites as important as clinical ratings provided by government websites when choosing a primary care physician. There are significant gender differences in how the ratings are used. More research is needed on whether patients are making the best use of different types of ratings, as well as the optimal allocation of resources in improving physician ratings from the government’s perspective.
Affiliation(s)
- Niam Yaraghi
- Department of Operations and Information Management, University of Connecticut, Stamford, CT, United States; Center for Technology Innovation, The Brookings Institution, Washington, DC, United States
- Weiguang Wang
- Department of Decision, Operations and Information Technologies, Robert H Smith School of Business, University of Maryland at College Park, College Park, MD, United States
- Guodong Gordon Gao
- Department of Decision, Operations and Information Technologies, Robert H Smith School of Business, University of Maryland at College Park, College Park, MD, United States
- Ritu Agarwal
- Department of Decision, Operations and Information Technologies, Robert H Smith School of Business, University of Maryland at College Park, College Park, MD, United States
23
Emmert M, Meszmer N, Schlesinger M. A cross-sectional study assessing the association between online ratings and clinical quality of care measures for US hospitals: results from an observational study. BMC Health Serv Res 2018; 18:82. [PMID: 29402321] [PMCID: PMC5800028] [DOI: 10.1186/s12913-018-2886-3]
Abstract
Background Little is known about the usefulness of online ratings when searching for a hospital. We therefore assess the association between quantitative and qualitative online ratings for US hospitals and clinical quality of care measures. Methods First, we collected a stratified random sample of 1000 quantitative and qualitative online ratings for hospitals from the website RateMDs. We used an integrated iterative approach to develop a categorization scheme to capture both the topics and sentiment in the narrative comments. Next, we matched the online ratings with hospital-level quality measures published by the Centers for Medicare and Medicaid Services. Regarding nominally scaled measures, we checked for differences in the distribution among the online rating categories. For metrically scaled measures, we applied the Spearman rank coefficient of correlation. Results Thirteen of the twenty-nine quality of care measures were significantly associated with the quantitative online ratings (Spearman ρ = ±0.143, p < 0.05 for all). Thereof, eight associations indicated better clinical outcomes for better online ratings. Seven of the twenty-nine clinical measures were significantly associated with the sentiment of patient narratives (ρ = ±0.114, p < 0.05 for all), whereof four associations indicated worse clinical outcomes in more favorable narrative comments. Conclusions There seems to be some association between quantitative online ratings and clinical performance measures. However, the relatively weak strength and inconsistent direction of the associations, as well as the lack of association with several other clinical measures, do not permit strong conclusions. Narrative comments in their current form also seem to have limited potential to reflect the clinical quality of care. Thus, online ratings are of limited usefulness in guiding patients towards high-performing hospitals from a clinical point of view.
Nevertheless, patients might prefer different aspects of care when choosing a hospital.
Affiliation(s)
- Martin Emmert
- 2014-15 Harkness & Robert Bosch Fellow in Healthcare Policy and Practice; Department of Health Policy and Management, Yale University, School of Public Health, 47 College Street, New Haven, CT, 06520, USA; Friedrich-Alexander-University Erlangen-Nuremberg, School of Business and Economics, Institute of Management (IFM), Lange Gasse 20, 90403, Nuremberg, Germany
- Nina Meszmer
- Friedrich-Alexander-University Erlangen-Nuremberg, School of Business and Economics, Institute of Management (IFM), Lange Gasse 20, 90403, Nuremberg, Germany; Chair of Health Care Management, Lange Gasse 20, 90403, Nuremberg, Germany
- Mark Schlesinger
- Yale University, School of Public Health, Room 304 LEPH, 60 College Street, New Haven, CT, 06520, USA
24
Zhang W, Deng Z, Hong Z, Evans R, Ma J, Zhang H. Unhappy Patients Are Not Alike: Content Analysis of the Negative Comments from China's Good Doctor Website. J Med Internet Res 2018; 20:e35. [PMID: 29371176] [PMCID: PMC5806007] [DOI: 10.2196/jmir.8223]
Abstract
BACKGROUND With the rise in popularity of Web 2.0 technologies, the sharing of patient experiences about physicians on online forums and medical websites has become a common practice. However, negative comments posted by patients are considered to be more influential by other patients and physicians than those that are satisfactory. OBJECTIVE The aim of this study was to analyze negative comments posted online about physicians and to identify possible solutions to improve patient satisfaction, as well as their relationship with physicians. METHODS A Java-based program was developed to collect patient comments on the Good Doctor website, one of the most popular online health communities in China. A total of 3012 negative comments concerning 1029 physicians (mean 2.93 [SD 4.14]) from 5 highly ranked hospitals in Beijing were extracted for content analysis. An initial coding framework was constructed with 2 research assistants involved in the codification. RESULTS Analysis, based on the collected 3012 negative comments, revealed that unhappy patients are not alike and that their complaints cover a wide range of issues experienced throughout the whole process of medical consultation. Among them, physicians in Obstetrics and Gynecology (606/3012, 20.12%; P=.001) and Internal Medicine (487/3012, 16.17%; P=.80) received the most negative comments. For negative comments per physician, Dermatology and Sexually Transmitted Diseases (mean 5.72, P<.001) and Andrology (mean 5, P=.02) ranked the highest. Complaints relating to insufficient medical consultation duration (577/3012, 19.16%), physician impatience (527/3012, 17.50%), and perceived poor therapeutic effect (370/3012, 12.28%) received the highest number of negative comments. Specific groups of people, such as those accompanying older patients or children, traveling patients, or very important person registrants, were shown to demonstrate little tolerance for poor medical service. 
CONCLUSIONS Analysis of online patient complaints provides an innovative approach to understand factors associated with patient dissatisfaction. The outcomes of this study could be of benefit to hospitals or physicians seeking to improve their delivery of patient-centered services. Patients are expected to be more understanding of overloaded physicians' workloads, which are impacted by China's stretched medical resources, as efforts are made to build more harmonious physician-patient relationships.
Affiliation(s)
- Wei Zhang
- Institute of Smart Health, School of Medicine and Health Management, Huazhong University of Science and Technology, Wuhan, China
- Zhaohua Deng
- Institute of Smart Health, School of Medicine and Health Management, Huazhong University of Science and Technology, Wuhan, China
- Ziying Hong
- Institute of Smart Health, School of Medicine and Health Management, Huazhong University of Science and Technology, Wuhan, China
- Richard Evans
- Department of Business Information Management and Operations, University of Westminster, London, United Kingdom
- Jingdong Ma
- Institute of Smart Health, School of Medicine and Health Management, Huazhong University of Science and Technology, Wuhan, China
- Hui Zhang
- School of Public Administration, Guangzhou University, Guangzhou, China
25
McLennan S, Strech D, Meyer A, Kahrass H. Public Awareness and Use of German Physician Ratings Websites: Cross-Sectional Survey of Four North German Cities. J Med Internet Res 2017; 19:e387. [PMID: 29122739] [PMCID: PMC5701087] [DOI: 10.2196/jmir.7581]
Abstract
Background Physician rating websites (PRWs) allow patients to rate, comment, and discuss physicians’ quality. The ability of PRWs to influence patient decision making and health care quality is dependent, in part, on sufficient awareness and usage of PRWs. However, previous studies have found relatively low levels of awareness and usage of PRWs, which has raised concerns about the representativeness and validity of information on PRWs. Objective The objectives of this study were to examine (1) participants’ awareness, use, and contribution of ratings on PRWs and how this compares with other rating websites; (2) factors that predict awareness, use, and contribution of ratings on PRWs; and (3) participants’ attitudes toward PRWs in relation to selecting a physician. Methods A mailed cross-sectional survey was sent to a random sample (N=1542) from four North German cities (Nordhorn, Hildesheim, Bremen, and Hamburg) between April and July 2016. Survey questions explored respondents’ awareness, use, and contribution of ratings on rating websites for service (physicians, hospitals, and hotels and restaurants) and products (media and technical) in general and the role of PRWs when searching for a new physician. Results A total of 280 completed surveys were returned (280/1542, 18.16% response rate), with the following findings: (1) Overall, 72.5% (200/276) of respondents were aware of PRWs. Of the respondents who were aware of PRWs, 43.6% (86/197) had used PRWs. Of the respondents who had used PRWs, 23% (19/83) had rated physicians at least once. Awareness, use, and contribution of ratings on PRWs were significantly lower in comparison with all other rating websites, except for hospital rating websites. (2) Except for the impact of responders’ gender and marital status on the awareness of PRWs and responders’ age on the use of PRWs, no other predictors had a relevant impact. 
(3) Whereas 31.8% (85/267) of the respondents reported that PRWs were a very important or somewhat important information source when searching for a new physician, respondents significantly more often reported that family, friends and colleagues (259/277, 93.5%), other physicians (219/274, 79.9%), and practice websites (108/266, 40.6%) were important information sources. Conclusions Whereas awareness of German PRWs appears to have substantially increased, the use of PRWs and the contribution of ratings remain relatively low. Further research is needed to examine the reasons why only a few patients are rating physicians. However, given that the information inequality between provider and consumer will always be greater for consumers using the services of physicians, it is possible that people will always rely more on interpersonal recommendations than on impersonal public information when selecting a physician.
Affiliation(s)
- Stuart McLennan
- Institute for Biomedical Ethics, Universität Basel, Basel, Switzerland; Institute for History, Ethics and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Daniel Strech
- Institute for History, Ethics and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Andrea Meyer
- Division of Clinical Psychology and Epidemiology, Department of Psychology, Universität Basel, Basel, Switzerland
- Hannes Kahrass
- Institute for History, Ethics and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
26
Online Ratings of ASOPRS Surgeons: What Do Your Patients Really Think of You? Ophthalmic Plast Reconstr Surg 2017; 33:466-470. [DOI: 10.1097/iop.0000000000000829]
27
Ramkumar PN, Navarro SM, Chughtai M, La T, Fisch E, Mont MA. The Patient Experience: An Analysis of Orthopedic Surgeon Quality on Physician-Rating Sites. J Arthroplasty 2017; 32:2905-2910. [PMID: 28455178] [DOI: 10.1016/j.arth.2017.03.053]
Abstract
BACKGROUND With the advent of the Consensus Core of Orthopedic Measures, arthroplasty surgeons are increasingly subjected to public performance reviews on physician-rating sites. Therefore, we evaluated (1) web site details of physician-rating sites, (2) differences between sites and the Consensus Core, (3) published patient experiences, (4) search rank among sites, and (5) differences between academic vs nonacademic and arthroplasty vs nonarthroplasty surgeons. METHODS The 5 busiest physician-rating sites were analyzed. To compare physician-rating sites to the Consensus Core, 3 reviewers analyzed the web site details. To evaluate patient ratings and reviews, orthopedists from the top 5 academic and nonacademic hospitals (2016 US News & World Report) were analyzed. Institution-produced rating sites were also analyzed. Findings were stratified between academic vs nonacademic and arthroplasty vs nonarthroplasty surgeons. Five hundred and six staff surgeons across 10 academic and nonacademic affiliated hospitals yielded 27,792 patient-generated ratings and reviews for 1404 accounts. RESULTS Features on all sites were practice location, languages spoken, and patient experience. Two sites autogenerated profiles of surgeons without consent. No physician-rating site contained all Consensus Core domains. The composite orthopedic surgeon rating was 4.1 of 5. No significant differences were found between academic and nonacademic affiliated surgeons. Arthroplasty surgeons had a greater number of reviews and ratings on 2 sites. CONCLUSION Reliability of physician-rating sites is questionable, as none contained all Consensus Core domains. Autogeneration of surgeon profiles is occurring, and no differences between academic vs nonacademic or arthroplasty vs nonarthroplasty surgeons were found. Institution-produced sites may serve to better promote and market surgeons.
Affiliation(s)
- Prem N Ramkumar
- Department of Orthopedic Surgery, Cleveland Clinic, Cleveland, Ohio
- Sergio M Navarro
- Department of Orthopedic Surgery, Baylor College of Medicine, Houston, Texas
- Morad Chughtai
- Department of Orthopedic Surgery, Cleveland Clinic, Cleveland, Ohio
- Ton La
- Department of Orthopedic Surgery, Baylor College of Medicine, Houston, Texas
- Evan Fisch
- Department of Orthopedic Surgery, Baylor College of Medicine, Houston, Texas
- Michael A Mont
- Department of Orthopedic Surgery, Cleveland Clinic, Cleveland, Ohio
28
McLennan S, Strech D, Reimann S. Developments in the Frequency of Ratings and Evaluation Tendencies: A Review of German Physician Rating Websites. J Med Internet Res 2017; 19:e299. [PMID: 28842391] [PMCID: PMC5591403] [DOI: 10.2196/jmir.6599]
Abstract
Background Physician rating websites (PRWs) have been developed to allow all patients to rate, comment, and discuss physicians' quality online as a source of information for others searching for a physician. At the beginning of 2010, a sample of 298 randomly selected physicians from the physician associations in Hamburg and Thuringia were searched for on 6 German PRWs to examine the frequency of ratings and evaluation tendencies. Objective The objective of this study was to examine (1) the number of identifiable physicians on German PRWs; (2) the number of rated physicians on German PRWs; (3) the average and maximum number of ratings per physician on German PRWs; (4) the average rating on German PRWs; (5) the website visitor ranking positions of German PRWs; and (6) how these data compare with 2010 results. Methods A random stratified sample of 298 selected physicians from the physician associations in Hamburg and Thuringia was generated. Every selected physician was searched for on the 6 PRWs (Jameda, Imedo, Docinsider, Esando, Topmedic, and Medführer) used in the 2010 study and on Arztnavigator, a PRW launched by Allgemeine Ortskrankenkasse (AOK). Results The results were as follows: (1) The proportion of physicians identified on the selected PRWs ranged from 65.1% (194/298) on Imedo to 94.6% (282/298) on AOK-Arztnavigator. (2) The proportion of the sample rated at least once ranged from 16.4% (49/298) on Esando to 83.2% (248/298) on Jameda. (3) The average number of ratings per physician ranged from 1.2 (Esando) to 7.5 (AOK-Arztnavigator). The maximum number of ratings per physician ranged from 3 (Esando) to 115 (Docinsider), indicating an increase compared with the range of 2 to 27 in the 2010 study sample. (4) The average converted standardized rating (1=positive, 2=neutral, and 3=negative) ranged from 1.0 (Medführer) to 1.2 (Jameda and Topmedic). (5) Only Jameda (position 317) and Medführer (position 9796) were placed among the top 10,000 visited websites in Germany.
Conclusions Whereas there has been an overall increase in the number of ratings when summing up ratings from all 7 analyzed German PRWs, this represents an average addition of only 4 new ratings per physician in a year. The increase has also not been even across the PRWs, and it would be advisable for the users of PRWs to utilize a number of PRWs to ascertain the rating of any given physician. Further research is needed to identify barriers for patients to rate their physicians and to assist efforts to increase the number of ratings on PRWs to consequently improve the fairness and practical importance of PRWs.
Affiliation(s)
- Stuart McLennan
- Institute for History, Ethics and Philosophy of Medicine, Hannover Medical School, Hannover, Germany; Institute for Biomedical Ethics, Universität Basel, Basel, Switzerland
- Daniel Strech
- Institute for History, Ethics and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Swantje Reimann
- Institute for History, Ethics and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
29
Emmert M, Sauter L, Jablonski L, Sander U, Taheri-Zadeh F. Do Physicians Respond to Web-Based Patient Ratings? An Analysis of Physicians' Responses to More Than One Million Web-Based Ratings Over a Six-Year Period. J Med Internet Res 2017; 19:e275. [PMID: 28747292] [PMCID: PMC5550732] [DOI: 10.2196/jmir.7538]
Abstract
Background Physician-rating websites (PRWs) may lead to quality improvements if they enable and establish peer-to-peer communication between patients and physicians. Yet, we know little about whether and how physicians respond on the Web to patient ratings. Objective The objective of this study was to describe trends in physicians' Web-based responses to patient ratings over time, to identify what physician characteristics influence Web-based responses, and to examine the topics physicians are likely to respond to. Methods We analyzed physician responses to more than 1 million patient ratings displayed on the German PRW, jameda, from 2010 to 2015. Quantitative analysis contained chi-square analyses and the Mann-Whitney U test. Quantitative content techniques were applied to determine the topics physicians respond to based on a randomly selected sample of 600 Web-based ratings and corresponding physician responses. Results Overall, physicians responded to 1.58% (16,640/1,052,347) of all Web-based ratings, with an increasing trend over time from 0.70% (157/22,355) in 2010 to 1.88% (6377/339,919) in 2015. Web-based ratings that were responded to had significantly worse rating results than ratings that were not responded to (2.15 vs 1.74, P<.001). Physicians who respond on the Web to patient ratings differ significantly from nonresponders regarding several characteristics such as gender and patient recommendation results (P<.001 each). Regarding scaled-survey rating elements, physicians were most likely to respond to the waiting time within the practice (19.4%, 99/509) and the time spent with the patient (18.3%, 110/600). Almost one-third of topics in narrative comments were answered by the physicians (30.66%, 382/1246). Conclusions So far, only a minority of physicians have taken the chance to respond on the Web to patient ratings.
This is likely because of (1) the low awareness of PRWs among physicians, (2) the fact that only a few PRWs enable physicians to respond on the Web to patient ratings, and (3) the lack of an active moderator to establish peer-to-peer communication. PRW providers should foster more frequent communication between the patient and the physician and encourage physicians to respond on the Web to patient ratings. Further research is needed to learn more about the motivation of physicians to respond or not respond to Web-based patient ratings.
Affiliation(s)
- Martin Emmert
- Institute of Management, School of Business and Economics, Health Services Management, Friedrich-Alexander-University Erlangen-Nuremberg, Nuremberg, Germany
- Lisa Sauter
- Institute of Management, School of Business and Economics, Health Services Management, Friedrich-Alexander-University Erlangen-Nuremberg, Nuremberg, Germany
- Lisa Jablonski
- Institute of Management, School of Business and Economics, Health Services Management, Friedrich-Alexander-University Erlangen-Nuremberg, Nuremberg, Germany
- Uwe Sander
- Media, Information and Design, Department of Information and Communication, University of Applied Sciences and Arts, Hannover, Germany
- Fatemeh Taheri-Zadeh
- Media, Information and Design, Department of Information and Communication, University of Applied Sciences and Arts, Hannover, Germany
30
Emmert M, Taheri-Zadeh F, Kolb B, Sander U. Public reporting of hospital quality shows inconsistent ranking results. Health Policy 2016; 121:17-26. [PMID: 27890391] [DOI: 10.1016/j.healthpol.2016.11.004]
Abstract
BACKGROUND Evidence from the US has demonstrated that hospital report cards might generate confusion for consumers who are searching for a hospital. So far, little is known regarding hospital ranking agreement on German report cards as well as the underlying factors creating disagreement. OBJECTIVE This study examined the consistency of hospital recommendations on German hospital report cards and discussed underlying reasons for differences. METHODS We compared hospital recommendations for three procedures on four German hospital report cards. The agreement between two report cards was determined by Cohen's kappa. Fleiss' kappa was applied to evaluate the overlap across all four report cards. RESULTS Overall, 43.4% of all hospitals were labeled equally as low, middle, or top performers on two report cards (hip replacement: 43.2%; knee replacement: 42.8%; percutaneous coronary intervention: 44.3%). In contrast, 8.5% of all hospitals were rated a top performer on one report card and a low performer on another. The inter-report card agreement was slight at best between two report cards (maximum κ = 0.148) and poor between all four report cards (maximum κ = 0.111). CONCLUSIONS To increase the benefit of public reporting, it seems important to increase transparency about the concept of medical "quality" that each report card represents. This would help patients and other consumers use the report cards that best match their individual preferences.
Affiliation(s)
- Martin Emmert
- Institute of Management (IFM), School of Business and Economics, Friedrich-Alexander-University Erlangen-Nuremberg, Health Services Management, Lange Gasse 20, 90403 Nuremberg, Germany
- Fatemeh Taheri-Zadeh
- Department of Information and Communication, Faculty for Media, Information and Design, University of Applied Sciences and Arts Hannover, Hannover, Germany
- Benjamin Kolb
- Department of Information and Communication, Faculty for Media, Information and Design, University of Applied Sciences and Arts Hannover, Hannover, Germany
- Uwe Sander
- Department of Information and Communication, Faculty for Media, Information and Design, University of Applied Sciences and Arts Hannover, Hannover, Germany
31
Liu JJ, Matelski J, Cram P, Urbach DR, Bell CM. Association Between Online Physician Ratings and Cardiac Surgery Mortality. Circ Cardiovasc Qual Outcomes 2016; 9:788-791. [DOI: 10.1161/circoutcomes.116.003016]
Affiliation(s)
- Jessica J. Liu, John Matelski, Peter Cram, David R. Urbach, Chaim M. Bell
- From the Department of Medicine (J.J.L., J.M., P.C., C.M.B.), Institute for Health Policy Management and Evaluation (D.R.U., C.M.B., P.C.), and Department of Surgery (D.R.U.), University of Toronto, ON, Canada; Division of Internal Medicine, Toronto General Hospital, Sinai Health System, University Health Network, ON, Canada (J.J.L., P.C.); Institute for Clinical Evaluative Sciences, Toronto, ON, Canada (D.R.U., C.M.B., P.C.); and Department of Medicine, Mount Sinai Hospital, Toronto, ON, Canada (C
32
Patel S, Cain R, Neailey K, Hooberman L. Exploring Patients' Views Toward Giving Web-Based Feedback and Ratings to General Practitioners in England: A Qualitative Descriptive Study. J Med Internet Res 2016; 18:e217. [PMID: 27496366] [PMCID: PMC4992166] [DOI: 10.2196/jmir.5865]
Abstract
Background Patient feedback websites, or doctor rating websites, are increasingly being used by patients to give feedback about their health care experiences. Little is known about why patients in England may give Web-based feedback and what may motivate or dissuade them from doing so. Objective The aim of this study was to explore patients' views toward giving Web-based feedback and ratings to general practitioners (GPs), within the context of other feedback methods available in primary care in England, and in particular, paper-based feedback cards. Methods A descriptive exploratory qualitative approach using face-to-face semistructured interviews was used in this study. Purposive sampling was used to recruit 18 participants from different age groups in London and Coventry. Interviews were transcribed verbatim and analyzed using applied thematic analysis. Results Half of the participants in this study were not aware of the opportunity to leave feedback for GPs, and there was limited awareness of the methods available to leave feedback for a GP. The majority of participants were not convinced that formal patient feedback was needed by GPs or would be used by GPs for improvement, regardless of whether they gave it via a website or on paper. Some participants said or suggested that they may leave feedback on a website rather than on a paper-based feedback card for several reasons: because of the ability and ease of giving it remotely; because it would be shared with the public; and because it would be taken more seriously by GPs. Others, however, suggested that they would not use a website to leave feedback for the opposite reasons: because of accessibility issues; privacy and security concerns; and because they felt feedback left on a website may be ignored. Conclusions Patient feedback and rating websites as they currently are will not replace other mechanisms for patients in England to leave feedback for a GP.
Rather, they may motivate a small number of patients, those with more altruistic motives or a wish to place collective pressure on a GP, to give Web-based feedback. If the National Health Service or GP practices want more patients to leave Web-based feedback, we suggest they first make patients aware that they can leave anonymous feedback securely on a website for a GP. They can then work to convince patients that their feedback is needed and wanted by GPs for improvement, and that the reviews they leave on the website will help other patients decide which GP to see or which GP practice to join.
33
Kleefstra SM, Zandbelt LC, Borghans I, de Haes HJCJM, Kool RB. Investigating the Potential Contribution of Patient Rating Sites to Hospital Supervision: Exploratory Results From an Interview Study in the Netherlands. J Med Internet Res 2016; 18:e201. [PMID: 27439392 PMCID: PMC4972989 DOI: 10.2196/jmir.5552] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2016] [Revised: 05/20/2016] [Accepted: 06/21/2016] [Indexed: 11/24/2022] Open
Abstract
Background Over the last decades, the patient perspective on health care quality has been unconditionally integrated into quality management. For several years now, patient rating sites have been rapidly gaining attention. These offer a new approach toward hearing the patient's perspective on the quality of health care. Objective The aim of our study was to explore whether and how patient reviews of hospitals, as reported on rating sites, have the potential to contribute to health care inspectors' daily supervision of hospital care. Methods Given the unexplored nature of the topic, an interview study among hospital inspectors was designed in the Netherlands. We performed 2 rounds of interviews with 10 senior inspectors, addressing their use and their judgment on the relevance of review data from a rating site. Results All 10 Dutch senior hospital inspectors participated in this research. The inspectors initially showed some reluctance to use the major patient rating site in their daily supervision. This was mainly because of objections such as worries about how representative the reviews are, their subjectivity, and doubts about their relevance for supervision. However, confrontation with, and assessment of, negative reviews by the inspectors resulted in 23% of the reviews being deemed relevant for risk identification. Most inspectors were cautiously positive about the contribution of the reviews to their risk identification. Conclusions Patient rating sites may be of value to the risk-based supervision of hospital care carried out by the Health Care Inspectorate. Health care inspectors do have several objections to the use of patient rating sites for daily supervision. However, when they are presented with texts of negative reviews from a hospital under their supervision, it appears that most inspectors consider them an additional source of information for detecting poor quality of care.
Still, such reviews should always be accompanied and verified by other quality and safety indicators. More research is needed on the value and usability of patient rating sites in daily hospital supervision and other health settings.
Affiliation(s)
- Sophia Martine Kleefstra
- Dutch Health Care Inspectorate, Department of Risk Detection and Development, Utrecht, Netherlands.
34
Kool RB, Kleefstra SM, Borghans I, Atsma F, van de Belt TH. Influence of Intensified Supervision by Health Care Inspectorates on Online Patient Ratings of Hospitals: A Multilevel Study of More Than 43,000 Online Ratings. J Med Internet Res 2016; 18:e198. [PMID: 27421302 PMCID: PMC4967180 DOI: 10.2196/jmir.5884] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2016] [Revised: 06/09/2016] [Accepted: 06/24/2016] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND In the Netherlands, hospitals with quality or safety issues are put under intensified supervision by the Dutch Health Care Inspectorate, which involves frequent announced and unannounced site visits and other measures. Patient rating sites are an emerging phenomenon in health care. Patient reviews might be influenced by perceived quality, including media coverage of health care providers when the inspectorate imposes intensified supervision, but no data are available to show how these are related. OBJECTIVE The aim of this study was to investigate whether and how being under intensified supervision of the health care inspectorate influences online patient ratings of hospitals. METHODS We performed a longitudinal study using data from the patient rating site Zorgkaart Nederland, from January 1, 2010 to December 31, 2015. We compared data from 7 hospitals under intensified supervision with a control group of 28 hospitals. The dataset contained 43,856 ratings. We performed a multilevel logistic regression analysis to account for clustering of ratings within hospitals. Fixed effects in our analysis were hospital type, time, and the period of intensified supervision. The random effect was the hospital. The outcome variable was the dichotomized rating score. RESULTS The period of intensified supervision was associated with low rating scores compared with control group hospitals; the differences were significant both 1 year before intensified supervision (odds ratio [OR] 1.67, 95% CI 1.06-2.63) and 1 year after (OR 1.79, 95% CI 1.14-2.81). For all periods, the odds of a low rating score for hospitals under intensified supervision were higher than for the control group hospitals, corrected for time. Time was also associated with low rating scores, with ORs decreasing over time since 2010. CONCLUSIONS Hospitals that are confronted with intensified supervision by the health care inspectorate have lower ratings on patient rating sites.
The scores are independent of the period: before, during, or just after the intervention by the health care inspectorate. Health care inspectorates might learn from these results, because they indicate that the inspectorate identifies as "at risk" the same hospitals that patients rate as underperformers.
Affiliation(s)
- Rudolf Bertijn Kool
- Radboud University Medical Center, Radboud Institute for Health Sciences, IQ Healthcare, Nijmegen, Netherlands.
35
Wiley MT, Rivas RL, Hristidis V. Provider attributes correlation analysis to their referral frequency and awards. BMC Health Serv Res 2016; 16:90. [PMID: 26975310 PMCID: PMC4790057 DOI: 10.1186/s12913-016-1338-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2015] [Accepted: 03/07/2016] [Indexed: 11/10/2022] Open
Abstract
Background There has been a recent growth in health provider search portals, where patients specify filters, such as specialty or insurance, and providers are ranked by patient ratings or other attributes. Previous work has identified attributes associated with a provider's quality through user surveys. Other work supports the idea that intuitive quality-indicating attributes are associated with a provider's quality. Methods We adopt a data-driven approach to study how quality indicators of providers are associated with a rich set of attributes including medical school, graduation year, procedures, fellowships, patient reviews, location, and technology usage. In this work, we only consider providers as individuals (e.g., general practitioners) and not organizations (e.g., hospitals). As quality indicators, we consider the referral frequency of a provider and a peer-nominated quality designation. We combined data from the Centers for Medicare and Medicaid Services (CMS) and several provider rating web sites to perform our analysis. Results Our data-driven analysis identified several attributes that correlate with, and are discriminative of, referral volume and peer-nominated awards. In particular, our results consistently demonstrate that these attributes vary by locality and that the frequency of an attribute is more important than its value (e.g., the number of patient reviews or hospital affiliations are more important than the average review rating or the ranking of the hospital affiliations, respectively). We demonstrate that it is possible to build accurate classifiers for referral frequency and quality designation, with accuracies over 85%. Conclusions Our findings show that a one-size-fits-all approach to ranking providers is inadequate and that provider search portals should calibrate their ranking function based on location and specialty.
Further, traditional filters of provider search portals should be reconsidered, and patients should be aware of existing pitfalls with these filters and educated on local factors that affect quality. These findings enable provider search portals to empower patients and to “load balance” patients between younger and older providers. Electronic supplementary material The online version of this article (doi:10.1186/s12913-016-1338-1) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Matthew T Wiley
- Department of Computer Science and Engineering, University of California, Riverside, CA, USA; SmartDocFinder LLC, 3499 10th Street, Riverside, CA, USA.
- Ryan L Rivas
- Department of Computer Science and Engineering, University of California, Riverside, CA, USA
- Vagelis Hristidis
- Department of Computer Science and Engineering, University of California, Riverside, CA, USA
36
Samora JB, Lifchez SD, Blazar PE. Physician-Rating Web Sites: Ethical Implications. J Hand Surg Am 2016; 41:104-10.e1. [PMID: 26304734 DOI: 10.1016/j.jhsa.2015.05.034] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/02/2015] [Revised: 05/07/2015] [Accepted: 05/08/2015] [Indexed: 02/02/2023]
Abstract
PURPOSE To understand the ethical and professional implications of physician behavior changes secondary to online physician-rating Web sites (PRWs). METHODS The American Society for Surgery of the Hand (ASSH) Ethics and Professionalism Committee surveyed the ASSH membership regarding PRWs. We sent a 14-item questionnaire to 2,664 active ASSH members who practice in both private and academic settings in the United States. RESULTS We received 312 responses, a 12% response rate. More than 65% of the respondents had a slightly or highly unfavorable impression of these Web sites. Only 34% of respondents had ever updated or created a profile for PRWs, although 62% had observed inaccuracies in their profile. Almost 90% of respondents had not made any changes in their practice owing to comments or reviews. One-third of respondents had solicited favorable reviews from patients, and 3% of respondents had paid to improve their ratings. CONCLUSIONS PRWs are going to become more prevalent, and more research is needed to fully understand the implications. There are several ethical implications that PRWs pose to practicing physicians. We contend that it is morally unsound to pay for good reviews. The recourse for physicians when an inaccurate and potentially libelous review has been written is unclear. Some physicians have required patients to sign a waiver preventing them from posting negative comments online. We propose the development of a task force to assess the professional, ethical, and legal implications of PRWs, including working with companies to improve accuracy of information, oversight, and feedback opportunities. CLINICAL RELEVANCE It is expected that PRWs will play an increasing role in the future; it is unclear whether there will be a uniform reporting system, or whether these online ratings will influence referral patterns and/or quality improvement.
Affiliation(s)
- Julie Balch Samora
- Department of Orthopaedic Surgery, Brigham and Women's Hospital, Boston, MA
- Scott D Lifchez
- Department of Plastic and Reconstructive Surgery, Johns Hopkins Bayview Medical Center, Baltimore, MD
- Philip E Blazar
- Department of Orthopaedic Surgery, Brigham and Women's Hospital, Boston, MA.
37
Abstract
OBJECTIVE To assess what is known about the relationship between patient experience measures and incentives designed to improve care, and to identify how public policy and medical practices can promote patient-valued outcomes in health systems with strong financial incentives. DATA SOURCES/STUDY SETTING Existing literature (gray and peer-reviewed) on measuring patient experience and patient-reported outcomes, identified from Medline and Cochrane databases; evaluations of pay-for-performance programs in the United States, Europe, and the Commonwealth countries. STUDY DESIGN/DATA COLLECTION We analyzed (1) studies of pay-for-performance, to identify those including metrics for patient experience, and (2) studies of patient experience and of patient-reported outcomes to identify evidence of influence on clinical practice, whether through public reporting or private reporting to clinicians. PRINCIPAL FINDINGS First, we identify four forms of "patient-reported information" (PRI), each with distinctive roles shaping clinical practice: (1) patient-reported outcomes measuring self-assessed physical and mental well-being, (2) surveys of patient experience with clinicians and staff, (3) narrative accounts describing encounters with clinicians in patients' own words, and (4) complaints/grievances signaling patients' distress when treatment or outcomes fall short of expectations. Because these forms vary in crucial ways, each must be distinctively measured, deployed, and linked with financial incentives. Second, although the literature linking incentives to patient experience is limited, implementing pay-for-performance systems appears to threaten certain patient-valued aspects of health care.
But incentives can be made compatible with the outcomes patients value if: (a) a sufficient portion of incentives is tied to patient-reported outcomes and experiences, (b) incentivized forms of PRI are complemented by other forms of patient feedback, and (c) health care organizations assist clinicians to interpret and respond to PRI. Finally, we identify roles for the public and private sectors in financing PRI and orchestrating an appropriate balance among its four forms. CONCLUSIONS Unless public policies are attentive to patients' perspectives, stronger financial incentives for clinicians can threaten aspects of care that patients most value. Certain policy parameters are already clear, but additional research is required to clarify how best to collect patient narratives in varied settings, how to report narratives to consumers in conjunction with quantified metrics, and how to promote a "culture of learning" at the practice level that incorporates patient feedback.
Affiliation(s)
- Mark Schlesinger
- Department of Health Policy and Management, Yale University School of Public Health, Room 304 LEPH, 60 College St, New Haven, CT 06520
- Rachel Grob
- Center for Patient Partnerships, UW Law School, University of Wisconsin-Madison, Madison, WI
- Department of Family Medicine, UW Medical School, University of Wisconsin-Madison, Madison, WI
38
Burkle CM, Keegan MT. Popularity of internet physician rating sites and their apparent influence on patients' choices of physicians. BMC Health Serv Res 2015; 15:416. [PMID: 26410383 PMCID: PMC4583763 DOI: 10.1186/s12913-015-1099-2] [Citation(s) in RCA: 67] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2015] [Accepted: 09/22/2015] [Indexed: 01/26/2023] Open
Abstract
Background There has been a substantial increase in the number of online health care grading sites that offer patient feedback on physicians, staff, and hospitals. Despite a growing interest among some consumers of medical services, most studies of Internet physician rating sites (IPRS) have restricted their analysis to sampling data from individual sites alone. Our objective was to explore the frequency with which patients visit and leave comments on IPRS, evaluate the nature of comments written, and quantify the influence that positive comments, negative comments, and physician medical malpractice history might have on patients' decisions to seek care from a particular physician. Methods One thousand consecutive patients visiting the Pre-Operative Evaluation (POE) Clinic at Mayo Clinic in Rochester, Minnesota, between June 2013 and October 2013 were surveyed using a written questionnaire. Results A total of 854 respondents completed the survey to some degree. A large majority (84%) stated that they had not previously visited an IPRS. Of those writing comments on an IPRS in the past, just over a third (36%) provided either unfavorable (9%) or a combination of favorable and unfavorable (27%) reviews of physician interactions. Among all respondents, 28.1% strongly agreed that a positive physician review alone on an IPRS would cause them to seek care from that practitioner. Similarly, 27% indicated that a negative IPRS review would cause them to choose against seeking care from that physician. Fewer than a third indicated that knowledge of a malpractice suit alone would negatively impact their decision to seek care from a physician. Whether a respondent had visited an IPRS in the past had no impact on the answers provided. Conclusions Few patients had visited IPRS, with a limited number reporting that information provided on these sites would play a significant role in their decision to seek care from a particular physician.
Affiliation(s)
- Mark T Keegan
- Department of Anesthesiology, Mayo Clinic, Rochester, MN, USA.
39
Emmert M, Adelhardt T, Sander U, Wambach V, Lindenthal J. A cross-sectional study assessing the association between online ratings and structural and quality of care measures: results from two German physician rating websites. BMC Health Serv Res 2015; 15:414. [PMID: 26404452 PMCID: PMC4582723 DOI: 10.1186/s12913-015-1051-5] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2014] [Accepted: 09/07/2015] [Indexed: 12/30/2022] Open
Abstract
Background Even though physician rating websites (PRWs) have been gaining in importance in both practice and research, little evidence is available on the association of patients' online ratings with the quality of care of physicians. It thus remains unclear whether patients should rely on these ratings when selecting a physician. The objective of this study was to measure the association between online ratings and structural and quality of care measures for 65 physician practices from the German Integrated Health Care Network "Quality and Efficiency" (QuE). Methods Online reviews covering a three-year period (2011 to 2013) were included from two German PRWs, comprising 1179 and 991 ratings, respectively. Information for 65 QuE practices was obtained for the year 2012 and included 21 measures related to structural information (N = 6), process quality (N = 10), intermediate outcomes (N = 2), patient satisfaction (N = 1), and costs (N = 2). Spearman's rank correlation coefficient was applied to measure the association between ratings and practice-related information. Results Patient satisfaction results from offline surveys and the patients per doctor ratio in a practice were shown to be significantly associated with online ratings on both PRWs. For one PRW, additional significant associations could be shown between online ratings and cost-related measures for medication, preventative examinations, and one diabetes type 2-related intermediate outcome measure. In contrast, results from the second PRW showed significant associations with the age of the physicians and the number of patients per practice, four process-related quality measures for diabetes type 2 and asthma, and one cost-related measure for medication. Conclusions Several significant associations were found which varied between the PRWs. Patients interested in the satisfaction of other patients with a physician might select a physician on the basis of online ratings.
Even though our results indicate associations with some diabetes and asthma measures, but not with coronary heart disease measures, there is still insufficient evidence to draw strong conclusions. The limited number of practices in our study may have weakened our findings. Electronic supplementary material The online version of this article (doi:10.1186/s12913-015-1051-5) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Martin Emmert
- Friedrich-Alexander-University Erlangen-Nuremberg, School of Business and Economics, Institute of Management (IFM), Lange Gasse 20, 90403, Nuremberg, Germany.
- Thomas Adelhardt
- Chair of Health Management, Friedrich-Alexander-University Erlangen-Nuremberg, School of Business and Economics, Institute of Management (IFM), Lange Gasse 20, 90403, Nuremberg, Germany.
- Uwe Sander
- University of Applied Sciences and Arts, Hannover, Germany.
- Veit Wambach
- Integrated Healthcare Network "Quality and Efficiency" (QuE eG), Vogelsgarten 1, 90402, Nuremberg, Germany.
- Jörg Lindenthal
- Integrated Healthcare Network "Quality and Efficiency" (QuE eG), Vogelsgarten 1, 90402, Nuremberg, Germany.
40
Grabner-Kräuter S, Waiguny MKJ. Insights into the impact of online physician reviews on patients' decision making: randomized experiment. J Med Internet Res 2015; 17:e93. [PMID: 25862516 PMCID: PMC4408377 DOI: 10.2196/jmir.3991] [Citation(s) in RCA: 62] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2014] [Revised: 01/21/2015] [Accepted: 02/04/2015] [Indexed: 11/25/2022] Open
Abstract
Background Physician-rating websites combine public reporting with social networking and offer an attractive means by which users can provide feedback on their physician and obtain information about other patients' satisfaction and experiences. However, research on how users evaluate information on these portals is still scarce, and little is known about the potential influence of physician reviews on a patient's choice. Objective Starting from the perspective of prospective patients, this paper sets out to explore how certain characteristics of physician reviews affect the evaluation of the review and users' attitudes toward the rated physician. We propose a model that relates review style and review number to constructs of review acceptance and test it in a Web-based experiment. Methods We employed a randomized 2x2 between-subjects factorial experiment manipulating the style of a physician review (factual vs emotional) and the number of reviews for a certain physician (low vs high) to test our hypotheses. A total of 168 participants were presented with a Web-based questionnaire containing a short description of a dentist search scenario and the manipulated reviews for a fictitious dental physician. To investigate the proposed hypotheses, we carried out moderated regression analyses and a moderated mediation analysis using the PROCESS macro 2.11 for SPSS version 22. Results Our analyses indicated that a higher number of reviews resulted in a more positive attitude toward the rated physician. The results of the regression model for attitude toward the physician suggest a positive main effect of the number of reviews (mean [low] 3.73, standard error [SE] 0.13; mean [high] 4.15, SE 0.13).
We also observed an interaction effect with the style of the review: if the physician received only a few reviews, fact-oriented reviews (mean 4.09, SE 0.19) induced a more favorable attitude toward the physician compared to emotional reviews (mean 3.44, SE 0.19), but there was no such effect when the physician received many reviews. Furthermore, we found that review style also affected the perceived expertise of the reviewer. Fact-oriented reviews (mean 3.90, SE 0.13) led to a higher perception of reviewer expertise compared to emotional reviews (mean 3.19, SE 0.13). However, this did not transfer to the attitude toward the physician. A similar effect of review style and number on the perceived credibility of the review was observed. While no differences between emotional and factual style were found if the physician received many reviews, a low number of reviews led to a significant difference in perceived credibility, indicating that emotional reviews were rated less positively (mean 3.52, SE 0.18) compared to fact-oriented reviews (mean 4.15, SE 0.17). Our analyses also showed that perceived credibility of the review fully mediated the observed interaction effect on attitude toward the physician. Conclusions Physician-rating websites are an interesting new source of information about the quality of health care from the patient's perspective. This paper makes a unique contribution to an understudied area of research by providing some insights into how people evaluate online reviews of individual doctors. Information attributes, such as review style and review number, have an impact on the evaluation of the review and on the patient's attitude toward the rated doctor. Further research is necessary to improve our understanding of the influence of such rating sites on the patient's choice of a physician.
Affiliation(s)
- Sonja Grabner-Kräuter
- Department of Marketing and International Management, Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria.
41
Li S, Feng B, Chen M, Bell RA. Physician review websites: effects of the proportion and position of negative reviews on readers' willingness to choose the doctor. JOURNAL OF HEALTH COMMUNICATION 2015; 20:453-461. [PMID: 25749406 DOI: 10.1080/10810730.2014.977467] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Health consumers are increasingly turning to physician review websites to research potential health care providers. This experiment examined how the proportion and position of negative reviews on such websites influence readers' willingness to choose the reviewed physician. A 5 × 2 (Proportion of Negative Reviews × Position of Negative Reviews) factorial design was implemented, augmented with two standalone comparison groups. Five hundred participants were recruited through a crowdsource website and were randomly assigned to read a webpage screenshot corresponding to 1 of 12 experimental conditions. The participants then completed a questionnaire that assessed evaluations of and cognitive elaborations (thoughts) about the physician. The authors hypothesized that readers would be less willing to use a physician's services when reviews were predominantly negative and negative comments were positioned before positive comments. As hypothesized, an increase in the proportion of negative reviews led to a reduced willingness to use the physician's services. However, this effect was not moderated by the level of cognitive elaboration. A primacy effect was found for negative reviews such that readers were less willing to use the physician's services when negative reviews were presented before positive reviews, rather than after. Implications for future research are discussed.
Affiliation(s)
- Siyue Li
- Department of Communication, University of California, Davis, California, USA
42
Emmert M, Hessemer S, Meszmer N, Sander U. Do German hospital report cards have the potential to improve the quality of care? Health Policy 2014; 118:386-95. [PMID: 25074783 DOI: 10.1016/j.healthpol.2014.07.006] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2014] [Revised: 06/12/2014] [Accepted: 07/02/2014] [Indexed: 10/25/2022]
Abstract
BACKGROUND Hospital report cards have been put in place within the past few years to increase the amount of publicly reported quality information in Germany. OBJECTIVE The aim of this study was to assess the potential of German hospital report cards to improve quality of care. METHODS First, a systematic Internet search aimed at identifying available report cards was conducted. Second, cross-sectional data (August/September 2013) were analyzed with respect to awareness, comprehension, and impact of report cards by using descriptive analysis and binary multivariate logistic regression models. RESULTS Hospital report cards (N=62) have become broadly available. However, awareness remains low: only about one third (35.6%) of all respondents (N=2027) were aware of German hospital report cards. Regarding comprehensibility, in 60.7% of all experiments (N=6081), respondents selected the hospital with the lowest risk-adjusted mortality; significant differences could be determined between the report cards (p<.001), with scores ranging from 27.5% to 77.2%. Binary multivariate logistic regression analysis revealed different significant respondent-related predictors on each report card. Finally, an impact on hospital choice was determined. CONCLUSIONS To increase the potential of hospital report cards, health policy makers should promote the availability of report cards. In addition, the comprehensibility of German hospital report cards cannot be regarded as satisfactory and should be enhanced in the future.
Affiliation(s)
- Martin Emmert
- Institute of Management (IFM), Friedrich-Alexander-University Erlangen-Nuremberg, Germany.
- Stefanie Hessemer
- Institute of Management (IFM), Friedrich-Alexander-University Erlangen-Nuremberg, Germany.
- Nina Meszmer
- Institute of Management (IFM), Friedrich-Alexander-University Erlangen-Nuremberg, Germany.
- Uwe Sander
- University of Applied Sciences and Arts, Hannover, Germany.
43
Bidmon S, Terlutter R, Röttl J. What explains usage of mobile physician-rating apps? Results from a web-based questionnaire. J Med Internet Res 2014; 16:e148. [PMID: 24918859 PMCID: PMC4071227 DOI: 10.2196/jmir.3122] [Citation(s) in RCA: 65] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2013] [Revised: 04/10/2014] [Accepted: 04/27/2014] [Indexed: 01/27/2023] Open
Abstract
Background Consumers are increasingly accessing health-related information via mobile devices. Recently, several apps to rate and locate physicians have been released in the United States and Germany. However, knowledge about what kinds of variables explain usage of mobile physician-rating apps is still lacking. Objective This study analyzes factors influencing the adoption of and willingness to pay for mobile physician-rating apps. A structural equation model was developed based on the Technology Acceptance Model and the literature on health-related information searches and usage of mobile apps. Relationships in the model were analyzed for moderating effects of physician-rating website (PRW) usage. Methods A total of 1006 German patients who had visited a general practitioner at least once in the 3 months before the beginning of the survey were randomly selected and surveyed. A total of 958 usable questionnaires were analyzed by partial least squares path modeling and moderator analyses. Results The suggested model yielded a high model fit. We found that perceived ease of use (PEOU) of the Internet to gain health-related information, the sociodemographic variables age and gender, and the psychographic variables digital literacy, feelings about the Internet and other Web-based applications in general, patients' value of health-related knowledgeability, as well as the information-seeking behavior variables regarding the amount of daily private Internet use for health-related information, frequency of using apps for health-related information in the past, and attitude toward PRWs significantly affected the adoption of mobile physician-rating apps. The sociodemographic variable age, but not gender, and the psychographic variables feelings about the Internet and other Web-based applications in general and patients' value of health-related knowledgeability, but not digital literacy, were significant predictors of willingness to pay.
Frequency of using apps for health-related information in the past and attitude toward PRWs, but not the amount of daily Internet use for health-related information, were significant predictors of willingness to pay. The perceived usefulness of the Internet to gain health-related information and the amount of daily Internet use in general did not have a significant effect on either of the endogenous variables. The moderation analysis with group comparisons of users and nonusers of PRWs revealed that attitude toward PRWs had significantly more impact on the adoption of and willingness to pay for mobile physician-rating apps in the nonuser group. Conclusions Important variables that contribute to the adoption of a mobile physician-rating app and the willingness to pay for it were identified. The results of this study are important for researchers because they provide insights into the variables that influence the acceptance of apps that allow for ratings of physicians. They are also useful for creators of mobile physician-rating apps because they can help tailor such apps to consumers' characteristics and needs.
Affiliation(s)
- Sonja Bidmon
- Department of Marketing and International Management, Alpen-Adria Universitaet Klagenfurt, Klagenfurt am Woerthersee, Austria.
44
Emmert M, Meier F, Heider AK, Dürr C, Sander U. What do patients say about their physicians? an analysis of 3000 narrative comments posted on a German physician rating website. Health Policy 2014; 118:66-73. [PMID: 24836021 DOI: 10.1016/j.healthpol.2014.04.015] [Citation(s) in RCA: 69] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2013] [Revised: 04/09/2014] [Accepted: 04/22/2014] [Indexed: 11/16/2022]
Abstract
BACKGROUND Physician rating websites (PRWs) have been shown to influence patients' choice of physician. However, little research has been carried out to assess the content and nature of narrative comments. OBJECTIVE The aim of this study was to explore the concerns of patients who commented on physician care, in order to address and enhance patient satisfaction. METHODS Content analysis of 3000 randomly selected narrative comments from the German PRW, jameda, from 2012. We developed a theoretical categorization framework addressing physician-, staff-, and practice-related patient concerns. FINDINGS In total, 50 sub-categories addressing the physician (N=20), the office staff (N=13), and the practice (N=17) were derived from the content of all comments. The most frequently mentioned concerns were the professional competence of the physician (63%, N=1874) and the friendliness of the physician (38%, N=1148). Overall, 80% of all comments (mean length 45.3 words ± 42.8) were classified as positive, 4% as neutral, and 16% as negative. CONCLUSION Users of the German PRW, jameda, are mostly satisfied with their physicians. However, physicians should focus on time spent with patients, waiting times, and taking patients' concerns seriously.
Affiliation(s)
- Martin Emmert
- Health Services Management, Institute of Management (IFM), Friedrich-Alexander-University Erlangen-Nuremberg, Germany.
- Florian Meier
- Health Management, Institute of Management (IFM), Friedrich-Alexander-University Erlangen-Nuremberg, Germany
- Ann-Kathrin Heider
- Health Services Management, Institute of Management (IFM), Friedrich-Alexander-University Erlangen-Nuremberg, Germany
- Christoph Dürr
- Health Services Management, Institute of Management (IFM), Friedrich-Alexander-University Erlangen-Nuremberg, Germany
- Uwe Sander
- University of Applied Sciences and Arts, Hannover, Germany
45
Terlutter R, Bidmon S, Röttl J. Who uses physician-rating websites? Differences in sociodemographic variables, psychographic variables, and health status of users and nonusers of physician-rating websites. J Med Internet Res 2014; 16:e97. [PMID: 24686918 PMCID: PMC4004145 DOI: 10.2196/jmir.3145] [Citation(s) in RCA: 105] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2013] [Revised: 02/25/2014] [Accepted: 02/27/2014] [Indexed: 11/21/2022] Open
Abstract
Background The number of physician-rating websites (PRWs) is rising rapidly, but usage remains low. So far, there has been little discussion about what kinds of variables influence usage of PRWs. Objective We focused on sociodemographic variables, psychographic variables, and health status of PRW users and nonusers. Methods An online survey of 1006 randomly selected German patients was conducted in September 2012. We analyzed the patients' knowledge and use of online PRWs. We also analyzed the impact of sociodemographic variables (gender, age, and education), psychographic variables (eg, feelings toward the Internet, digital literacy), and health status on use or nonuse, as well as on judgments of and behavioral intentions toward PRWs. The survey instrument was based on existing literature and was guided by several research questions. Results A total of 29.3% (289/986) of the sample knew of a PRW and 26.1% (257/986) had already used one. Younger people were more likely than older ones to use PRWs (t(967)=2.27, P=.02), women more than men (χ²(1)=9.4, P=.002), more highly educated people more than less educated ones (χ²(4)=19.7, P=.001), and people with chronic diseases more than people without (χ²(1)=5.6, P=.02). No differences were found between users and nonusers in their daily private Internet use or in their use of the Internet for health-related information. Users had more positive feelings about the Internet and other Web-based applications in general (t(489)=3.07, P=.002) than nonusers, and they had higher digital literacy (t(520)=4.20, P<.001). Users ascribed higher usefulness to PRWs than nonusers (t(612)=11.61, P<.001) and trusted information on PRWs to a greater degree (t(559)=11.48, P<.001). Users were also more likely to rate a physician on a PRW in the future (t(367)=7.63, P<.001) and to use a PRW in the future (t(619)=15.01, P<.001). 
The results of 2 binary logistic regression analyses demonstrated that sociodemographic variables (gender, age, education) and health status alone did not predict whether people were likely to use PRWs. Adding psychographic variables and information-seeking behavior variables to the binary logistic regression analyses produced a satisfactory model fit and revealed that higher education, poorer health status, higher digital literacy (at the 10% level of significance), lower importance of family and pharmacists as sources of health-related information, higher trust in information on PRWs, and higher appraisal of the usefulness of PRWs served as significant predictors of PRW usage. Conclusions Sociodemographic variables alone do not sufficiently predict use or nonuse of PRWs; specific psychographic variables and health status need to be taken into account. The results can help designers of PRWs better tailor their product to specific target groups, which may increase use of PRWs in the future.
Affiliation(s)
- Ralf Terlutter
- Department of Marketing and International Management, Alpen-Adria Universitaet Klagenfurt, Klagenfurt am Woerthersee, Austria.
46
Verhoef LM, Van de Belt TH, Engelen LJLPG, Schoonhoven L, Kool RB. Social media and rating sites as tools to understanding quality of care: a scoping review. J Med Internet Res 2014; 16:e56. [PMID: 24566844 PMCID: PMC3961699 DOI: 10.2196/jmir.3024] [Citation(s) in RCA: 82] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2013] [Revised: 01/17/2014] [Accepted: 01/19/2014] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Insight into the quality of health care is important for any stakeholder, including patients, professionals, and governments. In light of a patient-centered approach, it is essential to assess the quality of health care from a patient's perspective, which is commonly done with surveys or focus groups. Unfortunately, these "traditional" methods have significant limitations that include social desirability bias, a time lag between experience and measurement, and difficulty reaching large groups of people. Information on social media could be of value in overcoming these limitations, since these new media are easy to use and are used by the majority of the population. Furthermore, an increasing number of people share health care experiences online or rate the quality of their health care provider on physician rating sites. The question is whether this information is relevant to determining or predicting the quality of health care. OBJECTIVE The goal of our research was to systematically analyze the relation between information shared on social media and quality of care. METHODS We performed a scoping review with the following goals: (1) to map the literature on the association between social media and quality of care, (2) to identify different mechanisms of this relationship, and (3) to determine a more detailed agenda for this relatively new research area. A recognized scoping review methodology was used. We developed a search strategy based on four themes: social media, patient experience, quality, and health care. Four online scientific databases were searched, articles were screened, and data were extracted. Results related to the research question were described and categorized according to type of social media. Furthermore, national and international stakeholders were consulted throughout the study to discuss and interpret results. RESULTS Twenty-nine articles were included, of which 21 concerned health care rating sites. 
Several studies indicate a relationship between information on social media and quality of health care. However, some drawbacks exist, especially regarding the use of rating sites. For example, since ratings are anonymous and not risk adjusted, they are vulnerable to fraud. Also, ratings are often based on only a few reviews and are predominantly positive. Furthermore, people providing feedback on health care via social media are presumably not always representative of the patient population. CONCLUSIONS Social media, and particularly rating sites, are an interesting new source of information about quality of care from the patient's perspective. This new source should be used to complement traditional methods, since measuring quality of care via social media has its own, no less serious, limitations. Future research should explore whether social media are suitable in practice for patients, health insurers, and governments to help them judge the quality performance of professionals and organizations.
Affiliation(s)
- Lise M Verhoef
- IQ healthcare, Radboud University Medical Center, Nijmegen, Netherlands.
47
Emmert M, Meier F. An analysis of online evaluations on a physician rating website: evidence from a German public reporting instrument. J Med Internet Res 2013; 15:e157. [PMID: 23919987 PMCID: PMC3742398 DOI: 10.2196/jmir.2655] [Citation(s) in RCA: 91] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2013] [Accepted: 06/11/2013] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Physician rating websites (PRWs) have been gaining in popularity among patients who are seeking a physician. However, little evidence is available on the number, distribution, or trend of evaluations on PRWs. Furthermore, there is no published evidence that analyzes the characteristics of the patients who provide ratings on PRWs. OBJECTIVE The objective of the study was to analyze all physician evaluations that were posted on the German PRW, jameda, in 2012. METHODS Data from the German PRW, jameda, from 2012 were analyzed, comprising 127,192 ratings of 53,585 physicians from 107,148 patients. Information included the medical specialty and gender of the physician; the age, gender, and health insurance status of the patient; and the results of the physician ratings. Statistical analysis was carried out using the median test and the Kendall Tau-b test. RESULTS Thirty-seven percent of all German physicians were rated on jameda in 2012. Nearly half of those physicians were rated once, and less than 2% were rated more than ten times (mean number of ratings 2.37, SD 3.17). About one third of all rated physicians were female. Rating patients were mostly female (60%), between 30-50 years old (51%), and covered by statutory health insurance (83%). Patients submitted a mean of 1.19 evaluations each (SD 0.778). The most frequently rated medical specialties were orthopedists, dermatologists, and gynecologists. Two thirds of all ratings fell into the best category, "very good". Female physicians had significantly better ratings than their male colleagues (P<.001). Additionally, significant rating differences existed between medical specialties (P<.001). Older patients gave better ratings than their younger counterparts (P<.001). The same was true for patients covered by private health insurance; they gave more favorable evaluations than patients covered by statutory health insurance (P<.001). 
No significant rating differences were detected between female and male patients (P=.505). The likelihood of a good rating increased with a rising number of both physician and patient ratings. CONCLUSIONS Our findings are mostly in line with those published for PRWs from the United States. Most of the ratings were positive, and differences existed regarding the sociodemographic characteristics of both physicians and patients. An increase in the usage of PRWs might help reduce the lack of publicly available information on physician quality. However, it remains unclear whether PRWs have the potential to reflect the quality of care offered by individual health care providers. Further research should assess in more detail the motivation of patients who rate their physicians online.
Affiliation(s)
- Martin Emmert
- Institute of Management-IFM, School of Business and Economics, Friedrich-Alexander-University Erlangen-Nuremberg, Nuremberg, Germany.
48
Detz A, López A, Sarkar U. Long-term doctor-patient relationships: patient perspective from online reviews. J Med Internet Res 2013; 15:e131. [PMID: 23819959 PMCID: PMC3713916 DOI: 10.2196/jmir.2552] [Citation(s) in RCA: 50] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2013] [Revised: 04/10/2013] [Accepted: 04/24/2013] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Continuity of patient care is one of the cornerstones of primary care. OBJECTIVE To examine publicly available, Internet-based reviews of adult primary care physicians, specifically those written by patients who report long-term relationships with their physicians. METHODS This substudy was nested within a larger qualitative content analysis of online physician ratings. We focused on reviews reflecting an established patient-physician relationship, that is, patients seeing their physicians for at least 1 year. RESULTS Of the 712 Internet reviews of primary care physicians, 93 reviews (13.1%) were from patients who self-identified as having a long-term relationship with their physician, 11 reviews (1.5%) commented on a first-time visit to a physician, and the remainder (85.4%) did not specify the length of the relationship. Analysis revealed six overarching domains: (1) personality traits or descriptors of the physician, (2) technical competence, (3) communication, (4) access to the physician, (5) office staff/environment, and (6) coordination of care. CONCLUSIONS Our analysis shows that patients who have been with their physician for at least 1 year write positive reviews on public websites and focus on physician attributes.
Affiliation(s)
- Alissa Detz
- UCLA Division of General Internal Medicine and Health Services Research, University of California, Los Angeles, Los Angeles, CA, United States
49
Emmert M, Sander U, Pisch F. Eight questions about physician-rating websites: a systematic review. J Med Internet Res 2013; 15:e24. [PMID: 23372115 PMCID: PMC3636311 DOI: 10.2196/jmir.2360] [Citation(s) in RCA: 100] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2012] [Revised: 11/05/2012] [Accepted: 11/09/2012] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Physician-rating websites (PRWs) are currently gaining in popularity because they increase transparency in the health care system. However, research on the characteristics and content of these portals remains limited. OBJECTIVE To identify and synthesize published evidence in peer-reviewed journals regarding frequently discussed issues about PRWs. METHODS Peer-reviewed English- and German-language literature was searched in seven databases (Medline (via PubMed), the Cochrane Library, Business Source Complete, ABI/Inform Complete, PsycInfo, Scopus, and ISI Web of Knowledge) without any time constraints. Additionally, reference lists of included studies were screened to ensure completeness. The following eight previously defined questions were addressed: 1) What percentage of physicians has been rated? 2) What is the average number of ratings on PRWs? 3) Are there any differences among rated physicians related to socioeconomic status? 4) Are ratings more likely to be positive or negative? 5) What significance do patient narratives have? 6) How should physicians deal with PRWs? 7) What major shortcomings do PRWs have? 8) What recommendations can be made for further improvement of PRWs? RESULTS Twenty-four articles published in peer-reviewed journals met our inclusion criteria. Most studies were published by US (n=13) and German (n=8) researchers; however, the focus differed considerably. The current usage of PRWs is still low but is increasing. International data show that 1 out of 6 physicians has been rated, and approximately 90% of all ratings on PRWs were positive. Although often a concern, we could not find any evidence of "doctor-bashing". Physicians should not ignore these websites but rather monitor the information available and use it for internal and external purposes. Several shortcomings limit the significance of the results published on PRWs; some recommendations to address these limitations are presented. 
CONCLUSIONS Although the number of publications is still low, PRWs are gaining more attention in research. However, PRWs in their current form have considerable shortcomings, both in the United States and in Germany. Further research is necessary to increase the quality of the websites, especially from the patients' perspective.
Affiliation(s)
- Martin Emmert
- Institute of Management IFM, School of Business and Economics, Friedrich-Alexander-University Erlangen-Nuremberg, Nuremberg 90411, Germany.
50
Ellimoottil C, Hart A, Greco K, Quek ML, Farooq A. Online reviews of 500 urologists. J Urol 2012; 189:2269-73. [PMID: 23228385 DOI: 10.1016/j.juro.2012.12.013] [Citation(s) in RCA: 74] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/06/2012] [Indexed: 11/29/2022]
Abstract
PURPOSE Patient demand for easily accessible information about physician quality has led to the development of physician review websites. These sites concern some physicians who argue that ratings can be misleading. In this study we describe the landscape of online reviews of urologists by looking at a sample of ratings and written reviews from popular physician review websites. MATERIALS AND METHODS A total of 500 urologists were randomly selected from a database of 9,940. Numerical ratings from 10 popular physician review websites were collected for each physician and analyzed. Written reviews from a single physician review website were also collected and then categorized as extremely negative/positive, negative/positive or neutral. RESULTS Our sample consisted of 471 male and 29 female urologists from 39 states including small and large cities and 4 census regions. There were 398 (79.6%) urologists who had at least 1 rating on any of the 10 physician review websites (range 0 to 64). On average the composite rating was based on scores from only 2.4 submitted ratings. Most physicians had positive ratings (86%), with 36% having highly positive ratings. No difference was seen in the median number of reviews when gender (p = 0.72), region (p = 0.87) and city size (p = 0.87) were compared. Written reviews were mostly positive or extremely positive (53%). CONCLUSIONS We advise physicians and patients to be aware that most urologists are rated on at least 1 physician review website, and while most ratings and reviews are favorable, composite scores are typically based on a small number of reviews and, therefore, can be volatile.
Affiliation(s)
- Chandy Ellimoottil
- Department of Urology, Loyola University Medical Center, Maywood, Illinois 60153, USA.