1
Patel R, Tseng CC, Choudhry HS, Lemdani MS, Talmor G, Paskhover B. Applying Machine Learning to Determine Popular Patient Questions About Mentoplasty on Social Media. Aesthetic Plast Surg 2022;46:2273-2279. PMID: 35201377. DOI: 10.1007/s00266-022-02808-8.
Abstract
PURPOSE Patient satisfaction in esthetic surgery often necessitates synergy between patient and physician goals. The authors aim to characterize patient questions before and after mentoplasty to reflect the patient perspective and enhance the physician-patient relationship. METHODS Mentoplasty reviews were gathered from Realself.com using an automated web crawler. Questions were defined as preoperative or postoperative. Each question was reviewed and characterized by the authors into general categories to best reflect the overall theme of the question. A machine learning approach was utilized to create a list of the most common patient questions, asked both preoperatively and postoperatively. RESULTS A total of 2,012 questions were collected. Of these, 1,708 (84.9%) were preoperative and 304 (15.1%) were postoperative. The primary category for patients preoperatively was "eligibility for surgery" (86.3%), followed by "surgical techniques and logistics" (5.4%) and "cost" (5.4%). Of the postoperative questions, the most common questions were about "options to revise surgery" (44.1%), "symptoms after surgery" (27.0%), and "appearance" (26.3%). Our machine learning approach generated the 10 most common pre- and postoperative questions about mentoplasty. The majority of preoperative questions dealt with potential surgical indications, while most postoperative questions principally addressed appearance. CONCLUSIONS The majority of mentoplasty patient questions were preoperative and asked about eligibility for surgery. Our study also found that a significant proportion of postoperative questions inquired about revision, suggesting a small but nontrivial subset of patients highly dissatisfied with their results. Our handout of the 10 most common preoperative and postoperative questions can help better inform physicians about the patient perspective on mentoplasty throughout their surgical course.
Level of Evidence V This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
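The abstract above does not detail the machine learning approach used to surface common questions. As a rough illustration only, the sketch below ranks content-word frequency across patient questions; the stopword list and the example questions are invented for demonstration and are not from the study:

```python
from collections import Counter
import re

# Hypothetical stopword list; the study's actual method is not specified here.
STOP = {"the", "a", "is", "it", "to", "i", "my", "for", "of", "do",
        "can", "after", "how", "am", "does", "much"}

def top_question_terms(questions, k=5):
    """Rank the most frequent content words across patient questions:
    a crude proxy for surfacing common question themes."""
    words = (w for q in questions for w in re.findall(r"[a-z']+", q.lower()))
    counts = Counter(w for w in words if w not in STOP)
    return counts.most_common(k)

# Invented example questions in the spirit of the preoperative categories.
questions = [
    "Am I a good candidate for a chin implant?",
    "How much does a chin implant cost?",
    "Is swelling normal after chin surgery?",
]
print(top_question_terms(questions, 2))
```

Real pipelines would add clustering or topic modeling on top of such token counts; this sketch only shows the frequency step.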
Affiliation(s)
- Rushi Patel
- Department of Otolaryngology - Head and Neck Surgery, Rutgers New Jersey Medical School, 90 Bergen St., Suite 8100, Newark, NJ, 07103, USA
- Christopher C Tseng
- Department of Otolaryngology - Head and Neck Surgery, Rutgers New Jersey Medical School, 90 Bergen St., Suite 8100, Newark, NJ, 07103, USA
- Hannaan S Choudhry
- Department of Otolaryngology - Head and Neck Surgery, Rutgers New Jersey Medical School, 90 Bergen St., Suite 8100, Newark, NJ, 07103, USA
- Mehdi S Lemdani
- Department of Otolaryngology - Head and Neck Surgery, Rutgers New Jersey Medical School, 90 Bergen St., Suite 8100, Newark, NJ, 07103, USA
- Guy Talmor
- Department of Otolaryngology - Head and Neck Surgery, Rutgers New Jersey Medical School, 90 Bergen St., Suite 8100, Newark, NJ, 07103, USA
- Boris Paskhover
- Department of Otolaryngology - Head and Neck Surgery, Rutgers New Jersey Medical School, 90 Bergen St., Suite 8100, Newark, NJ, 07103, USA.
2
Assessment of Accuracy of a Physician Ratings Website in One Metropolitan Area. J Surg Res 2021;268:521-526. PMID: 34461603. DOI: 10.1016/j.jss.2021.07.039.
Abstract
BACKGROUND Patients frequently use online physician ratings websites (PRWs) to identify physicians for care. PRWs provide physician information and reviews. However, the accuracy of PRWs is uncertain. We investigated the accuracy and validity of Healthgrades with respect to endocrine surgery. We identified factors associated with reported board certification inaccuracy, higher ratings, and greater quantity of ratings. MATERIALS AND METHODS The search term "endocrine surgery specialist" was used and the search was limited to a 25-mile radius around Philadelphia, PA. Data were collected on physician sex, age, board certification, surgical specialty, quantity of ratings, average rating, response to comments, and provision of a self-description. Descriptive statistics were performed to examine surgeon characteristics, ratings, and reported board certifications. Board certification accuracy was determined by searching the corresponding American Board website and calculating a kappa statistic. Logistic regression was performed to identify factors associated with board certification inaccuracy, higher average ratings, and higher quantity of ratings. RESULTS A total of 300 physicians were identified. Eighty-four percent of listed board certifications were accurate; the kappa statistic for accuracy of board certification was 0.634. Providing a response to comments and greater quantity of ratings were associated with higher average ratings. Provision of a self-description, male sex, and younger age were identified as factors associated with higher quantity of ratings. CONCLUSIONS A wide range of specialties are identified as endocrine surgery specialists. The reliability of board certification reporting was moderate. Increased surgeon involvement with the Healthgrades site was inconsistently associated with higher average ratings and higher quantity of ratings but lower accuracy.
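The kappa statistic reported above measures agreement between listed and registry-verified certifications beyond what chance alone would produce. A minimal sketch of Cohen's kappa, with invented binary data (the counts below are hypothetical, not the study's):

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    chance = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n)
        for label in set(rater_a) | set(rater_b)
    )
    return (observed - chance) / (1 - chance)

# Hypothetical: certification status as listed on the site vs. the board registry.
listed   = [True] * 84 + [False] * 16
verified = [True] * 80 + [False] * 4 + [True] * 6 + [False] * 10
print(round(cohen_kappa(listed, verified), 3))
```

Because the labels are imbalanced, 90% raw agreement here yields a kappa of only about 0.61, which illustrates why a largely accurate listing can still produce just moderate chance-corrected agreement.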
3
Emmert M, McLennan S. One Decade of Online Patient Feedback: Longitudinal Analysis of Data From a German Physician Rating Website. J Med Internet Res 2021;23:e24229. PMID: 34309579. PMCID: PMC8367114. DOI: 10.2196/24229.
Abstract
Background Feedback from patients is an essential element of a patient-oriented health care system. Physician rating websites (PRWs) are a key way patients can provide feedback online. This study analyzes an entire decade of online ratings for all medical specialties on a German PRW. Objective The aim of this study was to examine how ratings posted on a German PRW have developed over the past decade. In particular, it aimed to explore (1) the distribution of ratings according to time-related aspects (year, month, day of the week, and hour of the day) between 2010 and 2019, (2) the number of physicians with ratings, (3) the average number of ratings per physician, (4) the average rating, (5) whether differences exist between medical specialties, and (6) the characteristics of the patients rating physicians. Methods All scaled-survey online ratings that were posted on the German PRW jameda between 2010 and 2019 were obtained. Results In total, 1,906,146 ratings were posted on jameda between 2010 and 2019 for 127,921 physicians. The number of rated physicians increased constantly from 19,305 in 2010 to 82,511 in 2018. The average number of ratings per rated physician increased from 1.65 (SD 1.56) in 2010 to 3.19 (SD 4.69) in 2019. Overall, 75.2% (1,432,624/1,906,146) of all ratings were in the best rating category of “very good,” and 5.7% (107,912/1,906,146) of the ratings were in the lowest category of “insufficient.” However, the mean of all ratings was 1.76 (SD 1.53) on the German school grade 6-point rating scale (1 being the best) with a relatively constant distribution over time. General practitioners, internists, and gynecologists received the highest number of ratings (343,242, 266,899, and 232,914, respectively). Male patients, those of higher age, and those covered by private health insurance gave significantly (P<.001) more favorable evaluations compared to their counterparts.
Physicians with a lower number of ratings tended to receive ratings across the rating scale, while physicians with a higher number of ratings tended to have better ratings. Physicians with between 21 and 50 online ratings received the lowest ratings (mean 1.95, SD 0.84), while physicians with >100 ratings received the best ratings (mean 1.34, SD 0.47). Conclusions This study is one of the most comprehensive analyses of PRW ratings to date. More than half of all German physicians have been rated on jameda each year since 2016, and the overall average number of ratings per rated physician nearly doubled over the decade. Nevertheless, we could also observe a decline in the number of ratings over the last 2 years. Future studies should investigate the most recent development in the number of ratings on both other German and international PRWs as well as reasons for the heterogeneity in online ratings by medical specialty.
Affiliation(s)
- Martin Emmert
- Institute for Healthcare Management & Health Sciences, University of Bayreuth, Bayreuth, Germany
- Stuart McLennan
- Institute of History and Ethics in Medicine, Technical University of Munich, Munich, Germany; Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
4
Abstract
Background Telelactation is a modality for delivering remote clinical lactation care using telecommunications technology. Sonder Health, in partnership with Amwell, began offering synchronous video telelactation services to health plans and employer groups in 2016. Methods We completed a retrospective data analysis on a randomized selection of 1,087 telelactation visits covered by a health plan or employer-sponsored health plan conducted between 2016 and 2019. Our aim is to describe a telelactation model and review selected visits for technical modalities utilized, clinical workflow, top self-reported chief conditions, patient satisfaction, visit duration, acuity levels, alternative care options, peak visit time, visits conducted during or after business hours, and days visits took place, and to discuss the potential for telelactation to bridge the gaps in timely access to IBCLC-level breastfeeding support. Results Using a 5-star rating system, 95% of patients gave a 5-star rating; 52% of visits occurred outside normal business hours. The top three conditions identified were latching (31%), supply (24%), and nipple/breast pain (15%). Without access to the service, 59% reported they would have accessed an urgent care, emergency department, retail health clinic, or other office appointment; 41% reported they would have sought care “nowhere.” Conclusions This telelactation program provided access to skilled, comprehensive clinical lactation care and documents a strong use case for telelactation services.
5
Yan Q, Jensen KJ, Thomas R, Field AR, Jiang Z, Goei C, Davies MG. Digital Footprint of Academic Vascular Surgeons in the Southern United States on Physician Rating Websites: Cross-sectional Evaluation Study. JMIR Cardio 2021;5:e22975. PMID: 33625359. PMCID: PMC8411431. DOI: 10.2196/22975.
Abstract
Background The internet has become a popular platform for patients to obtain information and to review the health care providers they interact with. However, little is known about the digital footprint of vascular surgeons and their interactions with patients on social media. Objective This study aims to understand the activity of academic vascular surgeons on physician rating websites. Methods Information on attending vascular surgeons affiliated with vascular residency or fellowship programs in the Southern Association for Vascular Surgery (SAVS) was collected from public sources. A listing of websites containing physician ratings was obtained via literature reviews and Google search. Open access websites with either qualitative or quantitative evaluations of vascular surgeons were included. Closed access websites were excluded. Ranking scores from each website were converted to a standard 5-point scale for comparison. Results A total of 6238 quantitative and 967 qualitative reviews were written for 287 physicians (236 males, 82.2%) across 16 websites that met the inclusion criteria out of the 62 websites screened. The surgeons affiliated with the integrated vascular residency and vascular fellowship programs in SAVS had a median of 8 (IQR 7-10) profiles across 16 websites, with only 1 surgeon having no web presence in any of the websites. The median number of quantitative ratings for each physician was 17 (IQR 6-34, range 1-137) and the median number of narrative reviews was 3 (IQR 2-6, range 1-28). Vitals, WebMD, and Healthgrades were the only 3 websites where over a quarter of the physicians were rated, and those rated had more than 5 ratings on average. The median score for the quantitative reviews was 4.4 (IQR 4.0-4.9). Most narrative reviews (758/967, 78.4%) were positive, but 20.2% (195/967) were considered negative; only 1.4% (14/967) were considered equivocal.
No statistical difference was found in the number of quantitative reviews or in the overall average score in the physician ratings between physicians with social media profiles and those without social media profiles (departmental social media profile: median 23 vs 15, respectively, P=.22; personal social media profile: median 19 vs 14, respectively, P=.08). Conclusions The representation of vascular surgeons on physician rating websites is varied, with the majority of the vascular surgeons represented in only half of the physician rating websites. The number of quantitative and qualitative reviews for academic vascular surgeons is low. No vascular surgeon responded to any of the reviews. The activity of vascular surgeons in this area of social media is low and reflects only a small digital footprint that patients can reach and review.
Affiliation(s)
- Qi Yan
- Division of Vascular Surgery, Department of Surgery, UT Health San Antonio, San Antonio, TX, United States
- Katherine J Jensen
- Department of Surgery, UT Health San Antonio, San Antonio, TX, United States
- Rose Thomas
- Department of Surgery, UT Health San Antonio, San Antonio, TX, United States
- Alyssa R Field
- Department of Surgery, UT Health San Antonio, San Antonio, TX, United States
- Zheng Jiang
- Department of Surgery, Shanghai Medical College, Fudan University, China
- Christian Goei
- Department of Surgery, UT Health San Antonio, San Antonio, TX, United States
- Mark G Davies
- Division of Vascular Surgery, Department of Surgery, UT Health San Antonio, San Antonio, TX, United States; South Texas Center for Vascular Care, San Antonio, TX, United States
6
Mulgund P, Sharman R, Anand P, Shekhar S, Karadi P. Data Quality Issues With Physician-Rating Websites: Systematic Review. J Med Internet Res 2020;22:e15916. PMID: 32986000. PMCID: PMC7551103. DOI: 10.2196/15916.
Abstract
BACKGROUND In recent years, online physician-rating websites have become prominent and exert considerable influence on patients' decisions. However, the quality of these decisions depends on the quality of data that these systems collect. Thus, there is a need to examine the various data quality issues with physician-rating websites. OBJECTIVE This study's objective was to identify and categorize the data quality issues afflicting physician-rating websites by reviewing the literature on online patient-reported physician ratings and reviews. METHODS We performed a systematic literature search in ACM Digital Library, EBSCO, Springer, PubMed, and Google Scholar. The search was limited to quantitative, qualitative, and mixed-method papers published in the English language from 2001 to 2020. RESULTS A total of 423 articles were screened. From these, 49 papers describing 18 unique data quality issues afflicting physician-rating websites were included. Using a data quality framework, we classified these issues into the following four categories: intrinsic, contextual, representational, and accessible. Among the papers, 53% (26/49) reported intrinsic data quality errors, 61% (30/49) highlighted contextual data quality issues, 8% (4/49) discussed representational data quality issues, and 27% (13/49) emphasized accessibility data quality issues. More than half the papers discussed multiple categories of data quality issues. CONCLUSIONS The results from this review demonstrate the presence of a range of data quality issues. While intrinsic and contextual factors have been well researched, accessibility and representational issues warrant more attention from researchers as well as practitioners. In particular, representational factors, such as the impact of inline advertisements and the positioning of positive reviews on the first few pages, are usually deliberate and result from the business model of physician-rating websites. The impact of these factors on data quality has not been addressed adequately and requires further investigation.
Affiliation(s)
- Pavankumar Mulgund
- School of Management, State University of New York Buffalo, Buffalo, NY, United States
- Raj Sharman
- School of Management, State University of New York Buffalo, Buffalo, NY, United States
- Priya Anand
- Institute of Computational and Data Sciences, State University of New York Buffalo, Buffalo, NY, United States
- Shashank Shekhar
- School of Management, State University of New York Buffalo, Buffalo, NY, United States
- Priya Karadi
- Institute of Computational and Data Sciences, State University of New York Buffalo, Buffalo, NY, United States
7
McLennan S. Rejected Online Feedback From a Swiss Physician Rating Website Between 2008 and 2017: Analysis of 2352 Ratings. J Med Internet Res 2020;22:e18374. PMID: 32687479. PMCID: PMC7432139. DOI: 10.2196/18374.
Abstract
Background Previous research internationally has only analyzed publicly available feedback on physician rating websites (PRWs). However, it appears that many PRWs are not publishing all the feedback they receive. Analysis of this rejected feedback could provide a better understanding of the types of feedback that are currently not published and whether this is appropriate. Objective The aim of this study was to examine (1) the number of patient feedback rejected from the Swiss PRW Medicosearch, (2) the evaluation tendencies of the rejected patient feedback, and (3) the types of issues raised in the rejected narrative comments. Methods The Swiss PRW Medicosearch provided all the feedback that had been rejected between September 16, 2008, and September 22, 2017. The feedback were analyzed and classified according to a theoretical categorization framework of physician-, staff-, and practice-related issues. Results Between September 16, 2008, and September 22, 2017, Medicosearch rejected a total of 2352 patient feedback. The majority of feedback rejected (1754/2352, 74.6%) had narrative comments in the German language. However, 11.9% (279/2352) of the rejected feedback only provided a quantitative rating with no narrative comment. Overall, 25% (588/2352) of the rejected feedback were positive, 18.7% (440/2352) were neutral, and 56% (1316/2352) were negative. The average rating of the rejected feedback was 2.8 (SD 1.4). In total, 44 subcategories addressing the physician (n=20), staff (n=9), and practice (n=15) were identified. In total, 3804 distinct issues were identified within the 44 subcategories of the categorization framework; 75% (2854/3804) of the issues were related to the physician, 6.4% (242/3804) were related to the staff, and 18.6% (708/3804) were related to the practice. 
Frequently mentioned issues identified from the rejected feedback included (1) satisfaction with treatment (533/1903, 28%); (2) the overall assessment of the physician (392/1903, 20.6%); (3) recommending the physician (345/1903, 18.1%); (4) the physician’s communication (261/1903, 13.7%); (5) the physician’s caring attitude (220/1903, 11.6%); and (6) the physician’s friendliness (203/1903, 10.6%). Conclusions It is unclear why the majority of the feedback was rejected. This is problematic and raises concerns that online patient feedback is being inappropriately manipulated. If online patient feedback is going to be collected, there need to be clear policies and practices about how this is handled. It cannot be left to the whims of PRWs, who may have financial incentives to suppress negative feedback, to decide which feedback is or is not published online. Further research is needed to examine how many PRWs are using criteria for determining which feedback is published or not, what those criteria are, and what measures PRWs are using to address the manipulation of online patient feedback.
Affiliation(s)
- Stuart McLennan
- Institute of History and Ethics in Medicine, Technical University of Munich, Munich, Germany; Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
8
Chen AT, Flaherty MG, Threats M. Attitudes, Provider and Treatment Selection of Complementary and Integrative Health among Individuals with Pain-Related Conditions. Complement Ther Med 2020;51:102410. PMID: 32507427. DOI: 10.1016/j.ctim.2020.102410.
Abstract
Complementary and integrative therapies are used by people to address many conditions, including pain-related conditions. There has been concern about the quality of online health information, including information pertaining to complementary and integrative health (CIH). In this qualitative interview study, we sought to investigate how individuals interact with CIH-related information online and how this might affect their subsequent behavior. We conducted semi-structured interviews with 14 individuals with chronic pain conditions. We report findings based on three main themes: individuals' beliefs about CIH; approach to CIH, including how people view provider information and personalize their CIH use strategy; and factors that affect trust in the information encountered. Overall, study participants believed there was value in CIH therapies and that treatments were effective. Many described experiences that had influenced their views of complementary therapies over time. We also found that individuals form impressions of CIH providers based on structural and personal characteristics, particularly cost and proximity, that are conveyed in information to which they are exposed. These findings have various implications. First, over time individuals with chronic pain conditions develop their own beliefs and attitudes, which play a role in their selection of providers and modalities relating to CIH. Health care providers should consider how people view information relating to, and make decisions about, CIH therapies and work collaboratively with patients to develop effective health management strategies. Information services should also consider patients' perspectives in developing websites and other informational materials.
Affiliation(s)
- Annie T Chen
- Biomedical Informatics and Medical Education, University of Washington School of Medicine, UW Medicine South Lake Union, 850 Republican Street, Box 358047, Seattle, WA 98109, United States.
- Mary Grace Flaherty
- School of Information and Library Science, University of North Carolina at Chapel Hill, 216 Lenoir Drive, CB #3360, 100 Manning Hall, Chapel Hill, NC, 27599-3360, United States.
- Megan Threats
- School of Information and Library Science, University of North Carolina at Chapel Hill, 216 Lenoir Drive, CB #3360, 100 Manning Hall, Chapel Hill, NC, 27599-3360, United States.
9
Schulz PJ, Rothenfluh F. Influence of Health Literacy on Effects of Patient Rating Websites: Survey Study Using a Hypothetical Situation and Fictitious Doctors. J Med Internet Res 2020;22:e14134. PMID: 32250275. PMCID: PMC7171560. DOI: 10.2196/14134.
Abstract
Background Physician rating websites (PRWs) are a device people use actively and passively, although users’ objective capabilities are insufficient when it comes to judging the medical performance and qualifications of physicians. PRWs are an innovation born of the potential of the Internet and boosted very much by the longstanding policy of improving and encouraging patient participation in medical decision-making. A mismatch is feared between patient motivations to participate and their capabilities of doing so well. Awareness of such a mismatch might contribute to some skepticism of patient-written physician reviews on PRWs. Objective We intend to test whether health literacy is able to dampen the effects that a patient-written review of a physician’s performance might have on physician choice. Methods An experiment was conducted within a survey interview. Participants were put into a fictitious decision situation in which they had to choose between two physicians on the basis of their profiles on a PRW. One of the physician profiles contained the experimental stimulus in the form of a friendly and a critical written review. The dependent variable was physician choice. An attitude differential, a trust differential, and two measures of health literacy, the Newest Vital Sign as an example of a performance-based measure and the eHealth Literacy Scale as an example of a perception-based measure, were tested for roles as intermediary variables. Analysis traced the influence of the review tendency on the dependent variables and a possible moderating effect of health literacy on these influences. Results Reviews of a physician’s competence and medical skill affected participant choice of a physician. High health literacy dampened these effects only in the case of the perception-based measure and only for the negative review. Correspondingly, the effect of the review tendency appeared to be stronger for the positive review. Attitudes and trust only affected physician choice when included as covariates, considerably increasing the variance explained by regression models. Conclusions Findings sustain physician worries that even one negative PRW review can affect patient choice and damage doctors’ reputations. Hopes that health literacy might raise awareness of the poor basis of physician reviews and ratings given by patients have some foundation.
10
Shandley LM, Hipp HS, Anderson-Bialis J, Anderson-Bialis D, Boulet SL, McKenzie LJ, Kawwass JF. Patient-centered care: factors associated with reporting a positive experience at United States fertility clinics. Fertil Steril 2020;113:797-810. DOI: 10.1016/j.fertnstert.2019.12.040.
11
Okike K, Uhr NR, Shin SYM, Xie KC, Kim CY, Funahashi TT, Kanter MH. A Comparison of Online Physician Ratings and Internal Patient-Submitted Ratings from a Large Healthcare System. J Gen Intern Med 2019;34:2575-2579. PMID: 31531811. PMCID: PMC6848281. DOI: 10.1007/s11606-019-05265-3.
Abstract
BACKGROUND Physician online ratings are ubiquitous and influential, but they also have their detractors. Given the lack of scientific survey methodology used in online ratings, some health systems have begun to publish their own internal patient-submitted ratings of physicians. OBJECTIVE The purpose of this study was to compare online physician ratings with internal ratings from a large healthcare system. DESIGN Retrospective cohort study comparing online ratings with internal ratings from a large healthcare system. SETTING Kaiser Permanente, a large integrated healthcare delivery system. PARTICIPANTS Physicians in the Southern California region of Kaiser Permanente, including all specialties with ambulatory clinic visits. MAIN MEASURES The primary outcome measure was correlation between online physician ratings and internal ratings from the integrated healthcare delivery system. RESULTS Of 5438 physicians who met inclusion and exclusion criteria, 4191 (77.1%) were rated both online and internally. The online ratings were based on a mean of 3.5 patient reviews, while the internal ratings were based on a mean of 119 survey returns. The overall correlation between the online and internal ratings was weak (Spearman's rho .23), but increased with the number of reviews used to formulate each online rating. CONCLUSIONS Physician online ratings did not correlate well with internal ratings from a large integrated healthcare delivery system, although the correlation increased with the number of reviews used to formulate each online rating. Given that many consumers are not aware of the statistical issues associated with small sample sizes, we would recommend that online rating websites refrain from displaying a physician's rating until the sample size is sufficiently large (for example, at least 15 patient reviews). However, hospitals and health systems may be able to provide better information for patients by publishing the internal ratings of their physicians.
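The weak correlation reported above is Spearman's rank correlation, which compares the rank orderings of the two rating sources rather than their raw values. A minimal sketch with invented ratings for eight hypothetical physicians (not the study's data), assuming no tied values for simplicity:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the rank-difference formula
    (valid when there are no tied values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical: online 5-star ratings vs. internal survey scores per physician.
online   = [4.9, 3.1, 4.5, 2.8, 5.0, 3.9, 4.2, 3.4]
internal = [4.2, 4.0, 4.6, 3.8, 4.3, 4.5, 4.1, 3.9]
print(round(spearman_rho(online, internal), 2))
```

With small review counts per physician, as the abstract notes, rank estimates like this are noisy, which is one reason the observed online-internal correlation was weak.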
Affiliation(s)
- Kanu Okike
- Hawaii Permanente Medical Group, Kaiser Moanalua Medical Center, Moanalua Road, Honolulu, HI, USA.
- Chong Y Kim
- Southern California Permanente Medical Group, Pasadena, CA, USA
- Michael H Kanter
- Department of Clinical Science, Kaiser Permanente School of Medicine, Pasadena, CA, USA; Department of Research and Evaluation, Southern California Permanente Medical Group, Pasadena, CA, USA
12
Powell J, Atherton H, Williams V, Mazanderani F, Dudhwala F, Woolgar S, Boylan AM, Fleming J, Kirkpatrick S, Martin A, van Velthoven M, de Iongh A, Findlay D, Locock L, Ziebland S. Using online patient feedback to improve NHS services: the INQUIRE multimethod study. Health Services and Delivery Research 2019. DOI: 10.3310/hsdr07380.
Abstract
Background
Online customer feedback has become routine in many industries, but it has yet to be harnessed for service improvement in health care.
Objectives
To identify the current evidence on online patient feedback; to identify public and health professional attitudes and behaviour in relation to online patient feedback; to explore the experiences of patients in providing online feedback to the NHS; and to examine the practices and processes of online patient feedback within NHS trusts.
Design
A multimethod programme of five studies: (1) evidence synthesis and stakeholder consultation; (2) questionnaire survey of the public; (3) qualitative study of patients’ and carers’ experiences of creating and using online comment; (4) questionnaire surveys and a focus group of health-care professionals; and (5) ethnographic organisational case studies with four NHS secondary care provider organisations.
Setting
The UK.
Methods
We searched bibliographic databases and conducted hand-searches to January 2018. Synthesis was guided by themes arising from consultation with 15 stakeholders. We conducted a face-to-face survey of a representative sample of the UK population (n = 2036) and 37 purposively sampled qualitative semistructured interviews with people with experience of online feedback. We conducted online surveys of 1001 quota-sampled doctors and 749 nurses or midwives, and a focus group with five allied health professionals. We conducted ethnographic case studies at four NHS trusts, with a researcher spending 6–10 weeks at each site.
Results
Many people (42% of internet users in the general population) read online feedback from other patients. Fewer people (8%) write online feedback, but when they do, one of their main reasons is to give praise. Most online feedback is positive in its tone, and people describe caring about the NHS and wanting to help it (‘caring for care’). They also want their feedback to elicit a response as part of a conversation. Many professionals, especially doctors, are cautious about online feedback, believing it to be mainly critical and unrepresentative, and rarely encourage it. From an NHS trust perspective, online patient feedback is creating new forms of response-ability (organisations needing the infrastructure to address multiple channels and increasing amounts of online feedback) and responsivity (ensuring responses are swift and publicly visible).
Limitations
This work provides only a cross-sectional snapshot of a fast-emerging phenomenon. Questionnaire surveys can be limited by response bias. The quota sample of doctors and volunteer sample of nurses may not be representative. The ethnographic work was limited in its interrogation of differences between sites.
Conclusions
Providing and using online feedback are becoming more common for patients who are often motivated to give praise and to help the NHS improve, but health organisations and professionals are cautious and not fully prepared to use online feedback for service improvement. We identified several disconnections between patient motivations and staff and organisational perspectives, which will need to be resolved if NHS services are to engage with this source of constructive criticism and commentary from patients.
Future work
Intervention studies could measure online feedback as an intervention for service improvement and longitudinal studies could examine use over time, including unanticipated consequences. Content analyses could look for new knowledge on specific tests or treatments. Methodological work is needed to identify the best approaches to analysing feedback.
Study registration
The ethnographic case study work was registered as Current Controlled Trials ISRCTN33095169.
Funding
This project was funded by the National Institute for Health Research (NIHR) Health Services and Delivery Research programme and will be published in full in Health Services and Delivery Research; Vol. 7, No. 38. See the NIHR Journals Library website for further project information.
Affiliation(s)
- John Powell
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Helen Atherton
- Unit of Academic Primary Care, Warwick Medical School, University of Warwick, Coventry, UK
- Veronika Williams
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Fadhila Mazanderani
- School of Social and Political Science, University of Edinburgh, Edinburgh, UK
- Farzana Dudhwala
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Steve Woolgar
- Saïd Business School, University of Oxford, Oxford, UK
- Department of Thematic Studies, Linköping University, Linköping, Sweden
- Anne-Marie Boylan
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Joanna Fleming
- Unit of Academic Primary Care, Warwick Medical School, University of Warwick, Coventry, UK
- Susan Kirkpatrick
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Angela Martin
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Louise Locock
- Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Sue Ziebland
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK

13
McLennan S. The Content and Nature of Narrative Comments on Swiss Physician Rating Websites: Analysis of 849 Comments. J Med Internet Res 2019; 21:e14336. PMID: 31573918; PMCID: PMC6792026; DOI: 10.2196/14336
Abstract
Background The majority of physician rating websites (PRWs) provide users the option to leave narrative comments about their physicians. Narrative comments potentially provide richer insights into patients’ experiences and feelings that cannot be fully captured in predefined quantitative rating scales and are increasingly being examined. However, the content and nature of narrative comments on Swiss PRWs has not been examined to date. Objective This study aimed to examine (1) the types of issues raised in narrative comments on Swiss PRWs and (2) the evaluation tendencies of the narrative comments. Methods A random stratified sample of 966 physicians was generated from the regions of Zürich and Geneva. Every selected physician was searched for on 3 PRWs (OkDoc, DocApp, and Medicosearch) and Google, and narrative comments were collected. Narrative comments were analyzed and classified according to a theoretical categorization framework of physician-, staff-, and practice-related issues. Results The selected physicians had a total of 849 comments. In total, 43 subcategories addressing the physician (n=21), staff (n=8), and practice (n=14) were identified. None of the PRWs’ comments covered all 43 subcategories of the categorization framework; comments on Google covered 86% (37/43) of the subcategories, Medicosearch covered 72% (31/43), DocApp covered 60% (26/43), and OkDoc covered 56% (24/43). In total, 2441 distinct issues were identified within the 43 subcategories of the categorization framework; 83.65% (2042/2441) of the issues related to the physician, 6.63% (162/2441) related to the staff, and 9.70% (237/2441) related to the practice. Overall, 95% (41/43) of the subcategories of the categorization framework and 81.60% (1992/2441) of the distinct issues identified were concerning aspects of performance (interpersonal skills of the physician and staff, infrastructure, and organization and management of the practice) that are considered assessable by patients. 
Overall, 83.0% (705/849) of comments were classified as positive, 2.5% (21/849) as neutral, and 14.5% (123/849) as negative. However, there were significant differences between PRWs, regions, and specialties regarding negative comments: 90.2% (111/123) of negative comments were on Google, 74.7% (92/123) concerned physicians in Zurich, and 73.2% (90/123) were about specialists. Conclusions The narrative comments analyzed indicate that interpersonal issues account for nearly half of all the negative issues identified, and physicians should focus on improving these aspects. The current suppression of negative comments by Swiss PRWs is concerning, and consensus-based criteria need to be developed to determine which comments should be published publicly. Finally, it would be helpful if Swiss patients were made aware of the current large differences between Swiss PRWs regarding the frequency and nature of ratings, to help them determine which PRW will provide the most useful information.
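As a toy illustration only (the study hand-coded comments into its categorization framework; the keywords below are invented, not the study's), a naive automated pass at sorting comments into physician-, staff-, and practice-related categories might look like:

```python
# Toy sketch: keyword-based sorting of narrative comments into the three
# top-level categories the abstract describes. Keywords are illustrative
# assumptions, not the study's coding framework.
KEYWORDS = {
    "physician": ["doctor", "dr", "physician", "diagnosis", "listened"],
    "staff": ["receptionist", "nurse", "assistant", "front desk"],
    "practice": ["waiting room", "parking", "appointment", "clinic", "office"],
}

def categorize(comment: str) -> list[str]:
    """Return every category whose keywords appear in the comment."""
    text = comment.lower()
    words = set(text.replace(",", " ").replace(".", " ").split())
    # Multi-word keywords are matched as phrases; single words as whole tokens.
    return [cat for cat, kws in KEYWORDS.items()
            if any(kw in text if " " in kw else kw in words for kw in kws)]

comments = [
    "The doctor listened carefully and explained everything.",
    "The receptionist was rude but the waiting room was clean.",
]
for c in comments:
    print(categorize(c))
```

A real pipeline would need qualitative coding or a trained classifier; this only shows the shape of the physician/staff/practice split.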
Affiliation(s)
- Stuart McLennan
- Institute of History and Ethics in Medicine, Technical University of Munich, Munich, Germany
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland

14
McLennan S. Quantitative Ratings and Narrative Comments on Swiss Physician Rating Websites: Frequency Analysis. J Med Internet Res 2019; 21:e13816. PMID: 31350838; PMCID: PMC6688440; DOI: 10.2196/13816
Abstract
Background Physician rating websites (PRWs) have been developed as part of a wider move toward transparency around health care quality, and these allow patients to anonymously rate, comment, and discuss physicians’ quality on the Web. The first Swiss PRWs were established in 2008, at the same time as many international PRWs. However, there has been limited research conducted on PRWs in Switzerland to date. International research has indicated that a key shortcoming of PRWs is that they have an insufficient number of ratings. Objective The aim of this study was to examine the frequency of quantitative ratings and narrative comments on the Swiss PRWs. Methods In November 2017, a random stratified sample of 966 physicians was generated from the regions of Zürich and Geneva. Every selected physician was searched for on 4 rating websites (OkDoc, DocApp, Medicosearch, and Google) between November 2017 and July 2018. It was recorded whether the physician could be identified, what the physician’s quantitative rating was, and whether the physician had received narrative comments. In addition, Alexa Internet was used to examine the number of visitors to the PRWs, compared with other websites. Results Overall, the proportion of physicians who could be identified on the PRWs ranged from 42.4% (410/966) on OkDoc to 87.3% (843/966) on DocApp. Of the identifiable physicians, only a minority had been rated quantitatively (4.5% [38/843] on DocApp to 49.8% [273/548] on Google) or had received narrative comments (4.5% [38/843] on DocApp to 31.2% [171/548] on Google) at least once. Rated physicians also had, on average, a low number of quantitative ratings (1.47 ratings on OkDoc to 3.74 ratings on Google) and narrative comments (1.23 comments on OkDoc to 3.03 comments on Google). All 3 websites allowing ratings used the same rating scale (1-5 stars) and had a very positive average rating: DocApp (4.71), Medicosearch (4.69), and Google (4.41).
There were significant differences among the PRWs (with the majority of ratings in the past 2 years being posted on Google) and regions (with physicians in Zurich more likely to have been rated and having more ratings on average). Only Google (position 1) and Medicosearch (position 8358) were placed among the top 10,000 visited websites in Switzerland. Conclusions This appears to be the first study internationally to include Google when examining physician ratings, and it is notable that Google has had substantially more ratings than the 3 dedicated Swiss PRWs over the past 2 and a half years. Overall, this study indicates that Swiss PRWs are not yet a reliable source of unbiased information regarding patient experiences and satisfaction with Swiss physicians; many selected physicians could not be identified, only a few physicians had been rated, and the ratings posted were overwhelmingly positive.
Affiliation(s)
- Stuart McLennan
- Institute of History and Ethics in Medicine, Technical University of Munich, Munich, Germany
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland

15
Hong YA, Liang C, Radcliff TA, Wigfall LT, Street RL. What Do Patients Say About Doctors Online? A Systematic Review of Studies on Patient Online Reviews. J Med Internet Res 2019; 21:e12521. PMID: 30958276; PMCID: PMC6475821; DOI: 10.2196/12521
Abstract
Background The number of patient online reviews (PORs) has grown significantly, and PORs have played an increasingly important role in patients’ choice of health care providers. Objective The objective of our study was to systematically review studies on PORs, summarize the major findings and study characteristics, identify literature gaps, and make recommendations for future research. Methods A major database search was completed in January 2019. Studies were included if they (1) focused on PORs of physicians and hospitals, (2) reported qualitative or quantitative results from analysis of PORs, and (3) were peer-reviewed empirical studies. Study characteristics and major findings were synthesized using predesigned tables. Results A total of 63 studies (69 articles) that met the above criteria were included in the review. Most studies (n=48) were conducted in the United States, including Puerto Rico, and the remaining were from Europe, Australia, and China. Earlier studies (published before 2010) used content analysis with small sample sizes; more recent studies retrieved and analyzed larger datasets using machine learning technologies. The number of PORs ranged from fewer than 200 to over 700,000. About 90% of the studies were focused on clinicians, typically specialists such as surgeons; 27% covered health care organizations, typically hospitals; and some studied both. A majority of PORs were positive, and patients’ comments on their providers were favorable. Although most studies were descriptive, some compared PORs with traditional surveys of patient experience and found a high degree of correlation, while others compared PORs with clinical outcomes and found a low level of correlation. Conclusions PORs contain valuable information that can generate insights into quality of care and the patient-provider relationship, but they have not been systematically used for studies of health care quality.
With the advancement of machine learning and data analysis tools, we anticipate more research on PORs based on testable hypotheses and rigorous analytic methods. Trial Registration International Prospective Register of Systematic Reviews (PROSPERO) CRD42018085057; https://www.crd.york.ac.uk/PROSPERO/display_record.php?RecordID=85057 (Archived by WebCite at http://www.webcitation.org/76ddvTZ1C)
Affiliation(s)
- Y Alicia Hong
- Department of Health Administration and Policy, George Mason University, Fairfax, VA, United States
- School of Public Health, Texas A&M University, College Station, TX, United States
- Chen Liang
- Arnold School of Public Health, University of South Carolina, Columbia, SC, United States
- Tiffany A Radcliff
- School of Public Health, Texas A&M University, College Station, TX, United States
- Lisa T Wigfall
- Department of Health and Kinesiology, Texas A&M University, College Station, TX, United States
- Richard L Street
- Department of Communication, Texas A&M University, College Station, TX, United States

16
Prabhu AV, Randhawa S, Clump D, Heron DE, Beriwal S. What Do Patients Think About Their Radiation Oncologists? An Assessment of Online Patient Reviews on Healthgrades. Cureus 2018; 10:e2165. PMID: 29644154; PMCID: PMC5889152; DOI: 10.7759/cureus.2165
Abstract
Introduction An increasing number of patients search for their physicians online. Many hospital systems utilize Press Ganey surveys as internal tools to analyze patient satisfaction, but independent third-party websites have a large presence online. Patients’ trust in these third-party sites may occur despite a low number of reviews and a lack of validation of patients’ entries. Healthgrades.com has been shown to be the most popular site to appear in Google searches for radiation oncologists (ROs) in the United States (US). The aim of this study was to analyze patient satisfaction scores and the factors that influence those scores for American ROs on Healthgrades. Methods The physician ratings website Healthgrades was manually queried to obtain reviews from all Medicare-participating ROs with reviews (n=2,679). Patient Review Satisfaction Scores (PRSS) were recorded in response to a variety of questions. All information in the survey was scored from 1 (poor) to 5 (excellent) for the following characteristics: likelihood to recommend (LTR), office environment, ease of scheduling, trust in the physician’s decision, staff friendliness, ability of the physician to listen and answer questions, ability of the physician to explain the condition, and whether the physician spent sufficient time with the patients. Associations among these factors were assessed by computing Spearman correlation coefficients and applying Mann-Whitney and Kruskal-Wallis tests. Results The ROs’ mean LTR score was 4.51±0.9 (median 5.0; 66% received the highest possible score of 5; 95% received a score >2). Patient reviews per RO ranged from 1 to 242 (4.50±0.9, median 2.0). LTR scores correlated very strongly with physician-related factors, ranging from r=0.85 (with appropriate time spent with patients) to r=0.89 (with level of trust in the physician). LTR scores were not statistically significantly associated with gender, wait time, ROs’ years since graduation, academic status, or geographic region.
Conclusion Satisfaction scores for ROs on a leading physician ratings website are very strong, and most patients leaving reviews are likely to recommend their own ROs to their friends and family. Understanding online ratings and identifying factors associated with positive ratings are important for both patients and ROs due to the recent growth in physician-rating third-party sites. ROs should have increased awareness regarding sites like Healthgrades and their online reputation.
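A minimal sketch of the nonparametric statistics named in the Methods above, on hypothetical 1-5 scores (not the study's data):

```python
# Hypothetical 1-5 satisfaction scores illustrating the tests the abstract
# names: Spearman correlation between two rating factors, Mann-Whitney U for
# two subgroups, Kruskal-Wallis for more than two.
from scipy.stats import spearmanr, mannwhitneyu, kruskal

likelihood_to_recommend = [5, 4, 5, 3, 5, 2, 4, 5, 1, 5]
trust_in_physician      = [5, 4, 5, 3, 4, 2, 4, 5, 2, 5]

# Monotonic association between two rating factors.
r, p = spearmanr(likelihood_to_recommend, trust_in_physician)

# Two subgroups (e.g. academic vs non-academic physicians).
u_stat, u_p = mannwhitneyu([5, 4, 5, 3, 5], [2, 4, 5, 1, 5])

# More than two subgroups (e.g. geographic regions).
h_stat, h_p = kruskal([5, 4, 5], [3, 5, 2], [4, 5, 1])

print(f"Spearman r = {r:.2f} (p = {p:.3f}); "
      f"Mann-Whitney p = {u_p:.2f}; Kruskal-Wallis p = {h_p:.2f}")
```

Rank-based tests like these are the usual choice for ordinal 1-5 rating data, where means and t-tests are harder to justify.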
Affiliation(s)
- Arpan V Prabhu
- Department of Radiation Oncology, University of Pittsburgh Cancer Institute, UPMC
- Simrath Randhawa
- Department of Radiation Oncology, University of Pittsburgh Cancer Institute, UPMC
- David Clump
- Department of Radiation Oncology, University of Pittsburgh Cancer Institute, UPMC
- Dwight E Heron
- Department of Radiation Oncology, University of Pittsburgh Cancer Institute, UPMC
- Sushil Beriwal
- Department of Radiation Oncology, University of Pittsburgh Cancer Institute, UPMC

17
McLennan S, Strech D, Reimann S. Developments in the Frequency of Ratings and Evaluation Tendencies: A Review of German Physician Rating Websites. J Med Internet Res 2017; 19:e299. PMID: 28842391; PMCID: PMC5591403; DOI: 10.2196/jmir.6599
Abstract
Background Physician rating websites (PRWs) have been developed to allow all patients to rate, comment on, and discuss physicians’ quality online as a source of information for others searching for a physician. At the beginning of 2010, a sample of 298 randomly selected physicians from the physician associations in Hamburg and Thuringia was searched for on 6 German PRWs to examine the frequency of ratings and evaluation tendencies. Objective The objective of this study was to examine (1) the number of identifiable physicians on German PRWs; (2) the number of rated physicians on German PRWs; (3) the average and maximum number of ratings per physician on German PRWs; (4) the average rating on German PRWs; (5) the website visitor ranking positions of German PRWs; and (6) how these data compare with the 2010 results. Methods A random stratified sample of 298 selected physicians from the physician associations in Hamburg and Thuringia was generated. Every selected physician was searched for on the 6 PRWs (Jameda, Imedo, Docinsider, Esando, Topmedic, and Medführer) used in the 2010 study and on a PRW, Arztnavigator, launched by Allgemeine Ortskrankenkasse (AOK). Results The results were as follows: (1) Between 65.1% (194/298) on Imedo and 94.6% (282/298) on AOK-Arztnavigator of the physicians were identifiable on the selected PRWs. (2) Between 16.4% (49/298) on Esando and 83.2% (248/298) on Jameda of the sample had been rated at least once. (3) The average number of ratings per physician ranged from 1.2 (Esando) to 7.5 (AOK-Arztnavigator). The maximum number of ratings per physician ranged from 3 (Esando) to 115 (Docinsider), indicating an increase compared with the range of 2 to 27 in the 2010 study sample. (4) The average converted standardized rating (1=positive, 2=neutral, and 3=negative) ranged from 1.0 (Medführer) to 1.2 (Jameda and Topmedic). (5) Only Jameda (position 317) and Medführer (position 9796) were placed among the top 10,000 visited websites in Germany.
Conclusions Whereas there has been an overall increase in the number of ratings when summing up ratings from all 7 analyzed German PRWs, this represents an average addition of only 4 new ratings per physician in a year. The increase has also not been even across the PRWs, and it would be advisable for the users of PRWs to utilize a number of PRWs to ascertain the rating of any given physician. Further research is needed to identify barriers for patients to rate their physicians and to assist efforts to increase the number of ratings on PRWs to consequently improve the fairness and practical importance of PRWs.
Affiliation(s)
- Stuart McLennan
- Institute for History, Ethics and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Institute for Biomedical Ethics, Universität Basel, Basel, Switzerland
- Daniel Strech
- Institute for History, Ethics and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Swantje Reimann
- Institute for History, Ethics and Philosophy of Medicine, Hannover Medical School, Hannover, Germany

18
Okike K, Peter-Bibb TK, Xie KC, Okike ON. Association Between Physician Online Rating and Quality of Care. J Med Internet Res 2016; 18:e324. PMID: 27965191; PMCID: PMC5192234; DOI: 10.2196/jmir.6612
Abstract
Background Patients are increasingly using physician review websites to find “a good doctor.” However, to our knowledge, no prior study has examined the relationship between online rating and an accepted measure of quality. Objective The purpose of this study was to assess the association between online physician rating and an accepted measure of quality: 30-day risk-adjusted mortality rate following coronary artery bypass graft (CABG) surgery. Methods In the US states of California, Massachusetts, New Jersey, New York, and Pennsylvania—which together account for over one-quarter of the US population—risk-adjusted mortality rates are publicly reported for all cardiac surgeons. From these reports, we recorded the 30-day mortality rate following isolated CABG surgery for each surgeon practicing in these 5 states. For each surgeon listed in the state reports, we then conducted Internet-based searches to determine his or her online rating(s). We then assessed the relationship between physician online rating and risk-adjusted mortality rate. Results Of the 614 surgeons listed in the state reports, we found 96.1% (590/614) to be rated online. The average online rating was 4.4 out of 5, and 78.7% (483/614) of the online ratings were 4 or higher. The median number of reviews used to formulate each rating was 4 (range 1-89), and 32.70% (503/1538) of the ratings were based on 2 or fewer reviews. Overall, there was no correlation between surgeon online rating and risk-adjusted mortality rate (P=.13). Risk-adjusted mortality rates were similar for surgeons across categories of average online rating (P>.05), and surgeon average online rating was similar across quartiles of surgeon risk-adjusted mortality rate (P>.05). Conclusions In this study of cardiac surgeons practicing in the 5 US states that publicly report outcomes, we found no correlation between online rating and risk-adjusted mortality rates. 
Patients using online rating websites to guide their choice of physician should recognize that these ratings may not reflect actual quality of care as defined by accepted metrics.
Affiliation(s)
- Kanu Okike
- Department of Orthopedic Surgery, Kaiser Permanente Moanalua Medical Center, Honolulu, HI, United States
- Okike N Okike
- Department of Patient Experience, University of Massachusetts Memorial Healthcare, Worcester, MA, United States

19
Patel S, Cain R, Neailey K, Hooberman L. Exploring Patients' Views Toward Giving Web-Based Feedback and Ratings to General Practitioners in England: A Qualitative Descriptive Study. J Med Internet Res 2016; 18:e217. PMID: 27496366; PMCID: PMC4992166; DOI: 10.2196/jmir.5865
Abstract
Background Patient feedback websites or doctor rating websites are increasingly being used by patients to give feedback about their health care experiences. Little is known about why patients in England may give Web-based feedback and what may motivate or dissuade them from giving it. Objective The aim of this study was to explore patients’ views toward giving Web-based feedback and ratings to general practitioners (GPs), within the context of other feedback methods available in primary care in England, and in particular, paper-based feedback cards. Methods A descriptive exploratory qualitative approach using face-to-face semistructured interviews was used in this study. Purposive sampling was used to recruit 18 participants from different age groups in London and Coventry. Interviews were transcribed verbatim and analyzed using applied thematic analysis. Results Half of the participants in this study were not aware of the opportunity to leave feedback for GPs, and there was limited awareness about the methods available to leave feedback for a GP. The majority of participants were not convinced that formal patient feedback was needed by GPs or would be used by them for improvement, regardless of whether it was given via a website or on paper. Some participants said or suggested that they might leave feedback on a website rather than on a paper-based feedback card for several reasons: because of the ability and ease of giving it remotely; because it would be shared with the public; and because it would be taken more seriously by GPs. Others, however, suggested that they would not use a website to leave feedback for the opposite reasons: because of accessibility issues; because of privacy and security concerns; and because they felt feedback left on a website may be ignored. Conclusions Patient feedback and rating websites as they currently stand will not replace other mechanisms for patients in England to leave feedback for a GP.
Rather, they may motivate a small number of patients who have more altruistic motives or wish to place collective pressure on a GP to give Web-based feedback. If the National Health Service or GP practices want more patients to leave Web-based feedback, we suggest they first make patients aware that they can leave anonymous feedback securely on a website for a GP. They can then convince them that their feedback is needed and wanted by GPs for improvement, and that the reviews they leave on the website will be of benefit to other patients to decide which GP to see or which GP practice to join.
20
Patel S, Cain R, Neailey K, Hooberman L. General Practitioners' Concerns About Online Patient Feedback: Findings From a Descriptive Exploratory Qualitative Study in England. J Med Internet Res 2015; 17:e276. PMID: 26681299; PMCID: PMC4704896; DOI: 10.2196/jmir.4989
Abstract
Background The growth in the volume of online patient feedback, including online patient ratings and comments, suggests that patients are embracing the opportunity to review online their experience of receiving health care. Very little is known about health care professionals’ attitudes toward online patient feedback and whether health care professionals are comfortable with the public nature of the feedback. Objective The aim of the overall study was to explore and describe general practitioners’ attitudes toward online patient feedback. This paper reports on the findings of one of the aims of the study, which was to explore and understand the concerns that general practitioners (GPs) in England have about online patient feedback. This could then be used to improve online patient feedback platforms and help to increase usage of online patient feedback by GPs and, by extension, their patients. Methods A descriptive qualitative approach using face-to-face semistructured interviews was used in this study. A topic guide was developed following a literature review and discussions with key stakeholders. GPs (N=20) were recruited from Cambridgeshire, London, and Northwest England through probability and snowball sampling. Interviews were transcribed verbatim and analyzed in NVivo using the framework method, a form of thematic analysis. Results Most participants in this study had concerns about online patient feedback. They questioned the validity of online patient feedback because of data and user biases and lack of representativeness, the usability of online patient feedback due to the feedback being anonymous, the transparency of online patient feedback because of the risk of false allegations and breaching confidentiality, and the resulting impact of all those factors on them, their professional practice, and their relationship with their patients. 
Conclusions The majority of GPs interviewed had reservations and concerns about online patient feedback and questioned its validity and usefulness among other things. Based on the findings from the study, recommendations for online patient feedback website providers in England are given. These include suggestions to make some specific changes to the platform and the need to promote online patient feedback more among both GPs and health care users, which may help to reduce some of the concerns raised by GPs about online patient feedback in this study.
Affiliation(s)
- Salma Patel
- WMG, University of Warwick, Coventry, United Kingdom.
21
Burkle CM, Keegan MT. Popularity of internet physician rating sites and their apparent influence on patients' choices of physicians. BMC Health Serv Res 2015; 15:416. [PMID: 26410383 PMCID: PMC4583763 DOI: 10.1186/s12913-015-1099-2] [Citation(s) in RCA: 67] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2015] [Accepted: 09/22/2015] [Indexed: 01/26/2023] Open
Abstract
Background There has been a substantial increase in the number of online health care grading sites that offer patient feedback on physicians, staff, and hospitals. Despite a growing interest among some consumers of medical services, most studies of Internet physician rating sites (IPRS) have restricted their analysis to sampling data from individual sites alone. Our objective was to explore the frequency with which patients visit and leave comments on IPRS, evaluate the nature of comments written, and quantify the influence that positive comments, negative comments, and physician medical malpractice history might have on patients’ decisions to seek care from a particular physician. Methods One thousand consecutive patients visiting the Pre-Operative Evaluation (POE) Clinic at Mayo Clinic in Rochester, Minnesota, between June 2013 and October 2013 were surveyed using a written questionnaire. Results A total of 854 respondents completed the survey to some degree. A large majority (84%) stated that they had not previously visited an IPRS. Of those writing comments on an IPRS in the past, just over a third (36%) provided either unfavorable (9%) or a combination of favorable and unfavorable (27%) reviews of physician interactions. Among all respondents, 28.1% strongly agreed that a positive physician review alone on an IPRS would cause them to seek care from that practitioner. Similarly, 27% indicated that a negative IPRS review would cause them to choose against seeking care from that physician. Fewer than a third indicated that knowledge of a malpractice suit alone would negatively impact their decision to seek care from a physician. Whether a respondent had visited an IPRS in the past had no impact on the answers provided. Conclusions Few patients had visited IPRS, with a limited number reporting that information provided on these sites would play a significant role in their decision to seek care from a particular physician.
Affiliation(s)
- Mark T Keegan
- Department of Anesthesiology, Mayo Clinic, Rochester, MN, USA.
22
Emmert M, Adelhardt T, Sander U, Wambach V, Lindenthal J. A cross-sectional study assessing the association between online ratings and structural and quality of care measures: results from two German physician rating websites. BMC Health Serv Res 2015; 15:414. [PMID: 26404452 PMCID: PMC4582723 DOI: 10.1186/s12913-015-1051-5] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2014] [Accepted: 09/07/2015] [Indexed: 12/30/2022] Open
Abstract
Background Even though physician rating websites (PRWs) have been gaining in importance in both practice and research, little evidence is available on the association of patients’ online ratings with the quality of care of physicians. It thus remains unclear whether patients should rely on these ratings when selecting a physician. The objective of this study was to measure the association between online ratings and structural and quality of care measures for 65 physician practices from the German Integrated Health Care Network “Quality and Efficiency” (QuE). Methods Online reviews from two German PRWs were included that covered a three-year period (2011 to 2013) and comprised 1179 and 991 ratings, respectively. Information for 65 QuE practices was obtained for the year 2012 and included 21 measures related to structural information (N = 6), process quality (N = 10), intermediate outcomes (N = 2), patient satisfaction (N = 1), and costs (N = 2). The Spearman rank coefficient of correlation was applied to measure the association between ratings and practice-related information. Results Patient satisfaction results from offline surveys and the patients per doctor ratio in a practice were shown to be significantly associated with online ratings on both PRWs. For one PRW, additional significant associations could be shown between online ratings and cost-related measures for medication, preventative examinations, and one diabetes type 2-related intermediate outcome measure. In contrast, results from the second PRW showed significant associations with the age of the physicians and the number of patients per practice, four process-related quality measures for diabetes type 2 and asthma, and one cost-related measure for medication. Conclusions Several significant associations were found which varied between the PRWs. Patients interested in the satisfaction of other patients with a physician might select a physician on the basis of online ratings.
Even though our results indicate associations with some diabetes and asthma measures, but not with coronary heart disease measures, there is still insufficient evidence to draw strong conclusions. The limited number of practices in our study may have weakened our findings.
Affiliation(s)
- Martin Emmert
- Friedrich-Alexander-University Erlangen-Nuremberg, School of Business and Economics, Institute of Management (IFM), Lange Gasse 20, 90403, Nuremberg, Germany.
- Thomas Adelhardt
- Chair of Health Management, Friedrich-Alexander-University Erlangen-Nuremberg, School of Business and Economics, Institute of Management (IFM), Lange Gasse 20, 90403, Nuremberg, Germany.
- Uwe Sander
- University of Applied Sciences and Arts, Hannover, Germany.
- Veit Wambach
- Integrated Healthcare Network "Quality and Efficiency" (QuE eG), Vogelsgarten 1, 90402, Nuremberg, Germany.
- Jörg Lindenthal
- Integrated Healthcare Network "Quality and Efficiency" (QuE eG), Vogelsgarten 1, 90402, Nuremberg, Germany.
23
Hao H. The development of online doctor reviews in China: an analysis of the largest online doctor review website in China. J Med Internet Res 2015; 17:e134. [PMID: 26032933 PMCID: PMC4526894 DOI: 10.2196/jmir.4365] [Citation(s) in RCA: 51] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2015] [Revised: 04/25/2015] [Accepted: 05/10/2015] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Since the time of Web 2.0, more and more consumers have used online doctor reviews to rate their doctors or to look for a doctor. This phenomenon has received health care researchers' attention worldwide, and many studies have been conducted on online doctor reviews in the United States and Europe. But no study has yet been done in China. Also, in China, without a mature primary care physician recommendation system, more and more Chinese consumers seek online doctor reviews to look for a good doctor for their health care concerns. OBJECTIVE This study sought to examine the online doctor review practice in China, including addressing the following questions: (1) How many doctors and specialty areas are available for online review? (2) How many online reviews are there on those doctors? (3) What specialty area doctors are more likely to be reviewed or receive more reviews? (4) Are those reviews positive or negative? METHODS This study explores an empirical dataset from Good Doctor website, haodf.com—the earliest and largest online doctor review and online health care community website in China—from 2006 to 2014, to examine the stated research questions by using descriptive statistics, binary logistic regression, and multivariate linear regression. RESULTS The dataset from the Good Doctor website contained 314,624 doctors across China and among them, 112,873 doctors received 731,543 quantitative reviews and 772,979 qualitative reviews as of April 11, 2014. On average, 37% of the doctors had been reviewed on the Good Doctor website. Gynecology-obstetrics-pediatrics doctors were most likely to be reviewed, with an odds ratio (OR) of 1.497 (95% CI 1.461-1.535), and internal medicine doctors were less likely to be reviewed, with an OR of 0.94 (95% CI 0.921-0.960), relative to the combined small specialty areas. 
Both traditional Chinese medicine doctors and surgeons were more likely to be reviewed than the combined small specialty areas, with an OR of 1.483 (95% CI 1.442-1.525) and an OR of 1.366 (95% CI 1.337-1.395), respectively. Quantitatively, traditional Chinese medicine doctors (P<.001) and gynecology-obstetrics-pediatrics doctors (P<.001) received more reviews than the combined small specialty areas, but internal medicine doctors received fewer reviews (P<.001). Also, the majority of quantitative reviews were positive: about 88% were positive for the doctors' treatment effect measure and 91% were positive for the bedside manner measure. This was the case for the four major specialty areas with the largest numbers of doctors: internal medicine, gynecology-obstetrics-pediatrics, surgery, and traditional Chinese medicine. CONCLUSIONS Like consumers in the United States and Europe, Chinese consumers have started to use online doctor reviews. Similar to previous research on other countries' online doctor reviews, the online reviews in China covered almost every medical specialty, and most of the reviews were positive even though all of the reviewing procedures and the final available information were anonymous. The average number of reviews per rated doctor in this dataset was 6, which was higher than that for doctors in the United States or Germany, probably because this dataset covered a longer time period than did the US or German datasets. But this number is still very small compared to any doctor's real patient population and cannot represent the reality of that population. Also, since all the data used for analysis were from a single website, the data might be biased and might not be a representative national sample of China.
Affiliation(s)
- Haijing Hao
- University of Massachusetts Boston, Department of Management Science and Information Systems, Boston, MA, United States.
24
Terlutter R, Bidmon S, Röttl J. Who uses physician-rating websites? Differences in sociodemographic variables, psychographic variables, and health status of users and nonusers of physician-rating websites. J Med Internet Res 2014; 16:e97. [PMID: 24686918 PMCID: PMC4004145 DOI: 10.2196/jmir.3145] [Citation(s) in RCA: 105] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2013] [Revised: 02/25/2014] [Accepted: 02/27/2014] [Indexed: 11/21/2022] Open
Abstract
Background The number of physician-rating websites (PRWs) is rising rapidly, but usage is still poor. So far, there has been little discussion about what kind of variables influence usage of PRWs. Objective We focused on sociodemographic variables, psychographic variables, and health status of PRW users and nonusers. Methods An online survey of 1006 randomly selected German patients was conducted in September 2012. We analyzed the patients’ knowledge and use of online PRWs. We also analyzed the impact of sociodemographic variables (gender, age, and education), psychographic variables (eg, feelings toward the Internet, digital literacy), and health status on use or nonuse as well as the judgment of and behavior intentions toward PRWs. The survey instrument was based on existing literature and was guided by several research questions. Results A total of 29.3% (289/986) of the sample knew of a PRW and 26.1% (257/986) had already used a PRW. Younger people were more prone than older ones to use PRWs (t967=2.27, P=.02). Women used them more than men (χ21=9.4, P=.002), the more highly educated more than less educated people (χ24=19.7, P=.001), and people with chronic diseases more than people without (χ21=5.6, P=.02). No differences were found between users and nonusers in their daily private Internet use and in their use of the Internet for health-related information. Users had more positive feelings about the Internet and other Web-based applications in general (t489=3.07, P=.002) than nonusers, and they had higher digital literacy (t520=4.20, P<.001). Users ascribed higher usefulness to PRWs than nonusers (t612=11.61, P<.001) and users trusted information on PRWs to a greater degree than nonusers (t559=11.48, P<.001). Users were also more likely to rate a physician on a PRW in the future (t367=7.63, P<.001) and to use a PRW in the future (t619=15.01, P<.001). 
The results of 2 binary logistic regression analyses demonstrated that sociodemographic variables (gender, age, education) and health status alone did not predict whether persons were prone to use PRWs or not. Adding psychographic variables and information-seeking behavior variables to the binary logistic regression analyses led to a satisfactory fit of the model and revealed that higher education, poorer health status, higher digital literacy (at the 10% level of significance), lower importance of family and pharmacist for health-related information, higher trust in information on PRWs, and higher appraisal of the usefulness of PRWs served as significant predictors of PRW usage. Conclusions Sociodemographic variables alone do not sufficiently predict use or nonuse of PRWs; specific psychographic variables and health status need to be taken into account. The results can help designers of PRWs to better tailor their product to specific target groups, which may increase use of PRWs in the future.
Affiliation(s)
- Ralf Terlutter
- Department of Marketing and International Management, Alpen-Adria Universitaet Klagenfurt, Klagenfurt am Woerthersee, Austria.
25
Verhoef LM, Van de Belt TH, Engelen LJLPG, Schoonhoven L, Kool RB. Social media and rating sites as tools to understanding quality of care: a scoping review. J Med Internet Res 2014; 16:e56. [PMID: 24566844 PMCID: PMC3961699 DOI: 10.2196/jmir.3024] [Citation(s) in RCA: 82] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2013] [Revised: 01/17/2014] [Accepted: 01/19/2014] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Insight into the quality of health care is important for any stakeholder including patients, professionals, and governments. In light of a patient-centered approach, it is essential to assess the quality of health care from a patient's perspective, which is commonly done with surveys or focus groups. Unfortunately, these "traditional" methods have significant limitations that include social desirability bias, a time lag between experience and measurement, and difficulty reaching large groups of people. Information on social media could be of value to overcoming these limitations, since these new media are easy to use and are used by the majority of the population. Furthermore, an increasing number of people share health care experiences online or rate the quality of their health care provider on physician rating sites. The question is whether this information is relevant to determining or predicting the quality of health care. OBJECTIVE The goal of our research was to systematically analyze the relation between information shared on social media and quality of care. METHODS We performed a scoping review with the following goals: (1) to map the literature on the association between social media and quality of care, (2) to identify different mechanisms of this relationship, and (3) to determine a more detailed agenda for this relatively new research area. A recognized scoping review methodology was used. We developed a search strategy based on four themes: social media, patient experience, quality, and health care. Four online scientific databases were searched, articles were screened, and data extracted. Results related to the research question were described and categorized according to type of social media. Furthermore, national and international stakeholders were consulted throughout the study, to discuss and interpret results. RESULTS Twenty-nine articles were included, of which 21 were concerned with health care rating sites. 
Several studies indicate a relationship between information on social media and quality of health care. However, some drawbacks exist, especially regarding the use of rating sites. For example, because ratings are anonymous, rating values are not risk adjusted and are therefore vulnerable to fraud. Also, ratings are often based on only a few reviews and are predominantly positive. Furthermore, people providing feedback on health care via social media are presumably not always representative of the patient population. CONCLUSIONS Social media, and particularly rating sites, are an interesting new source of information about quality of care from the patient's perspective. This new source should be used to complement traditional methods, since measuring quality of care via social media has other, but no less serious, limitations. Future research should explore whether social media are suitable in practice for patients, health insurers, and governments to help them judge the quality performance of professionals and organizations.
Affiliation(s)
- Lise M Verhoef
- IQ healthcare, Radboud University Medical Center, Nijmegen, Netherlands.
26
Emmert M, Meier F, Pisch F, Sander U. Physician choice making and characteristics associated with using physician-rating websites: cross-sectional study. J Med Internet Res 2013; 15:e187. [PMID: 23985220 PMCID: PMC3758064 DOI: 10.2196/jmir.2702] [Citation(s) in RCA: 124] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2013] [Revised: 06/12/2013] [Accepted: 06/14/2013] [Indexed: 11/21/2022] Open
Abstract
Background Over the past decade, physician-rating websites have been gaining attention in scientific literature and in the media. However, little knowledge is available about the awareness and the impact of using such sites on health care professionals. It also remains unclear what key predictors are associated with the knowledge and the use of physician-rating websites. Objective To estimate the current level of awareness and use of physician-rating websites in Germany and to determine their impact on physician choice making and the key predictors which are associated with the knowledge and the use of physician-rating websites. Methods This study was designed as a cross-sectional survey. An online panel was consulted in January 2013. A questionnaire was developed containing 28 questions; a pretest was carried out to assess the comprehension of the questionnaire. Several sociodemographic (eg, age, gender, health insurance status, Internet use) and 2 health-related independent variables (ie, health status and health care utilization) were included. Data were analyzed using descriptive statistics, chi-square tests, and t tests. Binary multivariate logistic regression models were performed for elaborating the characteristics of physician-rating website users. Results from the logistic regression are presented for both the observed and weighted sample. Results In total, 1505 respondents (mean age 43.73 years, SD 14.39; 857/1505, 57.25% female) completed our survey. Of all respondents, 32.09% (483/1505) heard of physician-rating websites and 25.32% (381/1505) already had used a website when searching for a physician. Furthermore, 11.03% (166/1505) had already posted a rating on a physician-rating website. Approximately 65.35% (249/381) consulted a particular physician based on the ratings shown on the websites; in contrast, 52.23% (199/381) had not consulted a particular physician because of the publicly reported ratings. 
Significantly higher likelihoods for being aware of the websites could be demonstrated for female participants (P<.001), those who were widowed (P=.01), covered by statutory health insurance (P=.02), and with higher health care utilization (P<.001). Health care utilization was significantly associated with all dependent variables in our multivariate logistic regression models (P<.001). Furthermore, significantly higher scores could be shown for health insurance status in the unweighted and Internet use in the weighted models. Conclusions Neither health policy makers nor physicians should underestimate the influence of physician-rating websites. They already play an important role in providing information to help patients decide on an appropriate physician. Assuming there will be a rising level of public awareness, the influence of their use will increase well into the future. Future studies should assess the impact of physician-rating websites under experimental conditions and investigate whether physician-rating websites have the potential to reflect the quality of care offered by health care providers.
Affiliation(s)
- Martin Emmert
- Institute of Management-IFM, School of Business and Economics, Friedrich-Alexander-University Erlangen-Nuremberg, Nuremberg, Germany.
27
Emmert M, Meier F. An analysis of online evaluations on a physician rating website: evidence from a German public reporting instrument. J Med Internet Res 2013; 15:e157. [PMID: 23919987 PMCID: PMC3742398 DOI: 10.2196/jmir.2655] [Citation(s) in RCA: 91] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2013] [Accepted: 06/11/2013] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Physician rating websites (PRW) have been gaining in popularity among patients who are seeking a physician. However, little evidence is available on the number, distribution, or trend of evaluations on PRWs. Furthermore, there is no published evidence available that analyzes the characteristics of the patients who provide ratings on PRWs. OBJECTIVE The objective of the study was to analyze all physician evaluations that were posted on the German PRW, jameda, in 2012. METHODS Data from the German PRW, jameda, from 2012 were analyzed and contained 127,192 ratings of 53,585 physicians from 107,148 patients. Information included medical specialty and gender of the physician, age, gender, and health insurance status of the patient, as well as the results of the physician ratings. Statistical analysis was carried out using the median test and Kendall Tau-b test. RESULTS Thirty-seven percent of all German physicians were rated on jameda in 2012. Nearly half of those physicians were rated once, and less than 2% were rated more than ten times (mean number of ratings 2.37, SD 3.17). About one third of all rated physicians were female. Rating patients were mostly female (60%), between 30-50 years (51%) and covered by Statutory Health Insurance (83%). A mean of 1.19 evaluations per patient could be calculated (SD 0.778). Most of the rated medical specialties were orthopedists, dermatologists, and gynecologists. Two thirds of all ratings could be assigned to the best category, "very good". Female physicians had significantly better ratings than did their male colleagues (P<.001). Additionally, significant rating differences existed between medical specialties (P<.001). It could further be shown that older patients gave better ratings than did their younger counterparts (P<.001). The same was true for patients covered by private health insurance; they gave more favorable evaluations than did patients covered by statutory health insurance (P<.001). 
No significant rating differences could be detected between female and male patients (P=.505). The likelihood of a good rating was shown to increase with a rising number of both physician and patient ratings. CONCLUSIONS Our findings are mostly in line with those published for PRWs from the United States. It could be shown that most of the ratings were positive, and differences existed regarding sociodemographic characteristics of both physicians and patients. An increase in the usage of PRWs might contribute to reducing the lack of publicly available information on physician quality. However, it remains unclear whether PRWs have the potential to reflect the quality of care offered by individual health care providers. Further research should assess in more detail the motivation of patients who rate their physicians online.
Affiliation(s)
- Martin Emmert
- Institute of Management-IFM, School of Business and Economics, Friedrich-Alexander-University Erlangen-Nuremberg, Nuremberg, Germany.
28
Emmert M, Sander U, Pisch F. Eight questions about physician-rating websites: a systematic review. J Med Internet Res 2013; 15:e24. [PMID: 23372115 PMCID: PMC3636311 DOI: 10.2196/jmir.2360] [Citation(s) in RCA: 100] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2012] [Revised: 11/05/2012] [Accepted: 11/09/2012] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Physician-rating websites (PRWs) are currently gaining in popularity because they increase transparency in the health care system. However, research on the characteristics and content of these portals remains limited. OBJECTIVE To identify and synthesize published evidence in peer-reviewed journals regarding frequently discussed issues about PRWs. METHODS Peer-reviewed English and German language literature was searched in seven databases (Medline (via PubMed), the Cochrane Library, Business Source Complete, ABI/Inform Complete, PsycInfo, Scopus, and ISI Web of Knowledge) without any time constraints. Additionally, reference lists of included studies were screened to ensure completeness. The following eight previously defined questions were addressed: 1) What percentage of physicians has been rated? 2) What is the average number of ratings on PRWs? 3) Are there any differences among rated physicians related to socioeconomic status? 4) Are ratings more likely to be positive or negative? 5) What significance do patient narratives have? 6) How should physicians deal with PRWs? 7) What major shortcomings do PRWs have? 8) What recommendations can be made for further improvement of PRWs? RESULTS Twenty-four articles published in peer-reviewed journals met our inclusion criteria. Most studies were published by US (n=13) and German (n=8) researchers; however, the focus differed considerably. The current usage of PRWs is still low but is increasing. International data show that 1 out of 6 physicians has been rated, and approximately 90% of all ratings on PRWs were positive. Although often a concern, we could not find any evidence of "doctor-bashing". Physicians should not ignore these websites, but rather, monitor the information available and use it for internal and external purposes. Several shortcomings limit the significance of the results published on PRWs; some recommendations to address these limitations are presented.
CONCLUSIONS Although the number of publications is still low, PRWs are gaining more attention in research. However, the current state of PRWs remains deficient, both in the United States and in Germany. Further research is necessary to increase the quality of the websites, especially from the patients' perspective.
Affiliation(s)
- Martin Emmert
- Institute of Management IFM, School of Business and Economics, Friedrich-Alexander-University Erlangen-Nuremberg, Nuremberg 90411, Germany.
29
Gao GG, McCullough JS, Agarwal R, Jha AK. A changing landscape of physician quality reporting: analysis of patients' online ratings of their physicians over a 5-year period. J Med Internet Res 2012; 14:e38. [PMID: 22366336 PMCID: PMC3374528 DOI: 10.2196/jmir.2003] [Citation(s) in RCA: 236] [Impact Index Per Article: 19.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2011] [Revised: 02/15/2012] [Accepted: 02/15/2012] [Indexed: 11/15/2022] Open
Abstract
Background Americans increasingly post and consult online physician rankings, yet we know little about this new phenomenon of public physician quality reporting. Physicians worry these rankings will become an outlet for disgruntled patients. Objective To describe trends in patients’ online ratings over time and across specialties, to identify which physician characteristics influence online ratings, and to examine how the value of ratings reflects physician quality. Methods We used data from RateMDs.com, which included over 386,000 national ratings from 2005 to 2010 and provided insight into the evolution of patients’ online ratings. We obtained physician demographic data from the US Department of Health and Human Services’ Area Resource File. Finally, we matched patients’ ratings with physician-level data from the Virginia Medical Board and examined the probability of being rated and resultant rating levels. Results We estimate that 1 in 6 practicing US physicians had received an online review by January 2010. Obstetrician/gynecologists were twice as likely to be rated (P < .001) as other physicians. Online reviews were generally quite positive (mean 3.93 on a scale of 1 to 5). Based on the Virginia physician population, long-time graduates were more likely to be rated, while physicians who graduated in recent years received higher average ratings (P < .001). Patients gave slightly higher ratings to board-certified physicians (P = .04), those who graduated from highly rated medical schools (P = .002), and those without malpractice claims (P = .1). Conclusion Online physician ratings are rapidly growing in popularity and becoming commonplace, with no evidence that they are dominated by disgruntled patients. Statistically significant correlations exist between the value of ratings and physician experience, board certification, education, and malpractice claims, suggesting a positive correlation between online ratings and physician quality.
However, the magnitude is small. The average number of ratings per physician is still low, and most rating variation reflects evaluations of punctuality and staff. Understanding whether these ratings truly reflect better care, and how they are used, will be critically important.
Affiliation(s)
- Guodong Gordon Gao
- Center for Health Information and Decision Systems, Robert H Smith School of Business, University of Maryland, College Park, MD 20742, USA.
|
30
|
Strech D. Ethical principles for physician rating sites. J Med Internet Res 2011; 13:e113. [PMID: 22146737 PMCID: PMC3278099 DOI: 10.2196/jmir.1899] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2011] [Revised: 10/21/2011] [Accepted: 10/25/2011] [Indexed: 11/23/2022] Open
Abstract
During the last 5 years, an ethical debate has emerged, often in public media, about the potential positive and negative effects of physician rating sites and whether physician rating sites created by insurance companies or government agencies are ethical in their current states. Due to the lack of direct evidence of physician rating sites' effects on physicians' performance, patient outcomes, or the public's trust in health care, most contributions refer to normative arguments, hypothetical effects, or indirect evidence. This paper aims, first, to structure the ethical debate about the basic concept of physician rating sites: allowing patients to rate, comment on, and discuss physicians' performance online, visible to everyone. It thus provides a more thorough and transparent starting point for further discussion and decision making on physician rating sites: what should physicians and health policy decision makers take into account when discussing the basic concept of physician rating sites and its possible implications for the physician-patient relationship? Second, it discusses where and how preexisting evidence from the partly related field of public reporting of physician performance can serve as an indicator of specific needs for evaluative research on physician rating sites. This paper defines the ethical principles of patient welfare, patient autonomy, physician welfare, and social justice in the context of physician rating sites. It also outlines basic conditions for a fair decision-making process concerning the implementation and regulation of physician rating sites, namely, transparency, justification, participation, minimization of conflicts of interest, and openness to revision. 
Beyond the other issues described in this paper, one trade-off presents a special challenge and will play an important role in decisions about more or less restrictive regulation of physician rating sites: the potential psychological and financial harms to physicians that can result from physician rating sites must be contained without limiting the potential benefits for patients with respect to health, health literacy, and equity.
Affiliation(s)
- Daniel Strech
- Institute for History, Ethics and Philosophy of Medicine, CELLS - Centre for Ethics and Law in the Life Sciences, Hannover Medical School, Hannover, Germany.
|