1
Janssen A, Donnelly C, Shaw T. A Taxonomy for Health Information Systems. J Med Internet Res 2024;26:e47682. [PMID: 38820575] [DOI: 10.2196/47682]
Abstract
The health sector is highly digitized, enabling the collection of vast quantities of electronic data about health and well-being. These data are collected by a diverse array of information and communication technologies, including systems used by health care organizations, consumer and community sources such as information collected on the web, and passive collection by technologies such as wearables and devices. Understanding the breadth of information technologies that collect these data, and how the data can be actioned, is a challenge for the significant portion of the digital health workforce that interacts with health data as part of its duties but is not made up of informatics experts. This viewpoint aims to present a taxonomy categorizing common information and communication technologies that collect electronic data. An initial classification of key information systems collecting electronic health data was undertaken via a rapid review of the literature. Subsequently, a purposeful search of the scholarly and gray literature was undertaken to extract key information about the systems within each category, generate definitions of the systems, and describe their strengths and limitations.
Affiliation(s)
- Anna Janssen
- Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Candice Donnelly
- Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Tim Shaw
- Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
2
Gillespie A, Reader TW. Online patient feedback as a safety valve: An automated language analysis of unnoticed and unresolved safety incidents. Risk Anal 2022;43:1463-1477. [PMID: 35945156] [DOI: 10.1111/risa.14002]
Abstract
Safety reporting systems are widely used in healthcare to identify risks to patient safety, but their effectiveness is undermined if staff do not notice or report incidents. Patients, however, might observe and report these overlooked incidents because they experience the consequences, are highly motivated, and are independent of the organization. Online patient feedback may be especially valuable because it is a reporting channel that allows patients to report without fear of consequence (e.g., anonymously). Harnessing this potential is challenging because online feedback is unstructured and lacks demonstrable validity and added value. Accordingly, we developed an automated language analysis method for measuring the likelihood of patient-reported safety incidents in online patient feedback. Feedback from patients and families (n = 146,685; words = 22,191,427; years = 2013-2019) about acute NHS trusts (hospital conglomerates; n = 134) in England was analyzed. The automated measure had good precision (0.69) and excellent recall (0.98) in identifying incidents; was independent of staff-reported incidents (r = -0.04 to 0.19); and was associated with hospital-level mortality rates (z = 3.87; p < 0.001). The identified safety incidents were often reported as unnoticed (89%) or unresolved (21%), suggesting that patients use online platforms to give visibility to safety concerns they believe have been missed or ignored. Online stakeholder feedback is akin to a safety valve: being independent and unconstrained, it provides an outlet for reporting safety issues that may have gone unnoticed or unresolved within formal channels.
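The precision (0.69) and recall (0.98) reported above follow the standard definitions for binary classification; a minimal sketch of how such scores are computed from predicted and true incident labels (illustrative only, not the authors' pipeline):

```python
def precision_recall(pred, true):
    """Precision and recall for binary incident labels (1 = safety incident)."""
    tp = sum(1 for p, t in zip(pred, true) if p == 1 and t == 1)  # true positives
    fp = sum(1 for p, t in zip(pred, true) if p == 1 and t == 0)  # false alarms
    fn = sum(1 for p, t in zip(pred, true) if p == 0 and t == 1)  # missed incidents
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

Precision is the share of flagged reviews that truly describe incidents; recall is the share of true incidents that were flagged.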
Affiliation(s)
- Alex Gillespie
- Department of Psychological & Behavioural Science, London School of Economics, London, UK
- Department of Psychology, Oslo New University College, Oslo, Norway
- Tom W Reader
- Department of Psychological & Behavioural Science, London School of Economics, London, UK
3
Nursing Homes: Affiliation to Large Chains, Quality and Public–Private Collaboration. Healthcare (Basel) 2022;10:1431. [PMID: 36011087] [PMCID: PMC9408552] [DOI: 10.3390/healthcare10081431]
Abstract
The objective of this paper was to estimate the influence of being affiliated with a nursing home (NH) chain on perceived consumer quality, and whether this relationship is affected by maintaining a collaboration agreement with public administrations. We used a combination of theoretical foundations: (1) from the consumer perspective, we focussed on online reviews of NH quality; (2) from the industrial organisation literature, we proposed arguments regarding the advantages and disadvantages of belonging to a chain; and (3) the theory of transaction costs was used to explain public–private collaboration. The study was carried out on a sample of 642 chain-affiliated Spanish NHs, with quality scores downloaded from the website topMayores.es. We distinguished between the six largest chains and the rest and applied linear regression models. The results show that NHs affiliated with one of the largest chains obtained worse quality scores in the assessments made by users, although quality scores improved for the largest chains involved in an agreement with the public administration.
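The linear regression models applied here can be illustrated with the closed-form one-predictor ordinary least squares fit (a generic sketch, e.g. a quality score regressed on a single hospital characteristic, not the paper's full specification):

```python
def ols_slope_intercept(x, y):
    """Ordinary least squares for y = a + b*x with a single predictor."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Slope: covariance of x and y divided by variance of x.
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    a = my - b * mx  # intercept passes through the means
    return a, b
```

The paper's models would add further controls; this sketch only shows the core estimator.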
4
Ramasubramanian H, Joshi S, Krishnan R. Wisdom of the Experts Versus Opinions of the Crowd in Hospital Quality Ratings: Analysis of Hospital Compare Star Ratings and Google Star Ratings. J Med Internet Res 2022;24:e34030. [PMID: 35881418] [PMCID: PMC9364164] [DOI: 10.2196/34030]
Abstract
Background: Popular web-based portals provide free and convenient access to user-generated hospital quality reviews. The Centers for Medicare & Medicaid Services (CMS) also publishes Hospital Compare Star Ratings (HCSR), a comprehensive expert rating of US hospital quality that aggregates multiple measures of quality. CMS revised the HCSR methodology in 2021. Because easily accessible, crowdsourced hospital ratings influence consumers' hospital choices, it is important to analyze the degree to which web-based ratings reflect expert measures of hospital quality.
Objective: This study aims to assess the association between web-based Google hospital quality ratings, which reflect the opinions of the crowd, and HCSR, which represent the wisdom of the experts, as well as the changes in these associations following the 2021 revision of the CMS rating system.
Methods: We extracted Google star ratings using the application programming interface in June 2020. The HCSR data of April 2020 (before the revision of the HCSR methodology) and April 2021 (after the revision) were obtained from the CMS Hospital Compare website. We also extracted scores for the individual components of hospital quality for each hospital in our sample using the code provided by Hospital Compare. Fractional response models were used to estimate the association between Google star ratings and HCSR as well as individual components of quality (n=2619).
Results: The Google star ratings are statistically associated with HCSR (P<.001) after controlling for hospital-level effects; however, they are not associated with clinical components of HCSR that require medical expertise to evaluate, such as safety of care (P=.30) or readmission (P=.52). The revised CMS rating system ameliorates previous partial inconsistencies in the association between Google star ratings and the quality component scores of HCSR.
Conclusions: Crowdsourced Google star hospital ratings are informative regarding expert CMS overall hospital quality ratings and the individual quality components that are easier for patients to evaluate. Improvements in hospital quality metrics that require expertise to assess, such as safety of care and readmission, may not lead to improved Google star ratings. Hospitals can benefit from using crowdsourced ratings as timely and easily available indicators of their quality performance while recognizing their limitations and biases.
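The fractional response models mentioned in the Methods treat a bounded outcome, such as a star rating rescaled to [0, 1], as a fraction with a logistic conditional mean. Below is a minimal NumPy sketch of such an estimator fitted by quasi-likelihood gradient ascent; the rescaling rule and fitting details are illustrative assumptions, not the authors' exact specification:

```python
import numpy as np

def fit_fractional_logit(X, y, lr=0.5, iters=20000):
    """Fractional logit: E[y|X] = sigmoid(X @ beta), y a fraction in [0, 1].

    X: (n, k) design matrix including an intercept column.
    y: outcomes rescaled to [0, 1], e.g. (stars - 1) / 4 for 1-5 star ratings.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted mean fraction
        beta += lr * X.T @ (y - p) / len(y)   # Bernoulli quasi-score step
    return beta
```

Because the mean is a sigmoid, fitted values stay inside [0, 1], which is the point of using a fractional model rather than plain OLS on a bounded rating.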
Affiliation(s)
- Hari Ramasubramanian
- Accounting Department, Frankfurt School of Finance and Management, Frankfurt am Main, Germany
- Satish Joshi
- College of Agriculture & Natural Resources, Department of Agricultural, Food, and Resource Economics, Michigan State University, East Lansing, MI, United States
- Ranjani Krishnan
- Accounting and Information Systems, Broad College of Business, Michigan State University, East Lansing, MI, United States
5
Patel R, Tseng CC, Choudhry HS, Lemdani MS, Talmor G, Paskhover B. Applying Machine Learning to Determine Popular Patient Questions About Mentoplasty on Social Media. Aesthetic Plast Surg 2022;46:2273-2279. [PMID: 35201377] [DOI: 10.1007/s00266-022-02808-8]
Abstract
PURPOSE: Patient satisfaction in esthetic surgery often necessitates synergy between patient and physician goals. The authors aim to characterize patient questions before and after mentoplasty to reflect the patient perspective and enhance the physician-patient relationship.
METHODS: Mentoplasty reviews were gathered from Realself.com using an automated web crawler. Questions were defined as preoperative or postoperative. Each question was reviewed and assigned by the authors to general categories that best reflect its overall theme. A machine learning approach was used to create a list of the most common patient questions asked both preoperatively and postoperatively.
RESULTS: A total of 2,012 questions were collected. Of these, 1,708 (84.9%) were preoperative and 304 (15.1%) were postoperative. The most common preoperative category was "eligibility for surgery" (86.3%), followed by "surgical techniques and logistics" (5.4%) and "cost" (5.4%). The most common postoperative categories were "options to revise surgery" (44.1%), "symptoms after surgery" (27.0%), and "appearance" (26.3%). Our machine learning approach generated the 10 most common pre- and postoperative questions about mentoplasty. The majority of preoperative questions dealt with potential surgical indications, while most postoperative questions principally addressed appearance.
CONCLUSIONS: The majority of mentoplasty patient questions were preoperative and asked about eligibility for surgery. Our study also found that a significant proportion of postoperative questions inquired about revision, suggesting a small but nontrivial subset of patients highly dissatisfied with their results. Our handout of the 10 most common preoperative and postoperative questions can help better inform physicians about the patient perspective on mentoplasty throughout the surgical course.
Level of Evidence V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
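As a purely hypothetical illustration of the categorization task (the study assigned categories by author review plus a machine learning approach, not by keyword rules; these category names come from the abstract but the keywords are invented):

```python
# Hypothetical keyword rules for a few of the abstract's categories.
CATEGORY_KEYWORDS = {
    "eligibility for surgery": ["candidate", "eligible", "am i a good"],
    "cost": ["cost", "price", "how much"],
    "options to revise surgery": ["revision", "revise", "redo"],
}

def categorize_question(text):
    """Return the first category whose keywords appear in the question."""
    text = text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"
```

A real pipeline would learn these associations from the annotated questions rather than hard-coding them.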
Affiliation(s)
- Rushi Patel
- Department of Otolaryngology - Head and Neck Surgery, Rutgers New Jersey Medical School, 90 Bergen St., Suite 8100, Newark, NJ, 07103, USA
- Christopher C Tseng
- Department of Otolaryngology - Head and Neck Surgery, Rutgers New Jersey Medical School, 90 Bergen St., Suite 8100, Newark, NJ, 07103, USA
- Hannaan S Choudhry
- Department of Otolaryngology - Head and Neck Surgery, Rutgers New Jersey Medical School, 90 Bergen St., Suite 8100, Newark, NJ, 07103, USA
- Mehdi S Lemdani
- Department of Otolaryngology - Head and Neck Surgery, Rutgers New Jersey Medical School, 90 Bergen St., Suite 8100, Newark, NJ, 07103, USA
- Guy Talmor
- Department of Otolaryngology - Head and Neck Surgery, Rutgers New Jersey Medical School, 90 Bergen St., Suite 8100, Newark, NJ, 07103, USA
- Boris Paskhover
- Department of Otolaryngology - Head and Neck Surgery, Rutgers New Jersey Medical School, 90 Bergen St., Suite 8100, Newark, NJ, 07103, USA
6
Rahim AIA, Ibrahim MI, Chua SL, Musa KI. Hospital Facebook Reviews Analysis Using a Machine Learning Sentiment Analyzer and Quality Classifier. Healthcare (Basel) 2021;9:1679. [PMID: 34946405] [PMCID: PMC8701188] [DOI: 10.3390/healthcare9121679]
Abstract
While experts have recognised the significance and necessity of social media integration in healthcare, no systematic method has been devised in Malaysia or Southeast Asia to include social media input in the hospital quality improvement process. The goal of this work is to explain how to develop a machine learning system for classifying Facebook reviews of public hospitals in Malaysia using service quality (SERVQUAL) dimensions and sentiment analysis. We developed a Machine Learning Quality Classifier (MLQC) based on the SERVQUAL model and a Machine Learning Sentiment Analyzer (MLSA) by manually annotating multiple batches of randomly chosen reviews. Logistic regression (LR), naive Bayes (NB), support vector machine (SVM), and other methods were used to train the classifiers, and the performance of each was tested using 5-fold cross-validation. For topic classification, the average F1-score was between 0.687 and 0.757 across all models. In 5-fold cross-validation of each SERVQUAL dimension and in sentiment analysis, SVM consistently outperformed the other methods. The study demonstrates how supervised learning can be used to automatically identify SERVQUAL domains and sentiments from patient experiences on a hospital's Facebook page. Malaysian healthcare providers can use these content analysis technologies to gather and assess data on patient care and improve hospital quality of care.
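The 5-fold cross-validation used to score each classifier partitions the labelled reviews into five disjoint test folds, training on the remainder each time. A minimal sketch of deterministic fold generation (libraries such as scikit-learn provide equivalent utilities; the study's tooling is not stated):

```python
def kfold_splits(n, k=5):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.

    Test folds are interleaved (i, i+k, i+2k, ...), mutually disjoint, and
    together cover all n examples exactly once; the training indices are the
    complement of each test fold.
    """
    for i in range(k):
        test = list(range(i, n, k))
        train = [j for j in range(n) if j % k != i]
        yield train, test
```

A per-fold score (e.g. F1) is computed on each held-out fold and the five scores are averaged, as in the F1 range reported above.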
Affiliation(s)
- Afiq Izzudin A. Rahim
- Department of Community Medicine, School of Medical Science, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu 16150, Kelantan, Malaysia
- Mohd Ismail Ibrahim
- Department of Community Medicine, School of Medical Science, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu 16150, Kelantan, Malaysia
- Sook-Ling Chua
- Faculty of Computing and Informatics, Multimedia University, Persiaran Multimedia, Cyberjaya 63100, Selangor, Malaysia
- Kamarul Imran Musa
- Department of Community Medicine, School of Medical Science, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu 16150, Kelantan, Malaysia
7
Rahim AIA, Ibrahim MI, Musa KI, Chua SL, Yaacob NM. Patient Satisfaction and Hospital Quality of Care Evaluation in Malaysia Using SERVQUAL and Facebook. Healthcare (Basel) 2021;9:1369. [PMID: 34683050] [PMCID: PMC8544585] [DOI: 10.3390/healthcare9101369]
Abstract
Reviews posted on social media sites, dubbed patient online reviews (POR), have been proposed as a new method for assessing patient satisfaction and monitoring quality of care. However, the unstructured nature of POR data derived from social media creates a number of challenges. The objectives of this research were to identify service quality (SERVQUAL) dimensions automatically from hospital Facebook reviews using a machine learning classifier, and to examine their associations with patient dissatisfaction. From January 2017 to December 2019, empirical research was conducted in which POR were gathered from the official Facebook pages of Malaysian public hospitals. A machine learning topic classifier using supervised learning was developed to find SERVQUAL dimensions in POR, and logistic regression analysis was used to address the study objective. It was discovered that 73.5% of patients were satisfied with the public hospital service, whereas 26.5% were dissatisfied. The SERVQUAL dimensions identified were tangibles (13.2% of reviews), reliability (68.9%), responsiveness (6.8%), assurance (19.5%), and empathy (64.3%). After controlling for hospital variables, all SERVQUAL dimensions except tangibles and assurance were shown to be significantly related to patient dissatisfaction (reliability, p < 0.001; responsiveness, p = 0.016; and empathy, p < 0.001). Rural hospitals had a higher probability of patient dissatisfaction (p < 0.001). POR, assisted by machine learning technologies, therefore provide a pragmatic and feasible way of capturing patient perceptions of care quality and supplementing conventional patient satisfaction surveys. The findings offer critical information that will assist healthcare authorities in capitalising on POR by monitoring and evaluating the quality of services in real time.
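The logistic regression described above estimates, for each SERVQUAL dimension, the odds of dissatisfaction when that dimension is mentioned, adjusted for hospital variables. The unadjusted analogue of that quantity is the odds ratio from a 2x2 table; a sketch with hypothetical counts, not the study's data:

```python
def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio from a 2x2 table.

    a: dissatisfied reviews mentioning the dimension
    b: satisfied reviews mentioning the dimension
    c: dissatisfied reviews not mentioning it
    d: satisfied reviews not mentioning it
    """
    return (a * d) / (b * c)

# Hypothetical counts: an odds ratio above 1 means mentioning the dimension
# is associated with higher odds of dissatisfaction.
example_or = odds_ratio(30, 70, 20, 180)
```

A fitted logistic regression reports the same kind of quantity per predictor (as exponentiated coefficients), but adjusted for the other covariates.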
Affiliation(s)
- Afiq Izzudin A. Rahim
- Department of Community Medicine, School of Medical Science, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu 16150, Kelantan, Malaysia
- Mohd Ismail Ibrahim
- Department of Community Medicine, School of Medical Science, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu 16150, Kelantan, Malaysia
- Kamarul Imran Musa
- Department of Community Medicine, School of Medical Science, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu 16150, Kelantan, Malaysia
- Sook-Ling Chua
- Faculty of Computing and Informatics, Multimedia University, Persiaran Multimedia, Cyberjaya 63100, Selangor, Malaysia
- Najib Majdi Yaacob
- Unit of Biostatistics and Research Methodology, Health Campus, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu 16150, Kelantan, Malaysia
8
Rahim AIA, Ibrahim MI, Musa KI, Chua SL, Yaacob NM. Assessing Patient-Perceived Hospital Service Quality and Sentiment in Malaysian Public Hospitals Using Machine Learning and Facebook Reviews. Int J Environ Res Public Health 2021;18:9912. [PMID: 34574835] [PMCID: PMC8466628] [DOI: 10.3390/ijerph18189912]
Abstract
Social media is emerging as a new avenue for hospitals and patients to solicit input on the quality of care. However, social media data are unstructured and enormous in volume, and no empirical research on the use of social media data and perceived hospital quality of care based on patient online reviews has been performed in Malaysia. The purpose of this study was to investigate the determinants of positive sentiment expressed in hospital Facebook reviews in Malaysia, as well as the association between hospital accreditation and the sentiments expressed in those reviews. From 2017 to 2019, we retrieved comments from the official Facebook pages of 48 public hospitals. We used machine learning to build a sentiment analyzer and a service quality (SERVQUAL) classifier that automatically classify the sentiment and SERVQUAL dimensions of each review, and we used logistic regression analysis to address our objectives. We evaluated a total of 1852 reviews; our machine learning sentiment analyzer classified 72.1% as positive and 27.9% as negative. Our machine learning SERVQUAL classifier labelled 240 reviews as tangibles, 1257 as reliability, 125 as responsiveness, 356 as assurance, and 1174 as empathy. After adjusting for hospital characteristics, all SERVQUAL dimensions except tangibles were associated with positive sentiment; however, no significant relationship between hospital accreditation and online sentiment was discovered. Facebook reviews powered by machine learning algorithms provide valuable, real-time data that may be missed by traditional hospital quality assessments, and online patient reviews offer a hitherto untapped indication of quality that may benefit all healthcare stakeholders. Our results confirm prior studies and support the use of Facebook reviews as an adjunct method for assessing the quality of hospital services in Malaysia.
Affiliation(s)
- Afiq Izzudin A. Rahim
- Department of Community Medicine, School of Medical Science, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu 16150, Kelantan, Malaysia
- Mohd Ismail Ibrahim
- Department of Community Medicine, School of Medical Science, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu 16150, Kelantan, Malaysia
- Kamarul Imran Musa
- Department of Community Medicine, School of Medical Science, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu 16150, Kelantan, Malaysia
- Sook-Ling Chua
- Faculty of Computing and Informatics, Multimedia University, Persiaran Multimedia, Cyberjaya 63100, Selangor, Malaysia
- Najib Majdi Yaacob
- Unit of Biostatistics and Research Methodology, Health Campus, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu 16150, Kelantan, Malaysia